Oct 9 01:09:00.896410 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 9 01:09:00.896432 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 23:34:40 -00 2024
Oct 9 01:09:00.896442 kernel: KASLR enabled
Oct 9 01:09:00.896474 kernel: efi: EFI v2.7 by EDK II
Oct 9 01:09:00.896483 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Oct 9 01:09:00.896489 kernel: random: crng init done
Oct 9 01:09:00.896496 kernel: secureboot: Secure boot disabled
Oct 9 01:09:00.896502 kernel: ACPI: Early table checksum verification disabled
Oct 9 01:09:00.896508 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Oct 9 01:09:00.896518 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 9 01:09:00.896524 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:09:00.896530 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:09:00.896536 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:09:00.896542 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:09:00.896550 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:09:00.896558 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:09:00.896565 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:09:00.896571 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:09:00.896578 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:09:00.896584 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 9 01:09:00.896591 kernel: NUMA: Failed to initialise from firmware
Oct 9 01:09:00.896597 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 9 01:09:00.896604 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Oct 9 01:09:00.896610 kernel: Zone ranges:
Oct 9 01:09:00.896616 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 9 01:09:00.896624 kernel: DMA32 empty
Oct 9 01:09:00.896630 kernel: Normal empty
Oct 9 01:09:00.896637 kernel: Movable zone start for each node
Oct 9 01:09:00.896643 kernel: Early memory node ranges
Oct 9 01:09:00.896650 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Oct 9 01:09:00.896656 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Oct 9 01:09:00.896662 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Oct 9 01:09:00.896669 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 9 01:09:00.896675 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 9 01:09:00.896681 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 9 01:09:00.896688 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 9 01:09:00.896695 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 9 01:09:00.896702 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 9 01:09:00.896709 kernel: psci: probing for conduit method from ACPI.
Oct 9 01:09:00.896716 kernel: psci: PSCIv1.1 detected in firmware.
Oct 9 01:09:00.896725 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 9 01:09:00.896732 kernel: psci: Trusted OS migration not required
Oct 9 01:09:00.896739 kernel: psci: SMC Calling Convention v1.1
Oct 9 01:09:00.896747 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 9 01:09:00.896754 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 9 01:09:00.896761 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 9 01:09:00.896768 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 9 01:09:00.896775 kernel: Detected PIPT I-cache on CPU0
Oct 9 01:09:00.896782 kernel: CPU features: detected: GIC system register CPU interface
Oct 9 01:09:00.896789 kernel: CPU features: detected: Hardware dirty bit management
Oct 9 01:09:00.896796 kernel: CPU features: detected: Spectre-v4
Oct 9 01:09:00.896802 kernel: CPU features: detected: Spectre-BHB
Oct 9 01:09:00.896809 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 9 01:09:00.896818 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 9 01:09:00.896825 kernel: CPU features: detected: ARM erratum 1418040
Oct 9 01:09:00.896832 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 9 01:09:00.896838 kernel: alternatives: applying boot alternatives
Oct 9 01:09:00.896852 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e
Oct 9 01:09:00.896860 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 01:09:00.896867 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 9 01:09:00.896874 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 01:09:00.896881 kernel: Fallback order for Node 0: 0
Oct 9 01:09:00.896888 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 9 01:09:00.896895 kernel: Policy zone: DMA
Oct 9 01:09:00.896903 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 01:09:00.896911 kernel: software IO TLB: area num 4.
Oct 9 01:09:00.896918 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Oct 9 01:09:00.896925 kernel: Memory: 2386404K/2572288K available (10240K kernel code, 2184K rwdata, 8092K rodata, 39552K init, 897K bss, 185884K reserved, 0K cma-reserved)
Oct 9 01:09:00.896932 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 9 01:09:00.896939 kernel: trace event string verifier disabled
Oct 9 01:09:00.896946 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 01:09:00.896953 kernel: rcu: RCU event tracing is enabled.
Oct 9 01:09:00.896960 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 9 01:09:00.896968 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 01:09:00.896975 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 01:09:00.896981 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 01:09:00.896990 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 9 01:09:00.896997 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 9 01:09:00.897004 kernel: GICv3: 256 SPIs implemented
Oct 9 01:09:00.897010 kernel: GICv3: 0 Extended SPIs implemented
Oct 9 01:09:00.897017 kernel: Root IRQ handler: gic_handle_irq
Oct 9 01:09:00.897024 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 9 01:09:00.897031 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 9 01:09:00.897038 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 9 01:09:00.897045 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Oct 9 01:09:00.897052 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Oct 9 01:09:00.897059 kernel: GICv3: using LPI property table @0x00000000400f0000
Oct 9 01:09:00.897068 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Oct 9 01:09:00.897075 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 01:09:00.897082 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 01:09:00.897089 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 9 01:09:00.897097 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 9 01:09:00.897104 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 9 01:09:00.897111 kernel: arm-pv: using stolen time PV
Oct 9 01:09:00.897118 kernel: Console: colour dummy device 80x25
Oct 9 01:09:00.897125 kernel: ACPI: Core revision 20230628
Oct 9 01:09:00.897132 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 9 01:09:00.897139 kernel: pid_max: default: 32768 minimum: 301
Oct 9 01:09:00.897148 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 01:09:00.897155 kernel: landlock: Up and running.
Oct 9 01:09:00.897162 kernel: SELinux: Initializing.
Oct 9 01:09:00.897169 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 01:09:00.897176 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 01:09:00.897183 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:09:00.897190 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:09:00.897197 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 01:09:00.897205 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 01:09:00.897213 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 9 01:09:00.897220 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 9 01:09:00.897228 kernel: Remapping and enabling EFI services.
Oct 9 01:09:00.897235 kernel: smp: Bringing up secondary CPUs ...
Oct 9 01:09:00.897243 kernel: Detected PIPT I-cache on CPU1
Oct 9 01:09:00.897250 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 9 01:09:00.897257 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Oct 9 01:09:00.897264 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 01:09:00.897272 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 9 01:09:00.897280 kernel: Detected PIPT I-cache on CPU2
Oct 9 01:09:00.897288 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 9 01:09:00.897300 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Oct 9 01:09:00.897309 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 01:09:00.897316 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 9 01:09:00.897323 kernel: Detected PIPT I-cache on CPU3
Oct 9 01:09:00.897331 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 9 01:09:00.897339 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Oct 9 01:09:00.897347 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 01:09:00.897355 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 9 01:09:00.897363 kernel: smp: Brought up 1 node, 4 CPUs
Oct 9 01:09:00.897371 kernel: SMP: Total of 4 processors activated.
Oct 9 01:09:00.897378 kernel: CPU features: detected: 32-bit EL0 Support
Oct 9 01:09:00.897386 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 9 01:09:00.897394 kernel: CPU features: detected: Common not Private translations
Oct 9 01:09:00.897402 kernel: CPU features: detected: CRC32 instructions
Oct 9 01:09:00.897409 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 9 01:09:00.897418 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 9 01:09:00.897425 kernel: CPU features: detected: LSE atomic instructions
Oct 9 01:09:00.897433 kernel: CPU features: detected: Privileged Access Never
Oct 9 01:09:00.897440 kernel: CPU features: detected: RAS Extension Support
Oct 9 01:09:00.897468 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 9 01:09:00.897495 kernel: CPU: All CPU(s) started at EL1
Oct 9 01:09:00.897504 kernel: alternatives: applying system-wide alternatives
Oct 9 01:09:00.897511 kernel: devtmpfs: initialized
Oct 9 01:09:00.897519 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 01:09:00.897530 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 9 01:09:00.897537 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 01:09:00.897545 kernel: SMBIOS 3.0.0 present.
Oct 9 01:09:00.897552 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Oct 9 01:09:00.897560 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 01:09:00.897567 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 9 01:09:00.897575 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 9 01:09:00.897582 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 9 01:09:00.897590 kernel: audit: initializing netlink subsys (disabled)
Oct 9 01:09:00.897599 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Oct 9 01:09:00.897607 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 01:09:00.897615 kernel: cpuidle: using governor menu
Oct 9 01:09:00.897622 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 9 01:09:00.897630 kernel: ASID allocator initialised with 32768 entries
Oct 9 01:09:00.897638 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 01:09:00.897645 kernel: Serial: AMBA PL011 UART driver
Oct 9 01:09:00.897653 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 9 01:09:00.897661 kernel: Modules: 0 pages in range for non-PLT usage
Oct 9 01:09:00.897670 kernel: Modules: 508992 pages in range for PLT usage
Oct 9 01:09:00.897677 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 9 01:09:00.897685 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 9 01:09:00.897693 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 9 01:09:00.897701 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 9 01:09:00.897709 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 01:09:00.897716 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 01:09:00.897724 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 9 01:09:00.897732 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 9 01:09:00.897739 kernel: ACPI: Added _OSI(Module Device)
Oct 9 01:09:00.897749 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 01:09:00.897757 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 01:09:00.897764 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 01:09:00.897772 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 01:09:00.897780 kernel: ACPI: Interpreter enabled
Oct 9 01:09:00.897787 kernel: ACPI: Using GIC for interrupt routing
Oct 9 01:09:00.897795 kernel: ACPI: MCFG table detected, 1 entries
Oct 9 01:09:00.897803 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 9 01:09:00.897810 kernel: printk: console [ttyAMA0] enabled
Oct 9 01:09:00.897820 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 01:09:00.897968 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 01:09:00.898047 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 9 01:09:00.898119 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 9 01:09:00.898188 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 9 01:09:00.898258 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 9 01:09:00.898269 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 9 01:09:00.898279 kernel: PCI host bridge to bus 0000:00
Oct 9 01:09:00.898364 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 9 01:09:00.898431 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 9 01:09:00.898528 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 9 01:09:00.898595 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 01:09:00.898683 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 9 01:09:00.898773 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 9 01:09:00.898861 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 9 01:09:00.898938 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 9 01:09:00.899010 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 9 01:09:00.899083 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 9 01:09:00.899155 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 9 01:09:00.899228 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 9 01:09:00.899295 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 9 01:09:00.899358 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 9 01:09:00.899425 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 9 01:09:00.899436 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 9 01:09:00.899444 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 9 01:09:00.899470 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 9 01:09:00.899479 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 9 01:09:00.899487 kernel: iommu: Default domain type: Translated
Oct 9 01:09:00.899497 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 9 01:09:00.899505 kernel: efivars: Registered efivars operations
Oct 9 01:09:00.899513 kernel: vgaarb: loaded
Oct 9 01:09:00.899521 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 9 01:09:00.899528 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 01:09:00.899536 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 01:09:00.899544 kernel: pnp: PnP ACPI init
Oct 9 01:09:00.899624 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 9 01:09:00.899635 kernel: pnp: PnP ACPI: found 1 devices
Oct 9 01:09:00.899646 kernel: NET: Registered PF_INET protocol family
Oct 9 01:09:00.899653 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 9 01:09:00.899661 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 9 01:09:00.899669 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 01:09:00.899677 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 01:09:00.899685 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 9 01:09:00.899693 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 9 01:09:00.899701 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 01:09:00.899711 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 01:09:00.899718 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 01:09:00.899726 kernel: PCI: CLS 0 bytes, default 64
Oct 9 01:09:00.899734 kernel: kvm [1]: HYP mode not available
Oct 9 01:09:00.899741 kernel: Initialise system trusted keyrings
Oct 9 01:09:00.899749 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 9 01:09:00.899757 kernel: Key type asymmetric registered
Oct 9 01:09:00.899764 kernel: Asymmetric key parser 'x509' registered
Oct 9 01:09:00.899772 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 9 01:09:00.899780 kernel: io scheduler mq-deadline registered
Oct 9 01:09:00.899789 kernel: io scheduler kyber registered
Oct 9 01:09:00.899797 kernel: io scheduler bfq registered
Oct 9 01:09:00.899805 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 9 01:09:00.899812 kernel: ACPI: button: Power Button [PWRB]
Oct 9 01:09:00.899820 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 9 01:09:00.899901 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 9 01:09:00.899913 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 01:09:00.899921 kernel: thunder_xcv, ver 1.0
Oct 9 01:09:00.899929 kernel: thunder_bgx, ver 1.0
Oct 9 01:09:00.899939 kernel: nicpf, ver 1.0
Oct 9 01:09:00.899947 kernel: nicvf, ver 1.0
Oct 9 01:09:00.900028 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 9 01:09:00.900108 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-09T01:09:00 UTC (1728436140)
Oct 9 01:09:00.900119 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 9 01:09:00.900127 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 9 01:09:00.900135 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 9 01:09:00.900143 kernel: watchdog: Hard watchdog permanently disabled
Oct 9 01:09:00.900153 kernel: NET: Registered PF_INET6 protocol family
Oct 9 01:09:00.900161 kernel: Segment Routing with IPv6
Oct 9 01:09:00.900168 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 01:09:00.900176 kernel: NET: Registered PF_PACKET protocol family
Oct 9 01:09:00.900184 kernel: Key type dns_resolver registered
Oct 9 01:09:00.900191 kernel: registered taskstats version 1
Oct 9 01:09:00.900199 kernel: Loading compiled-in X.509 certificates
Oct 9 01:09:00.900207 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 80611b0a9480eaf6d787b908c6349fdb5d07fa81'
Oct 9 01:09:00.900214 kernel: Key type .fscrypt registered
Oct 9 01:09:00.900224 kernel: Key type fscrypt-provisioning registered
Oct 9 01:09:00.900232 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 01:09:00.900240 kernel: ima: Allocated hash algorithm: sha1
Oct 9 01:09:00.900247 kernel: ima: No architecture policies found
Oct 9 01:09:00.900255 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 9 01:09:00.900263 kernel: clk: Disabling unused clocks
Oct 9 01:09:00.900270 kernel: Freeing unused kernel memory: 39552K
Oct 9 01:09:00.900278 kernel: Run /init as init process
Oct 9 01:09:00.900285 kernel: with arguments:
Oct 9 01:09:00.900294 kernel: /init
Oct 9 01:09:00.900302 kernel: with environment:
Oct 9 01:09:00.900309 kernel: HOME=/
Oct 9 01:09:00.900317 kernel: TERM=linux
Oct 9 01:09:00.900325 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 01:09:00.900335 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 01:09:00.900344 systemd[1]: Detected virtualization kvm.
Oct 9 01:09:00.900355 systemd[1]: Detected architecture arm64.
Oct 9 01:09:00.900363 systemd[1]: Running in initrd.
Oct 9 01:09:00.900371 systemd[1]: No hostname configured, using default hostname.
Oct 9 01:09:00.900379 systemd[1]: Hostname set to .
Oct 9 01:09:00.900388 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 01:09:00.900396 systemd[1]: Queued start job for default target initrd.target.
Oct 9 01:09:00.900404 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:09:00.900413 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:09:00.900423 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 01:09:00.900432 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 01:09:00.900441 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 01:09:00.900459 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 01:09:00.900483 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 01:09:00.900493 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 01:09:00.900501 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:09:00.900512 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:09:00.900521 systemd[1]: Reached target paths.target - Path Units.
Oct 9 01:09:00.900529 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 01:09:00.900537 systemd[1]: Reached target swap.target - Swaps.
Oct 9 01:09:00.900545 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 01:09:00.900554 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 01:09:00.900562 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 01:09:00.900571 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 01:09:00.900579 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 01:09:00.900589 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:09:00.900597 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:09:00.900606 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:09:00.900614 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 01:09:00.900623 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 01:09:00.900631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 01:09:00.900639 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 01:09:00.900648 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 01:09:00.900658 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 01:09:00.900666 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 01:09:00.900674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:09:00.900683 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 01:09:00.900691 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:09:00.900699 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 01:09:00.900710 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 01:09:00.900719 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:09:00.900746 systemd-journald[238]: Collecting audit messages is disabled.
Oct 9 01:09:00.900768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:09:00.900778 systemd-journald[238]: Journal started
Oct 9 01:09:00.900798 systemd-journald[238]: Runtime Journal (/run/log/journal/327a9a1964384056b5db20110364c571) is 5.9M, max 47.3M, 41.4M free.
Oct 9 01:09:00.902494 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:09:00.892394 systemd-modules-load[239]: Inserted module 'overlay'
Oct 9 01:09:00.903876 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 01:09:00.909463 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 01:09:00.909508 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 01:09:00.910318 systemd-modules-load[239]: Inserted module 'br_netfilter'
Oct 9 01:09:00.911168 kernel: Bridge firewalling registered
Oct 9 01:09:00.911435 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:09:00.914511 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 01:09:00.916618 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 01:09:00.918857 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:09:00.924195 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:09:00.927522 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:09:00.928650 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:09:00.941676 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 01:09:00.943896 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 01:09:00.951600 dracut-cmdline[278]: dracut-dracut-053
Oct 9 01:09:00.954141 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e
Oct 9 01:09:00.971416 systemd-resolved[281]: Positive Trust Anchors:
Oct 9 01:09:00.971503 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 01:09:00.971536 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 01:09:00.976243 systemd-resolved[281]: Defaulting to hostname 'linux'.
Oct 9 01:09:00.977518 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 01:09:00.978371 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:09:01.023463 kernel: SCSI subsystem initialized
Oct 9 01:09:01.026467 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 01:09:01.033482 kernel: iscsi: registered transport (tcp)
Oct 9 01:09:01.048491 kernel: iscsi: registered transport (qla4xxx)
Oct 9 01:09:01.048529 kernel: QLogic iSCSI HBA Driver
Oct 9 01:09:01.092357 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 01:09:01.105602 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 01:09:01.122964 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 01:09:01.123023 kernel: device-mapper: uevent: version 1.0.3
Oct 9 01:09:01.123036 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 01:09:01.167478 kernel: raid6: neonx8 gen() 15783 MB/s
Oct 9 01:09:01.184470 kernel: raid6: neonx4 gen() 15490 MB/s
Oct 9 01:09:01.201463 kernel: raid6: neonx2 gen() 13113 MB/s
Oct 9 01:09:01.218469 kernel: raid6: neonx1 gen() 10376 MB/s
Oct 9 01:09:01.235464 kernel: raid6: int64x8 gen() 6916 MB/s
Oct 9 01:09:01.252468 kernel: raid6: int64x4 gen() 7290 MB/s
Oct 9 01:09:01.269470 kernel: raid6: int64x2 gen() 6084 MB/s
Oct 9 01:09:01.286462 kernel: raid6: int64x1 gen() 5025 MB/s
Oct 9 01:09:01.286476 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s
Oct 9 01:09:01.303470 kernel: raid6: .... xor() 11865 MB/s, rmw enabled
Oct 9 01:09:01.303486 kernel: raid6: using neon recovery algorithm
Oct 9 01:09:01.308533 kernel: xor: measuring software checksum speed
Oct 9 01:09:01.308555 kernel: 8regs : 19487 MB/sec
Oct 9 01:09:01.309564 kernel: 32regs : 19655 MB/sec
Oct 9 01:09:01.309577 kernel: arm64_neon : 27123 MB/sec
Oct 9 01:09:01.309592 kernel: xor: using function: arm64_neon (27123 MB/sec)
Oct 9 01:09:01.363516 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 01:09:01.374531 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 01:09:01.396666 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:09:01.408415 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Oct 9 01:09:01.411567 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:09:01.423652 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 01:09:01.434689 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Oct 9 01:09:01.463317 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 01:09:01.477670 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 01:09:01.518753 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:09:01.527612 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 01:09:01.539484 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 01:09:01.540803 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 01:09:01.542238 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:09:01.544433 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 01:09:01.551594 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 01:09:01.563352 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 01:09:01.569632 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 9 01:09:01.569789 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 9 01:09:01.574212 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 01:09:01.574250 kernel: GPT:9289727 != 19775487
Oct 9 01:09:01.574262 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 01:09:01.574278 kernel: GPT:9289727 != 19775487
Oct 9 01:09:01.574289 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 01:09:01.574299 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:09:01.576310 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 01:09:01.576513 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:09:01.577588 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:09:01.578349 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:09:01.578482 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:09:01.579328 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:09:01.589719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:09:01.603527 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (511)
Oct 9 01:09:01.603572 kernel: BTRFS: device fsid c25b3a2f-539f-42a7-8842-97b35e474647 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (514)
Oct 9 01:09:01.605788 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 01:09:01.606953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:09:01.612072 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 01:09:01.618990 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 01:09:01.622541 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 01:09:01.623415 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 01:09:01.636613 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 01:09:01.638154 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:09:01.644099 disk-uuid[553]: Primary Header is updated.
Oct 9 01:09:01.644099 disk-uuid[553]: Secondary Entries is updated.
Oct 9 01:09:01.644099 disk-uuid[553]: Secondary Header is updated.
Oct 9 01:09:01.650435 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:09:01.660699 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:09:02.662477 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:09:02.663096 disk-uuid[554]: The operation has completed successfully.
Oct 9 01:09:02.684670 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 01:09:02.684785 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 01:09:02.703571 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 01:09:02.706351 sh[576]: Success
Oct 9 01:09:02.715507 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 9 01:09:02.740616 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 01:09:02.751589 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 01:09:02.754489 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 01:09:02.762073 kernel: BTRFS info (device dm-0): first mount of filesystem c25b3a2f-539f-42a7-8842-97b35e474647
Oct 9 01:09:02.762106 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 9 01:09:02.762117 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 01:09:02.762891 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 01:09:02.763904 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 01:09:02.767110 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 01:09:02.768144 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 01:09:02.776583 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 01:09:02.777791 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 01:09:02.786683 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 01:09:02.786722 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 9 01:09:02.787483 kernel: BTRFS info (device vda6): using free space tree
Oct 9 01:09:02.789473 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 01:09:02.796116 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 01:09:02.797487 kernel: BTRFS info (device vda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 01:09:02.802558 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 01:09:02.808601 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 01:09:02.873730 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 01:09:02.891580 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 01:09:02.909246 ignition[669]: Ignition 2.19.0
Oct 9 01:09:02.909255 ignition[669]: Stage: fetch-offline
Oct 9 01:09:02.909290 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:09:02.909298 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:09:02.909526 ignition[669]: parsed url from cmdline: ""
Oct 9 01:09:02.909530 ignition[669]: no config URL provided
Oct 9 01:09:02.909535 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 01:09:02.909542 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Oct 9 01:09:02.909567 ignition[669]: op(1): [started] loading QEMU firmware config module
Oct 9 01:09:02.909572 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 9 01:09:02.918732 systemd-networkd[769]: lo: Link UP
Oct 9 01:09:02.918746 systemd-networkd[769]: lo: Gained carrier
Oct 9 01:09:02.919549 systemd-networkd[769]: Enumeration completed
Oct 9 01:09:02.920057 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:09:02.920060 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 01:09:02.922252 ignition[669]: op(1): [finished] loading QEMU firmware config module
Oct 9 01:09:02.920854 systemd-networkd[769]: eth0: Link UP
Oct 9 01:09:02.920857 systemd-networkd[769]: eth0: Gained carrier
Oct 9 01:09:02.920862 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:09:02.922132 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 01:09:02.923628 systemd[1]: Reached target network.target - Network.
Oct 9 01:09:02.937485 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.157/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 01:09:02.965814 ignition[669]: parsing config with SHA512: 465f515f2dc5f072130f7314ae11504ff094dadd30fc8a7cd5076e1fff069325dfe896a2f913c1f39339d98b984dc38f6ea86ddba494106293bcb94a9ab1a6a5
Oct 9 01:09:02.971042 unknown[669]: fetched base config from "system"
Oct 9 01:09:02.971055 unknown[669]: fetched user config from "qemu"
Oct 9 01:09:02.973183 ignition[669]: fetch-offline: fetch-offline passed
Oct 9 01:09:02.973271 ignition[669]: Ignition finished successfully
Oct 9 01:09:02.975665 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 01:09:02.976812 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 9 01:09:02.983637 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 01:09:02.993953 ignition[777]: Ignition 2.19.0
Oct 9 01:09:02.993964 ignition[777]: Stage: kargs
Oct 9 01:09:02.994157 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:09:02.994169 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:09:02.995131 ignition[777]: kargs: kargs passed
Oct 9 01:09:02.995178 ignition[777]: Ignition finished successfully
Oct 9 01:09:02.998038 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 01:09:02.999778 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 01:09:03.013323 ignition[786]: Ignition 2.19.0
Oct 9 01:09:03.013333 ignition[786]: Stage: disks
Oct 9 01:09:03.013597 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:09:03.013616 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:09:03.014519 ignition[786]: disks: disks passed
Oct 9 01:09:03.014568 ignition[786]: Ignition finished successfully
Oct 9 01:09:03.017521 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 01:09:03.019190 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 01:09:03.020063 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 01:09:03.021542 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 01:09:03.023046 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 01:09:03.024299 systemd[1]: Reached target basic.target - Basic System.
Oct 9 01:09:03.033594 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 01:09:03.044219 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 01:09:03.048137 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 01:09:03.056562 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 01:09:03.098473 kernel: EXT4-fs (vda9): mounted filesystem 3a4adf89-ce2b-46a9-8e1a-433a27a27d16 r/w with ordered data mode. Quota mode: none.
Oct 9 01:09:03.099137 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 01:09:03.100227 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 01:09:03.109559 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 01:09:03.111105 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 01:09:03.112350 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 9 01:09:03.112391 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 01:09:03.117565 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804)
Oct 9 01:09:03.112416 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 01:09:03.121271 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 01:09:03.121291 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 9 01:09:03.121302 kernel: BTRFS info (device vda6): using free space tree
Oct 9 01:09:03.121312 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 01:09:03.117945 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 01:09:03.122750 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 01:09:03.124968 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 01:09:03.168446 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 01:09:03.171613 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
Oct 9 01:09:03.175530 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 01:09:03.179442 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 01:09:03.254850 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 01:09:03.262558 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 01:09:03.263949 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 01:09:03.269496 kernel: BTRFS info (device vda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 01:09:03.286682 ignition[916]: INFO : Ignition 2.19.0
Oct 9 01:09:03.288018 ignition[916]: INFO : Stage: mount
Oct 9 01:09:03.288018 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:09:03.288018 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:09:03.290124 ignition[916]: INFO : mount: mount passed
Oct 9 01:09:03.290124 ignition[916]: INFO : Ignition finished successfully
Oct 9 01:09:03.289729 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 01:09:03.291017 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 01:09:03.299621 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 01:09:03.761722 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 01:09:03.771709 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 01:09:03.777883 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931)
Oct 9 01:09:03.777916 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 01:09:03.777928 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 9 01:09:03.778542 kernel: BTRFS info (device vda6): using free space tree
Oct 9 01:09:03.781478 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 01:09:03.782818 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 01:09:03.811779 ignition[949]: INFO : Ignition 2.19.0
Oct 9 01:09:03.811779 ignition[949]: INFO : Stage: files
Oct 9 01:09:03.813007 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:09:03.813007 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:09:03.813007 ignition[949]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 01:09:03.815656 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 01:09:03.815656 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 01:09:03.815656 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 01:09:03.818581 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 01:09:03.818581 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 01:09:03.818581 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 9 01:09:03.818581 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 9 01:09:03.816008 unknown[949]: wrote ssh authorized keys file for user: core
Oct 9 01:09:03.859102 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 9 01:09:03.966253 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 9 01:09:03.967688 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 9 01:09:03.967688 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Oct 9 01:09:04.308740 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 9 01:09:04.463793 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Oct 9 01:09:04.465911 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Oct 9 01:09:04.639522 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 9 01:09:04.716569 systemd-networkd[769]: eth0: Gained IPv6LL
Oct 9 01:09:04.932929 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Oct 9 01:09:04.932929 ignition[949]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Oct 9 01:09:04.936538 ignition[949]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:09:04.936538 ignition[949]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:09:04.936538 ignition[949]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Oct 9 01:09:04.936538 ignition[949]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Oct 9 01:09:04.936538 ignition[949]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 01:09:04.936538 ignition[949]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 01:09:04.936538 ignition[949]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Oct 9 01:09:04.936538 ignition[949]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Oct 9 01:09:04.955243 ignition[949]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 01:09:04.958489 ignition[949]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 01:09:04.959946 ignition[949]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 9 01:09:04.959946 ignition[949]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 01:09:04.959946 ignition[949]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 01:09:04.959946 ignition[949]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:09:04.959946 ignition[949]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:09:04.959946 ignition[949]: INFO : files: files passed
Oct 9 01:09:04.959946 ignition[949]: INFO : Ignition finished successfully
Oct 9 01:09:04.962483 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 01:09:04.969644 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 01:09:04.972431 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 01:09:04.973602 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 01:09:04.974496 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 01:09:04.979433 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 9 01:09:04.982283 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:09:04.982283 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:09:04.985590 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:09:04.986209 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:09:04.987980 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 01:09:04.994611 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 01:09:05.011891 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 01:09:05.012879 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 01:09:05.014634 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 01:09:05.015992 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 01:09:05.017594 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 01:09:05.018422 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 01:09:05.033610 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:09:05.035909 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 01:09:05.047108 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:09:05.048312 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:09:05.050122 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 01:09:05.051633 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 01:09:05.051748 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:09:05.053810 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 01:09:05.055443 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 01:09:05.056918 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 01:09:05.058410 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 01:09:05.060130 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 01:09:05.061792 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 01:09:05.063350 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 01:09:05.065050 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 01:09:05.066791 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 01:09:05.068259 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 01:09:05.069546 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 01:09:05.069671 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 01:09:05.071610 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:09:05.073269 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:09:05.074960 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 01:09:05.076335 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:09:05.078480 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 01:09:05.078597 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 01:09:05.080865 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 01:09:05.080983 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 01:09:05.082737 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 01:09:05.084128 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 01:09:05.085344 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:09:05.087574 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 01:09:05.088564 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 01:09:05.090020 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 01:09:05.090108 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 01:09:05.091398 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 01:09:05.091491 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 01:09:05.092837 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 01:09:05.092946 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:09:05.094482 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 01:09:05.094583 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 01:09:05.106625 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 01:09:05.108171 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 01:09:05.109024 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 01:09:05.109142 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:09:05.110804 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 01:09:05.110966 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 01:09:05.116041 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 01:09:05.116134 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 01:09:05.118891 ignition[1004]: INFO : Ignition 2.19.0
Oct 9 01:09:05.118891 ignition[1004]: INFO : Stage: umount
Oct 9 01:09:05.118891 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:09:05.118891 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:09:05.124408 ignition[1004]: INFO : umount: umount passed
Oct 9 01:09:05.124408 ignition[1004]: INFO : Ignition finished successfully
Oct 9 01:09:05.120753 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 01:09:05.120858 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 01:09:05.122311 systemd[1]: Stopped target network.target - Network.
Oct 9 01:09:05.123714 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 01:09:05.123771 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 01:09:05.125518 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 01:09:05.125563 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 01:09:05.127083 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 01:09:05.127128 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 01:09:05.128665 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 01:09:05.128710 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 01:09:05.131545 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 01:09:05.133189 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 01:09:05.135640 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 01:09:05.142530 systemd-networkd[769]: eth0: DHCPv6 lease lost
Oct 9 01:09:05.143684 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 01:09:05.143786 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 01:09:05.145193 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 01:09:05.145239 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:09:05.160598 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 01:09:05.161320 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 01:09:05.161383 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 01:09:05.163235 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:09:05.167250 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 01:09:05.167340 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 01:09:05.171103 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 01:09:05.171213 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:09:05.172967 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 01:09:05.173016 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:09:05.174604 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 01:09:05.174644 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:09:05.177791 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 01:09:05.177895 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 01:09:05.179981 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 01:09:05.180120 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:09:05.182053 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 01:09:05.182121 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:09:05.183086 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 01:09:05.183116 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:09:05.184629 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 01:09:05.184673 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 01:09:05.187362 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 01:09:05.187409 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 01:09:05.190001 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 01:09:05.190047 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:09:05.200604 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 01:09:05.201419 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 01:09:05.201482 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:09:05.203340 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 9 01:09:05.203378 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:09:05.205094 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 01:09:05.205129 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:09:05.207054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:09:05.207093 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:09:05.209091 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 01:09:05.209178 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 01:09:05.210846 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 01:09:05.210914 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 01:09:05.213038 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 01:09:05.214104 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 01:09:05.214159 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 01:09:05.216389 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 01:09:05.225710 systemd[1]: Switching root.
Oct 9 01:09:05.254351 systemd-journald[238]: Journal stopped
Oct 9 01:09:05.950934 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Oct 9 01:09:05.950990 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 01:09:05.951006 kernel: SELinux: policy capability open_perms=1
Oct 9 01:09:05.951016 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 01:09:05.951027 kernel: SELinux: policy capability always_check_network=0
Oct 9 01:09:05.951037 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 01:09:05.951047 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 01:09:05.951056 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 01:09:05.951066 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 01:09:05.951080 kernel: audit: type=1403 audit(1728436145.411:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 01:09:05.951090 systemd[1]: Successfully loaded SELinux policy in 31.950ms.
Oct 9 01:09:05.951109 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.252ms.
Oct 9 01:09:05.951121 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 01:09:05.951132 systemd[1]: Detected virtualization kvm.
Oct 9 01:09:05.951143 systemd[1]: Detected architecture arm64.
Oct 9 01:09:05.951153 systemd[1]: Detected first boot.
Oct 9 01:09:05.951167 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 01:09:05.951179 zram_generator::config[1050]: No configuration found.
Oct 9 01:09:05.951190 systemd[1]: Populated /etc with preset unit settings.
Oct 9 01:09:05.951202 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 01:09:05.951213 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 01:09:05.951224 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 01:09:05.951235 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 01:09:05.951252 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 01:09:05.951262 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 01:09:05.951273 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 01:09:05.951284 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 01:09:05.951294 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 01:09:05.951309 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 01:09:05.951319 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 01:09:05.951332 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:09:05.951343 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:09:05.951354 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 01:09:05.951364 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 01:09:05.951375 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 01:09:05.951386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 01:09:05.951397 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 9 01:09:05.951410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:09:05.951421 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 01:09:05.951432 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 01:09:05.951443 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 01:09:05.951477 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 01:09:05.951489 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:09:05.951499 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 01:09:05.951516 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 01:09:05.951527 systemd[1]: Reached target swap.target - Swaps.
Oct 9 01:09:05.951537 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 01:09:05.951548 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 01:09:05.951558 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:09:05.951570 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:09:05.951581 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:09:05.951591 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 01:09:05.951602 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 01:09:05.951612 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 01:09:05.951624 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 01:09:05.951635 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 01:09:05.951645 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 01:09:05.951656 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 01:09:05.951667 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 01:09:05.951677 systemd[1]: Reached target machines.target - Containers.
Oct 9 01:09:05.951688 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 01:09:05.951698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:09:05.951710 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 01:09:05.951721 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 01:09:05.951731 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:09:05.951742 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 01:09:05.951753 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:09:05.951764 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 01:09:05.951774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:09:05.951785 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 01:09:05.951799 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 01:09:05.951810 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 01:09:05.951826 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 01:09:05.951839 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 01:09:05.951849 kernel: fuse: init (API version 7.39)
Oct 9 01:09:05.951859 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 01:09:05.951874 kernel: loop: module loaded
Oct 9 01:09:05.951885 kernel: ACPI: bus type drm_connector registered
Oct 9 01:09:05.951894 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 01:09:05.951905 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 01:09:05.951918 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 01:09:05.951929 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 01:09:05.951939 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 01:09:05.951949 systemd[1]: Stopped verity-setup.service.
Oct 9 01:09:05.951959 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 01:09:05.951970 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 01:09:05.951980 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 01:09:05.951992 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 01:09:05.952021 systemd-journald[1114]: Collecting audit messages is disabled.
Oct 9 01:09:05.952042 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 01:09:05.952053 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 01:09:05.952065 systemd-journald[1114]: Journal started
Oct 9 01:09:05.952088 systemd-journald[1114]: Runtime Journal (/run/log/journal/327a9a1964384056b5db20110364c571) is 5.9M, max 47.3M, 41.4M free.
Oct 9 01:09:05.764278 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 01:09:05.779466 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 01:09:05.779795 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 01:09:05.954095 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 01:09:05.956560 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:09:05.958087 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 01:09:05.958216 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 01:09:05.959365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:09:05.959532 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:09:05.960640 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 01:09:05.961712 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 01:09:05.961850 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 01:09:05.962884 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:09:05.963013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:09:05.964424 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 01:09:05.964569 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 01:09:05.965619 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:09:05.965750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:09:05.966861 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:09:05.967978 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 01:09:05.969182 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 01:09:05.981162 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 01:09:05.986573 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 01:09:05.988392 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 01:09:05.989282 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 01:09:05.989318 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 01:09:05.991079 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 01:09:05.993011 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 01:09:05.994852 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 01:09:05.995742 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:09:05.997129 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 01:09:05.998860 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 01:09:05.999832 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 01:09:06.001626 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 01:09:06.002406 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 01:09:06.004677 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 01:09:06.006725 systemd-journald[1114]: Time spent on flushing to /var/log/journal/327a9a1964384056b5db20110364c571 is 16.163ms for 860 entries.
Oct 9 01:09:06.006725 systemd-journald[1114]: System Journal (/var/log/journal/327a9a1964384056b5db20110364c571) is 8.0M, max 195.6M, 187.6M free.
Oct 9 01:09:06.030302 systemd-journald[1114]: Received client request to flush runtime journal.
Oct 9 01:09:06.030348 kernel: loop0: detected capacity change from 0 to 194096
Oct 9 01:09:06.008272 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 01:09:06.011723 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 01:09:06.016915 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:09:06.018258 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 01:09:06.019532 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 01:09:06.020887 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 01:09:06.030676 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 01:09:06.032517 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 01:09:06.034978 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 01:09:06.037264 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 01:09:06.041439 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 01:09:06.044897 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:09:06.048369 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 01:09:06.048925 udevadm[1168]: systemd-udev-settle.service is deprecated.
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 9 01:09:06.056730 systemd-tmpfiles[1162]: ACLs are not supported, ignoring.
Oct 9 01:09:06.056752 systemd-tmpfiles[1162]: ACLs are not supported, ignoring.
Oct 9 01:09:06.060866 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:09:06.067725 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 01:09:06.072362 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 01:09:06.073171 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 01:09:06.080483 kernel: loop1: detected capacity change from 0 to 113456
Oct 9 01:09:06.098421 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 01:09:06.105624 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 01:09:06.111537 kernel: loop2: detected capacity change from 0 to 116808
Oct 9 01:09:06.122199 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Oct 9 01:09:06.122218 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Oct 9 01:09:06.127497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:09:06.145492 kernel: loop3: detected capacity change from 0 to 194096
Oct 9 01:09:06.155467 kernel: loop4: detected capacity change from 0 to 113456
Oct 9 01:09:06.160593 kernel: loop5: detected capacity change from 0 to 116808
Oct 9 01:09:06.172000 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 9 01:09:06.172424 (sd-merge)[1190]: Merged extensions into '/usr'.
Oct 9 01:09:06.179974 systemd[1]: Reloading requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 01:09:06.180151 systemd[1]: Reloading...
Oct 9 01:09:06.235477 zram_generator::config[1216]: No configuration found.
Oct 9 01:09:06.276139 ldconfig[1156]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 01:09:06.328973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:09:06.364042 systemd[1]: Reloading finished in 181 ms.
Oct 9 01:09:06.395866 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 01:09:06.398487 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 01:09:06.411601 systemd[1]: Starting ensure-sysext.service...
Oct 9 01:09:06.413190 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 01:09:06.424857 systemd[1]: Reloading requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)...
Oct 9 01:09:06.424871 systemd[1]: Reloading...
Oct 9 01:09:06.450750 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 01:09:06.451070 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 01:09:06.451763 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 01:09:06.452010 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Oct 9 01:09:06.452059 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Oct 9 01:09:06.454608 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 01:09:06.454620 systemd-tmpfiles[1252]: Skipping /boot
Oct 9 01:09:06.464785 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 01:09:06.464801 systemd-tmpfiles[1252]: Skipping /boot
Oct 9 01:09:06.481782 zram_generator::config[1277]: No configuration found.
Oct 9 01:09:06.564908 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:09:06.600208 systemd[1]: Reloading finished in 175 ms.
Oct 9 01:09:06.616540 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 01:09:06.627007 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:09:06.634935 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 01:09:06.637196 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 01:09:06.639427 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 01:09:06.642429 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 01:09:06.646040 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:09:06.651533 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 01:09:06.654443 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:09:06.655603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:09:06.660724 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:09:06.665543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:09:06.667207 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:09:06.669385 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 01:09:06.672017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:09:06.673199 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:09:06.676621 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:09:06.676748 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:09:06.684820 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:09:06.684972 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:09:06.685693 systemd-udevd[1320]: Using default interface naming scheme 'v255'.
Oct 9 01:09:06.687217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:09:06.697753 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:09:06.702445 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:09:06.703385 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:09:06.703937 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:09:06.708569 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 01:09:06.710432 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 01:09:06.711788 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 01:09:06.713280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:09:06.713429 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:09:06.714891 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:09:06.715028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:09:06.725352 systemd[1]: Finished ensure-sysext.service.
Oct 9 01:09:06.730654 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:09:06.738676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:09:06.742795 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 01:09:06.748315 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:09:06.755663 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:09:06.757484 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1357)
Oct 9 01:09:06.758418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:09:06.761343 augenrules[1382]: No rules
Oct 9 01:09:06.761740 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 01:09:06.765677 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 01:09:06.769783 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 01:09:06.770559 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 01:09:06.770908 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 01:09:06.772054 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 01:09:06.772259 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 01:09:06.774789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:09:06.774932 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:09:06.776169 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 01:09:06.776285 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 01:09:06.777395 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:09:06.777543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:09:06.778679 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:09:06.778806 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:09:06.787846 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 9 01:09:06.787948 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 01:09:06.787997 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 01:09:06.802476 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1357)
Oct 9 01:09:06.802119 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 01:09:06.832252 systemd-resolved[1318]: Positive Trust Anchors:
Oct 9 01:09:06.832331 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 01:09:06.832363 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 01:09:06.837482 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1348)
Oct 9 01:09:06.841225 systemd-resolved[1318]: Defaulting to hostname 'linux'.
Oct 9 01:09:06.842705 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 01:09:06.844114 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:09:06.849533 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 01:09:06.852374 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 01:09:06.865162 systemd-networkd[1384]: lo: Link UP
Oct 9 01:09:06.865171 systemd-networkd[1384]: lo: Gained carrier
Oct 9 01:09:06.865986 systemd-networkd[1384]: Enumeration completed
Oct 9 01:09:06.866102 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 01:09:06.866862 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:09:06.866868 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 01:09:06.867072 systemd[1]: Reached target network.target - Network.
Oct 9 01:09:06.867750 systemd-networkd[1384]: eth0: Link UP
Oct 9 01:09:06.867801 systemd-networkd[1384]: eth0: Gained carrier
Oct 9 01:09:06.867863 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:09:06.874698 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 01:09:06.880929 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.157/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 01:09:06.881614 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection.
Oct 9 01:09:06.882110 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 9 01:09:06.882160 systemd-timesyncd[1389]: Initial clock synchronization to Wed 2024-10-09 01:09:07.055724 UTC.
Oct 9 01:09:06.888818 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 01:09:06.899719 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 01:09:06.902167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:09:06.913232 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 01:09:06.914530 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 01:09:06.927695 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 01:09:06.947559 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 01:09:06.954687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:09:06.979899 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 01:09:06.981026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:09:06.981881 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 01:09:06.982725 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 01:09:06.983626 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 01:09:06.984706 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 01:09:06.985578 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 01:09:06.986471 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 01:09:06.987326 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 01:09:06.987359 systemd[1]: Reached target paths.target - Path Units.
Oct 9 01:09:06.988059 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 01:09:06.989558 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 01:09:06.991748 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 01:09:06.999317 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 01:09:07.001252 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 01:09:07.002562 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 01:09:07.003453 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 01:09:07.004237 systemd[1]: Reached target basic.target - Basic System.
Oct 9 01:09:07.004990 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 01:09:07.005018 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 01:09:07.005894 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 01:09:07.007635 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 01:09:07.010606 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 01:09:07.010633 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 01:09:07.014934 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 01:09:07.015708 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 01:09:07.018024 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 01:09:07.020482 jq[1426]: false
Oct 9 01:09:07.023166 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 01:09:07.026232 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 01:09:07.027119 extend-filesystems[1427]: Found loop3
Oct 9 01:09:07.027913 extend-filesystems[1427]: Found loop4
Oct 9 01:09:07.027913 extend-filesystems[1427]: Found loop5
Oct 9 01:09:07.027913 extend-filesystems[1427]: Found vda
Oct 9 01:09:07.027913 extend-filesystems[1427]: Found vda1
Oct 9 01:09:07.027913 extend-filesystems[1427]: Found vda2
Oct 9 01:09:07.027913 extend-filesystems[1427]: Found vda3
Oct 9 01:09:07.027913 extend-filesystems[1427]: Found usr
Oct 9 01:09:07.027913 extend-filesystems[1427]: Found vda4
Oct 9 01:09:07.027913 extend-filesystems[1427]: Found vda6
Oct 9 01:09:07.027913 extend-filesystems[1427]: Found vda7
Oct 9 01:09:07.027913 extend-filesystems[1427]: Found vda9
Oct 9 01:09:07.027913 extend-filesystems[1427]: Checking size of /dev/vda9
Oct 9 01:09:07.059820 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1348)
Oct 9 01:09:07.059894 extend-filesystems[1427]: Resized partition /dev/vda9
Oct 9 01:09:07.028662 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 01:09:07.061091 dbus-daemon[1425]: [system] SELinux support is enabled
Oct 9 01:09:07.031674 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 01:09:07.037672 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 01:09:07.069628 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024)
Oct 9 01:09:07.038136 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 01:09:07.038971 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 01:09:07.042564 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 01:09:07.074425 jq[1444]: true
Oct 9 01:09:07.044535 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 01:09:07.048794 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 01:09:07.051547 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 01:09:07.051925 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 01:09:07.052089 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 01:09:07.054074 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 01:09:07.054272 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 01:09:07.063783 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 01:09:07.076501 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 9 01:09:07.097648 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 9 01:09:07.120605 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 01:09:07.120605 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 9 01:09:07.120605 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 9 01:09:07.098880 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 01:09:07.132080 update_engine[1441]: I20241009 01:09:07.113843 1441 main.cc:92] Flatcar Update Engine starting
Oct 9 01:09:07.132080 update_engine[1441]: I20241009 01:09:07.122695 1441 update_check_scheduler.cc:74] Next update check in 8m55s
Oct 9 01:09:07.132256 tar[1449]: linux-arm64/helm
Oct 9 01:09:07.133799 extend-filesystems[1427]: Resized filesystem in /dev/vda9
Oct 9 01:09:07.110662 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 01:09:07.135757 jq[1451]: true
Oct 9 01:09:07.110697 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 01:09:07.112282 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 01:09:07.112298 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 01:09:07.116002 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 01:09:07.118803 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 9 01:09:07.120491 systemd-logind[1437]: New seat seat0.
Oct 9 01:09:07.121781 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 01:09:07.123080 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 01:09:07.125124 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 01:09:07.125307 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 01:09:07.185107 bash[1480]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 01:09:07.189222 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 01:09:07.191023 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 9 01:09:07.201144 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 01:09:07.320480 containerd[1453]: time="2024-10-09T01:09:07.319369015Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22
Oct 9 01:09:07.346704 containerd[1453]: time="2024-10-09T01:09:07.346605672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:09:07.348369 containerd[1453]: time="2024-10-09T01:09:07.348303184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:09:07.348369 containerd[1453]: time="2024-10-09T01:09:07.348336941Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 01:09:07.348369 containerd[1453]: time="2024-10-09T01:09:07.348354596Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 01:09:07.348536 containerd[1453]: time="2024-10-09T01:09:07.348515700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 01:09:07.348565 containerd[1453]: time="2024-10-09T01:09:07.348539322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 01:09:07.348612 containerd[1453]: time="2024-10-09T01:09:07.348595148Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:09:07.348646 containerd[1453]: time="2024-10-09T01:09:07.348611414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:09:07.348787 containerd[1453]: time="2024-10-09T01:09:07.348767572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:09:07.348811 containerd[1453]: time="2024-10-09T01:09:07.348786045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 01:09:07.348811 containerd[1453]: time="2024-10-09T01:09:07.348799449Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:09:07.348811 containerd[1453]: time="2024-10-09T01:09:07.348809053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 01:09:07.348893 containerd[1453]: time="2024-10-09T01:09:07.348877181Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:09:07.349093 containerd[1453]: time="2024-10-09T01:09:07.349073023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:09:07.349189 containerd[1453]: time="2024-10-09T01:09:07.349170535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:09:07.349189 containerd[1453]: time="2024-10-09T01:09:07.349186923Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 01:09:07.349290 containerd[1453]: time="2024-10-09T01:09:07.349271643Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 01:09:07.349331 containerd[1453]: time="2024-10-09T01:09:07.349317457Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 01:09:07.353631 containerd[1453]: time="2024-10-09T01:09:07.353577421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 01:09:07.353631 containerd[1453]: time="2024-10-09T01:09:07.353623439Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 01:09:07.353720 containerd[1453]: time="2024-10-09T01:09:07.353647469Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 01:09:07.353720 containerd[1453]: time="2024-10-09T01:09:07.353663572Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 01:09:07.353720 containerd[1453]: time="2024-10-09T01:09:07.353680409Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 01:09:07.353848 containerd[1453]: time="2024-10-09T01:09:07.353802320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354160205Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354306146Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354322902Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354352287Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354369165Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354381916Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354394422Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354408481Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354421641Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354434146Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354445712Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354471173Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354493528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.354831 containerd[1453]: time="2024-10-09T01:09:07.354507219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354519684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354534560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354546167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354559612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354571383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354585033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354597130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354611434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354622959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354634443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354646008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354660721Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354688185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354703919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.355100 containerd[1453]: time="2024-10-09T01:09:07.354715362Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 01:09:07.355599 containerd[1453]: time="2024-10-09T01:09:07.355575071Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 01:09:07.355742 containerd[1453]: time="2024-10-09T01:09:07.355640215Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 01:09:07.356502 containerd[1453]: time="2024-10-09T01:09:07.355654560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 9 01:09:07.356502 containerd[1453]: time="2024-10-09T01:09:07.355907658Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 9 01:09:07.356502 containerd[1453]: time="2024-10-09T01:09:07.355923188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.356502 containerd[1453]: time="2024-10-09T01:09:07.355936879Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 01:09:07.356502 containerd[1453]: time="2024-10-09T01:09:07.355946688Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 01:09:07.356502 containerd[1453]: time="2024-10-09T01:09:07.355956537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 9 01:09:07.356658 containerd[1453]: time="2024-10-09T01:09:07.356299015Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 9 01:09:07.356658 containerd[1453]: time="2024-10-09T01:09:07.356352920Z" level=info msg="Connect containerd service"
Oct 9 01:09:07.356658 containerd[1453]: time="2024-10-09T01:09:07.356390478Z" level=info msg="using legacy CRI server"
Oct 9 01:09:07.356658 containerd[1453]: time="2024-10-09T01:09:07.356400368Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 9 01:09:07.356972 containerd[1453]: time="2024-10-09T01:09:07.356948373Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 9 01:09:07.359256 containerd[1453]: time="2024-10-09T01:09:07.359226463Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 01:09:07.361990 containerd[1453]: time="2024-10-09T01:09:07.361744736Z" level=info msg="Start subscribing containerd event"
Oct 9 01:09:07.361990 containerd[1453]: time="2024-10-09T01:09:07.361891208Z" level=info msg="Start recovering state"
Oct 9 01:09:07.361990 containerd[1453]: time="2024-10-09T01:09:07.361955576Z" level=info msg="Start event monitor"
Oct 9 01:09:07.361990 containerd[1453]: time="2024-10-09T01:09:07.361971392Z" level=info msg="Start snapshots syncer"
Oct 9 01:09:07.361990 containerd[1453]: time="2024-10-09T01:09:07.361980056Z" level=info msg="Start cni network conf syncer for default"
Oct 9 01:09:07.361990 containerd[1453]: time="2024-10-09T01:09:07.361987167Z" level=info msg="Start streaming server"
Oct 9 01:09:07.362574 containerd[1453]: time="2024-10-09T01:09:07.362521359Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 9 01:09:07.362574 containerd[1453]: time="2024-10-09T01:09:07.362577430Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 9 01:09:07.362759 containerd[1453]: time="2024-10-09T01:09:07.362621814Z" level=info msg="containerd successfully booted in 0.044762s"
Oct 9 01:09:07.362708 systemd[1]: Started containerd.service - containerd container runtime.
Oct 9 01:09:07.416439 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 01:09:07.435713 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 01:09:07.443103 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 01:09:07.447468 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 01:09:07.447774 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 01:09:07.458933 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 01:09:07.468229 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 9 01:09:07.471120 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 9 01:09:07.471912 tar[1449]: linux-arm64/LICENSE
Oct 9 01:09:07.471912 tar[1449]: linux-arm64/README.md
Oct 9 01:09:07.473173 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Oct 9 01:09:07.474313 systemd[1]: Reached target getty.target - Login Prompts.
Oct 9 01:09:07.482365 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 9 01:09:07.981115 systemd-networkd[1384]: eth0: Gained IPv6LL
Oct 9 01:09:07.983466 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 01:09:07.984917 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 01:09:07.995705 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 9 01:09:07.997729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:09:07.999460 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 9 01:09:08.014057 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 9 01:09:08.014283 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 9 01:09:08.016037 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 01:09:08.017576 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 01:09:08.494306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:09:08.495711 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 9 01:09:08.498280 (kubelet)[1538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:09:08.500546 systemd[1]: Startup finished in 522ms (kernel) + 4.712s (initrd) + 3.124s (userspace) = 8.359s.
Oct 9 01:09:08.962515 kubelet[1538]: E1009 01:09:08.962391 1538 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:09:08.964844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:09:08.964989 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:09:13.686657 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 9 01:09:13.702770 systemd[1]: Started sshd@0-10.0.0.157:22-10.0.0.1:54082.service - OpenSSH per-connection server daemon (10.0.0.1:54082).
Oct 9 01:09:13.758445 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 54082 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 01:09:13.760166 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:13.768362 systemd-logind[1437]: New session 1 of user core.
Oct 9 01:09:13.769419 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 9 01:09:13.775723 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 9 01:09:13.786538 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 9 01:09:13.790724 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 9 01:09:13.795379 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 9 01:09:13.863455 systemd[1557]: Queued start job for default target default.target.
Oct 9 01:09:13.872411 systemd[1557]: Created slice app.slice - User Application Slice.
Oct 9 01:09:13.872453 systemd[1557]: Reached target paths.target - Paths.
Oct 9 01:09:13.872484 systemd[1557]: Reached target timers.target - Timers.
Oct 9 01:09:13.873650 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 9 01:09:13.882793 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 9 01:09:13.882852 systemd[1557]: Reached target sockets.target - Sockets.
Oct 9 01:09:13.882864 systemd[1557]: Reached target basic.target - Basic System.
Oct 9 01:09:13.882898 systemd[1557]: Reached target default.target - Main User Target.
Oct 9 01:09:13.882923 systemd[1557]: Startup finished in 82ms.
Oct 9 01:09:13.883139 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 9 01:09:13.884378 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 9 01:09:13.943924 systemd[1]: Started sshd@1-10.0.0.157:22-10.0.0.1:54090.service - OpenSSH per-connection server daemon (10.0.0.1:54090).
Oct 9 01:09:13.982915 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 54090 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 01:09:13.984154 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:13.988023 systemd-logind[1437]: New session 2 of user core.
Oct 9 01:09:14.000715 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 9 01:09:14.053692 sshd[1568]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:14.062733 systemd[1]: sshd@1-10.0.0.157:22-10.0.0.1:54090.service: Deactivated successfully.
Oct 9 01:09:14.064012 systemd[1]: session-2.scope: Deactivated successfully.
Oct 9 01:09:14.065184 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit.
Oct 9 01:09:14.066272 systemd[1]: Started sshd@2-10.0.0.157:22-10.0.0.1:54106.service - OpenSSH per-connection server daemon (10.0.0.1:54106).
Oct 9 01:09:14.068088 systemd-logind[1437]: Removed session 2.
Oct 9 01:09:14.106640 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 54106 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 01:09:14.107882 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:14.111874 systemd-logind[1437]: New session 3 of user core.
Oct 9 01:09:14.126633 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 9 01:09:14.175440 sshd[1575]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:14.190123 systemd[1]: sshd@2-10.0.0.157:22-10.0.0.1:54106.service: Deactivated successfully.
Oct 9 01:09:14.191566 systemd[1]: session-3.scope: Deactivated successfully.
Oct 9 01:09:14.193686 systemd-logind[1437]: Session 3 logged out. Waiting for processes to exit.
Oct 9 01:09:14.208810 systemd[1]: Started sshd@3-10.0.0.157:22-10.0.0.1:54110.service - OpenSSH per-connection server daemon (10.0.0.1:54110).
Oct 9 01:09:14.209836 systemd-logind[1437]: Removed session 3.
Oct 9 01:09:14.243912 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 54110 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 01:09:14.245144 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:14.249169 systemd-logind[1437]: New session 4 of user core.
Oct 9 01:09:14.259612 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 9 01:09:14.313364 sshd[1582]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:14.338988 systemd[1]: sshd@3-10.0.0.157:22-10.0.0.1:54110.service: Deactivated successfully.
Oct 9 01:09:14.341860 systemd[1]: session-4.scope: Deactivated successfully.
Oct 9 01:09:14.343035 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit.
Oct 9 01:09:14.349800 systemd[1]: Started sshd@4-10.0.0.157:22-10.0.0.1:54124.service - OpenSSH per-connection server daemon (10.0.0.1:54124).
Oct 9 01:09:14.350688 systemd-logind[1437]: Removed session 4.
Oct 9 01:09:14.384885 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 54124 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:14.386114 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:14.389972 systemd-logind[1437]: New session 5 of user core. Oct 9 01:09:14.403608 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 01:09:14.466266 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 01:09:14.466598 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:09:14.482196 sudo[1592]: pam_unix(sudo:session): session closed for user root Oct 9 01:09:14.483854 sshd[1589]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:14.502008 systemd[1]: sshd@4-10.0.0.157:22-10.0.0.1:54124.service: Deactivated successfully. Oct 9 01:09:14.504697 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 01:09:14.505781 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit. Oct 9 01:09:14.507042 systemd[1]: Started sshd@5-10.0.0.157:22-10.0.0.1:54132.service - OpenSSH per-connection server daemon (10.0.0.1:54132). Oct 9 01:09:14.507751 systemd-logind[1437]: Removed session 5. Oct 9 01:09:14.546676 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 54132 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:14.547998 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:14.551999 systemd-logind[1437]: New session 6 of user core. Oct 9 01:09:14.563632 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 9 01:09:14.615767 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 01:09:14.616042 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:09:14.618847 sudo[1601]: pam_unix(sudo:session): session closed for user root Oct 9 01:09:14.623113 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 9 01:09:14.623638 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:09:14.644782 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 01:09:14.666682 augenrules[1623]: No rules Oct 9 01:09:14.667864 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 01:09:14.669503 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:09:14.670499 sudo[1600]: pam_unix(sudo:session): session closed for user root Oct 9 01:09:14.672186 sshd[1597]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:14.682749 systemd[1]: sshd@5-10.0.0.157:22-10.0.0.1:54132.service: Deactivated successfully. Oct 9 01:09:14.684205 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 01:09:14.686519 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit. Oct 9 01:09:14.687605 systemd[1]: Started sshd@6-10.0.0.157:22-10.0.0.1:54142.service - OpenSSH per-connection server daemon (10.0.0.1:54142). Oct 9 01:09:14.688397 systemd-logind[1437]: Removed session 6. Oct 9 01:09:14.725875 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 54142 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:14.727344 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:14.730755 systemd-logind[1437]: New session 7 of user core. Oct 9 01:09:14.746599 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 9 01:09:14.797111 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 01:09:14.797375 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:09:15.112702 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 01:09:15.112794 (dockerd)[1654]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 01:09:15.364300 dockerd[1654]: time="2024-10-09T01:09:15.364158408Z" level=info msg="Starting up" Oct 9 01:09:15.515188 dockerd[1654]: time="2024-10-09T01:09:15.514944573Z" level=info msg="Loading containers: start." Oct 9 01:09:15.654544 kernel: Initializing XFRM netlink socket Oct 9 01:09:15.719307 systemd-networkd[1384]: docker0: Link UP Oct 9 01:09:15.749628 dockerd[1654]: time="2024-10-09T01:09:15.749577780Z" level=info msg="Loading containers: done." Oct 9 01:09:15.762049 dockerd[1654]: time="2024-10-09T01:09:15.761985297Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 01:09:15.762170 dockerd[1654]: time="2024-10-09T01:09:15.762086446Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Oct 9 01:09:15.762195 dockerd[1654]: time="2024-10-09T01:09:15.762177802Z" level=info msg="Daemon has completed initialization" Oct 9 01:09:15.789650 dockerd[1654]: time="2024-10-09T01:09:15.789547594Z" level=info msg="API listen on /run/docker.sock" Oct 9 01:09:15.789713 systemd[1]: Started docker.service - Docker Application Container Engine. 
Oct 9 01:09:16.405866 containerd[1453]: time="2024-10-09T01:09:16.405827324Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\"" Oct 9 01:09:16.497774 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1336450983-merged.mount: Deactivated successfully. Oct 9 01:09:17.030453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397534644.mount: Deactivated successfully. Oct 9 01:09:18.113105 containerd[1453]: time="2024-10-09T01:09:18.113061185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:18.114869 containerd[1453]: time="2024-10-09T01:09:18.114785761Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=29945964" Oct 9 01:09:18.115632 containerd[1453]: time="2024-10-09T01:09:18.115596553Z" level=info msg="ImageCreate event name:\"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:18.119169 containerd[1453]: time="2024-10-09T01:09:18.119127834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:18.119971 containerd[1453]: time="2024-10-09T01:09:18.119931189Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"29942762\" in 1.71406438s" Oct 9 01:09:18.119971 containerd[1453]: time="2024-10-09T01:09:18.119966444Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference 
\"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\"" Oct 9 01:09:18.141529 containerd[1453]: time="2024-10-09T01:09:18.141495546Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\"" Oct 9 01:09:19.215999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 01:09:19.227633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:09:19.318338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:09:19.322684 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:09:19.369273 kubelet[1929]: E1009 01:09:19.369184 1929 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:09:19.373192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:09:19.373333 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 9 01:09:19.846709 containerd[1453]: time="2024-10-09T01:09:19.846658100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:19.847353 containerd[1453]: time="2024-10-09T01:09:19.847315160Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=26885775" Oct 9 01:09:19.848010 containerd[1453]: time="2024-10-09T01:09:19.847974752Z" level=info msg="ImageCreate event name:\"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:19.851252 containerd[1453]: time="2024-10-09T01:09:19.851215743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:19.852094 containerd[1453]: time="2024-10-09T01:09:19.852008113Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"28373587\" in 1.710473058s" Oct 9 01:09:19.852094 containerd[1453]: time="2024-10-09T01:09:19.852042985Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\"" Oct 9 01:09:19.869986 containerd[1453]: time="2024-10-09T01:09:19.869946778Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\"" Oct 9 01:09:29.480513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Oct 9 01:09:29.489647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:09:29.579631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:09:29.583153 (kubelet)[1958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:09:29.616808 kubelet[1958]: E1009 01:09:29.616757 1958 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:09:29.619266 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:09:29.619418 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:09:39.730488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 9 01:09:39.739701 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:09:39.825883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:09:39.829315 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:09:39.864895 kubelet[1976]: E1009 01:09:39.864851 1976 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:09:39.867545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:09:39.867684 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 9 01:09:42.144042 containerd[1453]: time="2024-10-09T01:09:42.143984290Z" level=error msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" failed" error="failed to pull and unpack image \"registry.k8s.io/kube-scheduler:v1.30.5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry.k8s.io/v2/kube-scheduler/blobs/sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f: 500 Internal Server Error" Oct 9 01:09:42.144500 containerd[1453]: time="2024-10-09T01:09:42.144052787Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=5258" Oct 9 01:09:42.162925 containerd[1453]: time="2024-10-09T01:09:42.162897284Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\"" Oct 9 01:09:43.156759 containerd[1453]: time="2024-10-09T01:09:43.156714515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:43.157668 containerd[1453]: time="2024-10-09T01:09:43.157411354Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=16149499" Oct 9 01:09:43.158329 containerd[1453]: time="2024-10-09T01:09:43.158296276Z" level=info msg="ImageCreate event name:\"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:43.161155 containerd[1453]: time="2024-10-09T01:09:43.161113997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:43.162390 containerd[1453]: time="2024-10-09T01:09:43.162271261Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\", repo tag 
\"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"17642104\" in 999.341089ms" Oct 9 01:09:43.162390 containerd[1453]: time="2024-10-09T01:09:43.162301348Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\"" Oct 9 01:09:43.180196 containerd[1453]: time="2024-10-09T01:09:43.180164296Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\"" Oct 9 01:09:44.157640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2646377965.mount: Deactivated successfully. Oct 9 01:09:44.341542 containerd[1453]: time="2024-10-09T01:09:44.341493262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:44.342477 containerd[1453]: time="2024-10-09T01:09:44.342401297Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=25648343" Oct 9 01:09:44.343242 containerd[1453]: time="2024-10-09T01:09:44.343191748Z" level=info msg="ImageCreate event name:\"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:44.345075 containerd[1453]: time="2024-10-09T01:09:44.345020942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:44.345734 containerd[1453]: time="2024-10-09T01:09:44.345595866Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"25647360\" in 1.165397843s" Oct 9 01:09:44.345734 containerd[1453]: time="2024-10-09T01:09:44.345629633Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\"" Oct 9 01:09:44.363353 containerd[1453]: time="2024-10-09T01:09:44.363327966Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 01:09:44.969260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2325434925.mount: Deactivated successfully. Oct 9 01:09:45.563981 containerd[1453]: time="2024-10-09T01:09:45.563933639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:45.564404 containerd[1453]: time="2024-10-09T01:09:45.564336761Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Oct 9 01:09:45.565159 containerd[1453]: time="2024-10-09T01:09:45.565129243Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:45.568017 containerd[1453]: time="2024-10-09T01:09:45.567985225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:45.570176 containerd[1453]: time="2024-10-09T01:09:45.570122461Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.206760768s" Oct 9 01:09:45.570176 containerd[1453]: time="2024-10-09T01:09:45.570169071Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Oct 9 01:09:45.588273 containerd[1453]: time="2024-10-09T01:09:45.588206950Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 01:09:46.164301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount666522722.mount: Deactivated successfully. Oct 9 01:09:46.168466 containerd[1453]: time="2024-10-09T01:09:46.168161023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:46.169233 containerd[1453]: time="2024-10-09T01:09:46.169181500Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Oct 9 01:09:46.170233 containerd[1453]: time="2024-10-09T01:09:46.170187414Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:46.172358 containerd[1453]: time="2024-10-09T01:09:46.172302463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:46.173120 containerd[1453]: time="2024-10-09T01:09:46.172970632Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 584.733836ms" Oct 9 01:09:46.173120 
containerd[1453]: time="2024-10-09T01:09:46.173000958Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Oct 9 01:09:46.191289 containerd[1453]: time="2024-10-09T01:09:46.191253285Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Oct 9 01:09:46.755229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount588876187.mount: Deactivated successfully. Oct 9 01:09:48.503899 containerd[1453]: time="2024-10-09T01:09:48.503834680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:48.504850 containerd[1453]: time="2024-10-09T01:09:48.504756600Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Oct 9 01:09:48.505503 containerd[1453]: time="2024-10-09T01:09:48.505469004Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:48.509119 containerd[1453]: time="2024-10-09T01:09:48.509081031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:48.510516 containerd[1453]: time="2024-10-09T01:09:48.510477554Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.319185302s" Oct 9 01:09:48.510553 containerd[1453]: time="2024-10-09T01:09:48.510514000Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference 
\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Oct 9 01:09:49.980289 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 9 01:09:49.990719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:09:50.076128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:09:50.079531 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:09:50.117388 kubelet[2205]: E1009 01:09:50.117352 2205 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:09:50.120011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:09:50.120160 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:09:52.178551 update_engine[1441]: I20241009 01:09:52.178474 1441 update_attempter.cc:509] Updating boot flags... Oct 9 01:09:52.269472 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2221) Oct 9 01:09:53.768212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:09:53.786653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:09:53.799973 systemd[1]: Reloading requested from client PID 2233 ('systemctl') (unit session-7.scope)... Oct 9 01:09:53.799999 systemd[1]: Reloading... Oct 9 01:09:53.863469 zram_generator::config[2269]: No configuration found. 
Oct 9 01:09:53.973994 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:09:54.024904 systemd[1]: Reloading finished in 224 ms. Oct 9 01:09:54.063683 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:09:54.066323 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 01:09:54.066530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:09:54.067877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:09:54.160755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:09:54.165264 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:09:54.200871 kubelet[2319]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:09:54.200871 kubelet[2319]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:09:54.200871 kubelet[2319]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 01:09:54.201150 kubelet[2319]: I1009 01:09:54.201003 2319 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:09:55.316503 kubelet[2319]: I1009 01:09:55.316467 2319 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 9 01:09:55.316503 kubelet[2319]: I1009 01:09:55.316492 2319 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:09:55.316796 kubelet[2319]: I1009 01:09:55.316647 2319 server.go:927] "Client rotation is on, will bootstrap in background" Oct 9 01:09:55.369187 kubelet[2319]: I1009 01:09:55.368573 2319 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:09:55.369894 kubelet[2319]: E1009 01:09:55.369874 2319 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.157:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:55.376634 kubelet[2319]: I1009 01:09:55.376608 2319 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 01:09:55.377843 kubelet[2319]: I1009 01:09:55.377809 2319 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:09:55.378076 kubelet[2319]: I1009 01:09:55.377925 2319 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:09:55.378248 kubelet[2319]: I1009 01:09:55.378236 2319 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:09:55.378296 
kubelet[2319]: I1009 01:09:55.378289 2319 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:09:55.378606 kubelet[2319]: I1009 01:09:55.378591 2319 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:09:55.380046 kubelet[2319]: I1009 01:09:55.380030 2319 kubelet.go:400] "Attempting to sync node with API server" Oct 9 01:09:55.380150 kubelet[2319]: W1009 01:09:55.380102 2319 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:55.380186 kubelet[2319]: E1009 01:09:55.380158 2319 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:55.380186 kubelet[2319]: I1009 01:09:55.380131 2319 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:09:55.380401 kubelet[2319]: I1009 01:09:55.380391 2319 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:09:55.380498 kubelet[2319]: I1009 01:09:55.380484 2319 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:09:55.380906 kubelet[2319]: W1009 01:09:55.380869 2319 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.157:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:55.380964 kubelet[2319]: E1009 01:09:55.380914 2319 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.157:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 
01:09:55.381370 kubelet[2319]: I1009 01:09:55.381348 2319 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:09:55.381779 kubelet[2319]: I1009 01:09:55.381762 2319 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:09:55.381940 kubelet[2319]: W1009 01:09:55.381926 2319 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 01:09:55.382799 kubelet[2319]: I1009 01:09:55.382674 2319 server.go:1264] "Started kubelet" Oct 9 01:09:55.384179 kubelet[2319]: I1009 01:09:55.384135 2319 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:09:55.384476 kubelet[2319]: I1009 01:09:55.384442 2319 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:09:55.384668 kubelet[2319]: I1009 01:09:55.384240 2319 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:09:55.385875 kubelet[2319]: I1009 01:09:55.385837 2319 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:09:55.386316 kubelet[2319]: E1009 01:09:55.385794 2319 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.157:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.157:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fca38fe2b5e0b0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 01:09:55.382649008 +0000 UTC m=+1.213595130,LastTimestamp:2024-10-09 01:09:55.382649008 +0000 UTC m=+1.213595130,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 01:09:55.386473 kubelet[2319]: E1009 01:09:55.386313 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.157:6443: connect: connection refused" interval="200ms" Oct 9 01:09:55.386664 kubelet[2319]: I1009 01:09:55.386642 2319 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 01:09:55.386726 kubelet[2319]: I1009 01:09:55.386712 2319 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 9 01:09:55.386726 kubelet[2319]: I1009 01:09:55.386721 2319 server.go:455] "Adding debug handlers to kubelet server" Oct 9 01:09:55.386939 kubelet[2319]: I1009 01:09:55.386818 2319 reconciler.go:26] "Reconciler: start to sync state" Oct 9 01:09:55.387113 kubelet[2319]: W1009 01:09:55.387035 2319 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:55.387113 kubelet[2319]: E1009 01:09:55.387078 2319 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:55.387711 kubelet[2319]: E1009 01:09:55.387688 2319 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:09:55.387843 kubelet[2319]: I1009 01:09:55.387794 2319 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:09:55.388050 kubelet[2319]: I1009 01:09:55.387925 2319 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:09:55.388939 kubelet[2319]: I1009 01:09:55.388887 2319 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:09:55.404315 kubelet[2319]: I1009 01:09:55.404266 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:09:55.405237 kubelet[2319]: I1009 01:09:55.405206 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 01:09:55.405237 kubelet[2319]: I1009 01:09:55.405233 2319 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:09:55.405341 kubelet[2319]: I1009 01:09:55.405247 2319 kubelet.go:2337] "Starting kubelet main sync loop" Oct 9 01:09:55.405341 kubelet[2319]: E1009 01:09:55.405280 2319 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:09:55.405977 kubelet[2319]: W1009 01:09:55.405870 2319 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:55.405977 kubelet[2319]: E1009 01:09:55.405922 2319 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: 
connect: connection refused Oct 9 01:09:55.406537 kubelet[2319]: I1009 01:09:55.406518 2319 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:09:55.406715 kubelet[2319]: I1009 01:09:55.406652 2319 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:09:55.406715 kubelet[2319]: I1009 01:09:55.406671 2319 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:09:55.469681 kubelet[2319]: I1009 01:09:55.469649 2319 policy_none.go:49] "None policy: Start" Oct 9 01:09:55.470706 kubelet[2319]: I1009 01:09:55.470615 2319 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:09:55.470706 kubelet[2319]: I1009 01:09:55.470642 2319 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:09:55.475860 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 01:09:55.486945 kubelet[2319]: I1009 01:09:55.486908 2319 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:09:55.487262 kubelet[2319]: E1009 01:09:55.487231 2319 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.157:6443/api/v1/nodes\": dial tcp 10.0.0.157:6443: connect: connection refused" node="localhost" Oct 9 01:09:55.488223 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 01:09:55.490777 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 9 01:09:55.501195 kubelet[2319]: I1009 01:09:55.501167 2319 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:09:55.501989 kubelet[2319]: I1009 01:09:55.501936 2319 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 01:09:55.502680 kubelet[2319]: I1009 01:09:55.502622 2319 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:09:55.502996 kubelet[2319]: E1009 01:09:55.502963 2319 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 9 01:09:55.506098 kubelet[2319]: I1009 01:09:55.506072 2319 topology_manager.go:215] "Topology Admit Handler" podUID="9e9ffee0953b6c46740544502efcc2eb" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 01:09:55.506897 kubelet[2319]: I1009 01:09:55.506867 2319 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 01:09:55.507643 kubelet[2319]: I1009 01:09:55.507498 2319 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 01:09:55.513251 systemd[1]: Created slice kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice - libcontainer container kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice. Oct 9 01:09:55.523579 systemd[1]: Created slice kubepods-burstable-pod9e9ffee0953b6c46740544502efcc2eb.slice - libcontainer container kubepods-burstable-pod9e9ffee0953b6c46740544502efcc2eb.slice. Oct 9 01:09:55.536391 systemd[1]: Created slice kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice - libcontainer container kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice. 
Oct 9 01:09:55.587058 kubelet[2319]: I1009 01:09:55.586936 2319 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e9ffee0953b6c46740544502efcc2eb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e9ffee0953b6c46740544502efcc2eb\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:09:55.587058 kubelet[2319]: E1009 01:09:55.586947 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.157:6443: connect: connection refused" interval="400ms" Oct 9 01:09:55.687443 kubelet[2319]: I1009 01:09:55.687201 2319 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e9ffee0953b6c46740544502efcc2eb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9e9ffee0953b6c46740544502efcc2eb\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:09:55.687443 kubelet[2319]: I1009 01:09:55.687261 2319 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:09:55.687443 kubelet[2319]: I1009 01:09:55.687284 2319 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:09:55.687443 kubelet[2319]: I1009 01:09:55.687305 2319 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:09:55.687443 kubelet[2319]: I1009 01:09:55.687322 2319 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:09:55.687635 kubelet[2319]: I1009 01:09:55.687340 2319 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost" Oct 9 01:09:55.687635 kubelet[2319]: I1009 01:09:55.687378 2319 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e9ffee0953b6c46740544502efcc2eb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e9ffee0953b6c46740544502efcc2eb\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:09:55.687635 kubelet[2319]: I1009 01:09:55.687393 2319 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:09:55.688002 kubelet[2319]: I1009 01:09:55.687959 2319 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:09:55.688217 kubelet[2319]: E1009 01:09:55.688194 2319 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.157:6443/api/v1/nodes\": dial tcp 10.0.0.157:6443: connect: connection refused" node="localhost" Oct 9 01:09:55.823110 kubelet[2319]: E1009 01:09:55.823064 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:55.823753 containerd[1453]: time="2024-10-09T01:09:55.823670888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,}" Oct 9 01:09:55.834935 kubelet[2319]: E1009 01:09:55.834900 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:55.836370 containerd[1453]: time="2024-10-09T01:09:55.836342755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9e9ffee0953b6c46740544502efcc2eb,Namespace:kube-system,Attempt:0,}" Oct 9 01:09:55.838616 kubelet[2319]: E1009 01:09:55.838541 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:55.838860 containerd[1453]: time="2024-10-09T01:09:55.838834259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,}" Oct 9 01:09:55.988343 kubelet[2319]: E1009 01:09:55.988302 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": 
dial tcp 10.0.0.157:6443: connect: connection refused" interval="800ms" Oct 9 01:09:56.089568 kubelet[2319]: I1009 01:09:56.089506 2319 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:09:56.089816 kubelet[2319]: E1009 01:09:56.089792 2319 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.157:6443/api/v1/nodes\": dial tcp 10.0.0.157:6443: connect: connection refused" node="localhost" Oct 9 01:09:56.244662 kubelet[2319]: W1009 01:09:56.244589 2319 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.157:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:56.244758 kubelet[2319]: E1009 01:09:56.244683 2319 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.157:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:56.413918 kubelet[2319]: W1009 01:09:56.413811 2319 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:56.413918 kubelet[2319]: E1009 01:09:56.413855 2319 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:56.472298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2667716738.mount: Deactivated successfully. 
Oct 9 01:09:56.478086 containerd[1453]: time="2024-10-09T01:09:56.478029561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:09:56.478865 containerd[1453]: time="2024-10-09T01:09:56.478842135Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:09:56.479560 containerd[1453]: time="2024-10-09T01:09:56.479541977Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:09:56.479838 containerd[1453]: time="2024-10-09T01:09:56.479764483Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:09:56.480376 containerd[1453]: time="2024-10-09T01:09:56.480333109Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:09:56.480863 containerd[1453]: time="2024-10-09T01:09:56.480834047Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 9 01:09:56.481660 containerd[1453]: time="2024-10-09T01:09:56.481620699Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:09:56.485157 containerd[1453]: time="2024-10-09T01:09:56.485096784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:09:56.486100 
containerd[1453]: time="2024-10-09T01:09:56.486044094Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 647.047535ms" Oct 9 01:09:56.488883 containerd[1453]: time="2024-10-09T01:09:56.488793374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 652.394052ms" Oct 9 01:09:56.489526 containerd[1453]: time="2024-10-09T01:09:56.489496016Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 665.733077ms" Oct 9 01:09:56.547398 kubelet[2319]: W1009 01:09:56.545939 2319 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:56.547398 kubelet[2319]: E1009 01:09:56.545984 2319 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:56.634363 kubelet[2319]: E1009 01:09:56.634218 2319 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.157:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.157:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fca38fe2b5e0b0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 01:09:55.382649008 +0000 UTC m=+1.213595130,LastTimestamp:2024-10-09 01:09:55.382649008 +0000 UTC m=+1.213595130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 01:09:56.676371 containerd[1453]: time="2024-10-09T01:09:56.676181312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:09:56.676371 containerd[1453]: time="2024-10-09T01:09:56.676250880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:09:56.676564 containerd[1453]: time="2024-10-09T01:09:56.676264282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:56.676564 containerd[1453]: time="2024-10-09T01:09:56.676346491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:56.677990 containerd[1453]: time="2024-10-09T01:09:56.677920275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:09:56.678074 containerd[1453]: time="2024-10-09T01:09:56.677985402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:09:56.678074 containerd[1453]: time="2024-10-09T01:09:56.678000924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:56.678170 containerd[1453]: time="2024-10-09T01:09:56.678077773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:56.681929 containerd[1453]: time="2024-10-09T01:09:56.681113046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:09:56.681929 containerd[1453]: time="2024-10-09T01:09:56.681199416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:09:56.681929 containerd[1453]: time="2024-10-09T01:09:56.681209778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:56.681929 containerd[1453]: time="2024-10-09T01:09:56.681285026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:56.694611 systemd[1]: Started cri-containerd-923233b1694547814a24559cc3bb93c9324fc4e2f908ad3f7a45f695ead81985.scope - libcontainer container 923233b1694547814a24559cc3bb93c9324fc4e2f908ad3f7a45f695ead81985. Oct 9 01:09:56.698127 systemd[1]: Started cri-containerd-78925601b722e7b19d33d5afb3db8ea9b11cc2f0c691c17428dd234c009fb7ba.scope - libcontainer container 78925601b722e7b19d33d5afb3db8ea9b11cc2f0c691c17428dd234c009fb7ba. Oct 9 01:09:56.699947 systemd[1]: Started cri-containerd-f20c8d61c7c0e12dbb9ce539739f03f7ed85360d72ec6ebe99c4ff7a9e8f758d.scope - libcontainer container f20c8d61c7c0e12dbb9ce539739f03f7ed85360d72ec6ebe99c4ff7a9e8f758d. 
Oct 9 01:09:56.726380 containerd[1453]: time="2024-10-09T01:09:56.726344553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"78925601b722e7b19d33d5afb3db8ea9b11cc2f0c691c17428dd234c009fb7ba\"" Oct 9 01:09:56.728083 kubelet[2319]: E1009 01:09:56.728046 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:56.729934 containerd[1453]: time="2024-10-09T01:09:56.729906048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9e9ffee0953b6c46740544502efcc2eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"923233b1694547814a24559cc3bb93c9324fc4e2f908ad3f7a45f695ead81985\"" Oct 9 01:09:56.730707 kubelet[2319]: E1009 01:09:56.730679 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:56.732185 containerd[1453]: time="2024-10-09T01:09:56.732149909Z" level=info msg="CreateContainer within sandbox \"78925601b722e7b19d33d5afb3db8ea9b11cc2f0c691c17428dd234c009fb7ba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 01:09:56.733621 containerd[1453]: time="2024-10-09T01:09:56.733565954Z" level=info msg="CreateContainer within sandbox \"923233b1694547814a24559cc3bb93c9324fc4e2f908ad3f7a45f695ead81985\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 01:09:56.735157 containerd[1453]: time="2024-10-09T01:09:56.735111014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"f20c8d61c7c0e12dbb9ce539739f03f7ed85360d72ec6ebe99c4ff7a9e8f758d\"" Oct 9 01:09:56.735795 
kubelet[2319]: E1009 01:09:56.735778 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:56.737778 containerd[1453]: time="2024-10-09T01:09:56.737747521Z" level=info msg="CreateContainer within sandbox \"f20c8d61c7c0e12dbb9ce539739f03f7ed85360d72ec6ebe99c4ff7a9e8f758d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 01:09:56.750127 containerd[1453]: time="2024-10-09T01:09:56.750085157Z" level=info msg="CreateContainer within sandbox \"923233b1694547814a24559cc3bb93c9324fc4e2f908ad3f7a45f695ead81985\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9bba5bd3e0049a7a5fa7aaca5d9105ddd3d591470b3c36916febc82b66c10add\"" Oct 9 01:09:56.750961 containerd[1453]: time="2024-10-09T01:09:56.750931496Z" level=info msg="StartContainer for \"9bba5bd3e0049a7a5fa7aaca5d9105ddd3d591470b3c36916febc82b66c10add\"" Oct 9 01:09:56.753647 containerd[1453]: time="2024-10-09T01:09:56.753601246Z" level=info msg="CreateContainer within sandbox \"f20c8d61c7c0e12dbb9ce539739f03f7ed85360d72ec6ebe99c4ff7a9e8f758d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c530804edb6b75f1e21be70ec06767eee6287ba6bfb5a6db63583a5c1da93b8b\"" Oct 9 01:09:56.754877 containerd[1453]: time="2024-10-09T01:09:56.754773983Z" level=info msg="StartContainer for \"c530804edb6b75f1e21be70ec06767eee6287ba6bfb5a6db63583a5c1da93b8b\"" Oct 9 01:09:56.755811 containerd[1453]: time="2024-10-09T01:09:56.755717693Z" level=info msg="CreateContainer within sandbox \"78925601b722e7b19d33d5afb3db8ea9b11cc2f0c691c17428dd234c009fb7ba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bc9b3d0fe68df41d6c15aa1c1afec8435d3150da5bf9acb1ef62a5067eafd439\"" Oct 9 01:09:56.756084 containerd[1453]: time="2024-10-09T01:09:56.756062053Z" level=info msg="StartContainer for 
\"bc9b3d0fe68df41d6c15aa1c1afec8435d3150da5bf9acb1ef62a5067eafd439\"" Oct 9 01:09:56.764590 kubelet[2319]: W1009 01:09:56.762820 2319 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:56.764590 kubelet[2319]: E1009 01:09:56.762886 2319 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Oct 9 01:09:56.778602 systemd[1]: Started cri-containerd-9bba5bd3e0049a7a5fa7aaca5d9105ddd3d591470b3c36916febc82b66c10add.scope - libcontainer container 9bba5bd3e0049a7a5fa7aaca5d9105ddd3d591470b3c36916febc82b66c10add. Oct 9 01:09:56.782562 systemd[1]: Started cri-containerd-bc9b3d0fe68df41d6c15aa1c1afec8435d3150da5bf9acb1ef62a5067eafd439.scope - libcontainer container bc9b3d0fe68df41d6c15aa1c1afec8435d3150da5bf9acb1ef62a5067eafd439. Oct 9 01:09:56.783464 systemd[1]: Started cri-containerd-c530804edb6b75f1e21be70ec06767eee6287ba6bfb5a6db63583a5c1da93b8b.scope - libcontainer container c530804edb6b75f1e21be70ec06767eee6287ba6bfb5a6db63583a5c1da93b8b. 
Oct 9 01:09:56.789387 kubelet[2319]: E1009 01:09:56.789337 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.157:6443: connect: connection refused" interval="1.6s" Oct 9 01:09:56.815250 containerd[1453]: time="2024-10-09T01:09:56.815190217Z" level=info msg="StartContainer for \"9bba5bd3e0049a7a5fa7aaca5d9105ddd3d591470b3c36916febc82b66c10add\" returns successfully" Oct 9 01:09:56.816399 containerd[1453]: time="2024-10-09T01:09:56.816349872Z" level=info msg="StartContainer for \"c530804edb6b75f1e21be70ec06767eee6287ba6bfb5a6db63583a5c1da93b8b\" returns successfully" Oct 9 01:09:56.828297 containerd[1453]: time="2024-10-09T01:09:56.827884495Z" level=info msg="StartContainer for \"bc9b3d0fe68df41d6c15aa1c1afec8435d3150da5bf9acb1ef62a5067eafd439\" returns successfully" Oct 9 01:09:56.892746 kubelet[2319]: I1009 01:09:56.892708 2319 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:09:56.900036 kubelet[2319]: E1009 01:09:56.897808 2319 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.157:6443/api/v1/nodes\": dial tcp 10.0.0.157:6443: connect: connection refused" node="localhost" Oct 9 01:09:57.414328 kubelet[2319]: E1009 01:09:57.414271 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:57.415565 kubelet[2319]: E1009 01:09:57.415493 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:57.415749 kubelet[2319]: E1009 01:09:57.415707 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:58.417671 kubelet[2319]: E1009 01:09:58.417606 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:58.500632 kubelet[2319]: I1009 01:09:58.500602 2319 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:09:58.914635 kubelet[2319]: E1009 01:09:58.914594 2319 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 01:09:59.055754 kubelet[2319]: I1009 01:09:59.055602 2319 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 01:09:59.259182 kubelet[2319]: E1009 01:09:59.259148 2319 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 9 01:09:59.259435 kubelet[2319]: E1009 01:09:59.259408 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:59.382432 kubelet[2319]: I1009 01:09:59.382402 2319 apiserver.go:52] "Watching apiserver" Oct 9 01:09:59.387509 kubelet[2319]: I1009 01:09:59.387486 2319 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 9 01:10:01.115395 systemd[1]: Reloading requested from client PID 2598 ('systemctl') (unit session-7.scope)... Oct 9 01:10:01.115411 systemd[1]: Reloading... Oct 9 01:10:01.169482 zram_generator::config[2638]: No configuration found. Oct 9 01:10:01.247807 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 9 01:10:01.309392 systemd[1]: Reloading finished in 193 ms.
Oct 9 01:10:01.340301 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:10:01.358334 systemd[1]: kubelet.service: Deactivated successfully.
Oct 9 01:10:01.358590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:10:01.358634 systemd[1]: kubelet.service: Consumed 1.581s CPU time, 118.1M memory peak, 0B memory swap peak.
Oct 9 01:10:01.367834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:10:01.454406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:10:01.458250 (kubelet)[2679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 01:10:01.492613 kubelet[2679]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 01:10:01.492613 kubelet[2679]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 01:10:01.492613 kubelet[2679]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 01:10:01.492916 kubelet[2679]: I1009 01:10:01.492660 2679 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 01:10:01.497542 kubelet[2679]: I1009 01:10:01.497506 2679 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Oct 9 01:10:01.497542 kubelet[2679]: I1009 01:10:01.497538 2679 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 01:10:01.497748 kubelet[2679]: I1009 01:10:01.497710 2679 server.go:927] "Client rotation is on, will bootstrap in background"
Oct 9 01:10:01.498990 kubelet[2679]: I1009 01:10:01.498962 2679 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 9 01:10:01.500146 kubelet[2679]: I1009 01:10:01.500062 2679 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 01:10:01.506115 kubelet[2679]: I1009 01:10:01.506096 2679 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 01:10:01.506767 kubelet[2679]: I1009 01:10:01.506369 2679 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 01:10:01.506767 kubelet[2679]: I1009 01:10:01.506390 2679 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 01:10:01.506767 kubelet[2679]: I1009 01:10:01.506595 2679 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 01:10:01.506767 kubelet[2679]: I1009 01:10:01.506604 2679 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 01:10:01.506767 kubelet[2679]: I1009 01:10:01.506634 2679 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 01:10:01.506981 kubelet[2679]: I1009 01:10:01.506967 2679 kubelet.go:400] "Attempting to sync node with API server"
Oct 9 01:10:01.507400 kubelet[2679]: I1009 01:10:01.507382 2679 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 01:10:01.507522 kubelet[2679]: I1009 01:10:01.507509 2679 kubelet.go:312] "Adding apiserver pod source"
Oct 9 01:10:01.507604 kubelet[2679]: I1009 01:10:01.507591 2679 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 01:10:01.508834 kubelet[2679]: I1009 01:10:01.508812 2679 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1"
Oct 9 01:10:01.509245 kubelet[2679]: I1009 01:10:01.509231 2679 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 01:10:01.511723 kubelet[2679]: I1009 01:10:01.511429 2679 server.go:1264] "Started kubelet"
Oct 9 01:10:01.512042 kubelet[2679]: I1009 01:10:01.511988 2679 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 01:10:01.512176 kubelet[2679]: I1009 01:10:01.512110 2679 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 01:10:01.512396 kubelet[2679]: I1009 01:10:01.512379 2679 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 01:10:01.513077 kubelet[2679]: I1009 01:10:01.513041 2679 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 01:10:01.514207 kubelet[2679]: I1009 01:10:01.513156 2679 server.go:455] "Adding debug handlers to kubelet server"
Oct 9 01:10:01.523915 kubelet[2679]: I1009 01:10:01.523896 2679 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 01:10:01.524398 kubelet[2679]: I1009 01:10:01.524364 2679 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Oct 9 01:10:01.524675 kubelet[2679]: I1009 01:10:01.524649 2679 reconciler.go:26] "Reconciler: start to sync state"
Oct 9 01:10:01.529068 kubelet[2679]: I1009 01:10:01.529025 2679 factory.go:221] Registration of the systemd container factory successfully
Oct 9 01:10:01.529146 kubelet[2679]: I1009 01:10:01.529122 2679 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 01:10:01.533693 kubelet[2679]: I1009 01:10:01.533651 2679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 01:10:01.535295 kubelet[2679]: I1009 01:10:01.534748 2679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 01:10:01.535295 kubelet[2679]: E1009 01:10:01.534757 2679 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 01:10:01.535295 kubelet[2679]: I1009 01:10:01.534784 2679 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 01:10:01.535295 kubelet[2679]: I1009 01:10:01.534799 2679 kubelet.go:2337] "Starting kubelet main sync loop"
Oct 9 01:10:01.535295 kubelet[2679]: E1009 01:10:01.534835 2679 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 01:10:01.535295 kubelet[2679]: I1009 01:10:01.535202 2679 factory.go:221] Registration of the containerd container factory successfully
Oct 9 01:10:01.563727 kubelet[2679]: I1009 01:10:01.563708 2679 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 01:10:01.564093 kubelet[2679]: I1009 01:10:01.563821 2679 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 01:10:01.564093 kubelet[2679]: I1009 01:10:01.563844 2679 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 01:10:01.564093 kubelet[2679]: I1009 01:10:01.563971 2679 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 9 01:10:01.564093 kubelet[2679]: I1009 01:10:01.563984 2679 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 9 01:10:01.564093 kubelet[2679]: I1009 01:10:01.563998 2679 policy_none.go:49] "None policy: Start"
Oct 9 01:10:01.564525 kubelet[2679]: I1009 01:10:01.564510 2679 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 01:10:01.564585 kubelet[2679]: I1009 01:10:01.564531 2679 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 01:10:01.564663 kubelet[2679]: I1009 01:10:01.564647 2679 state_mem.go:75] "Updated machine memory state"
Oct 9 01:10:01.571224 kubelet[2679]: I1009 01:10:01.571204 2679 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 01:10:01.571646 kubelet[2679]: I1009 01:10:01.571344 2679 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 9 01:10:01.571646 kubelet[2679]: I1009 01:10:01.571441 2679 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 01:10:01.628229 kubelet[2679]: I1009 01:10:01.627931 2679 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 9 01:10:01.634273 kubelet[2679]: I1009 01:10:01.634244 2679 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Oct 9 01:10:01.634345 kubelet[2679]: I1009 01:10:01.634314 2679 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Oct 9 01:10:01.635344 kubelet[2679]: I1009 01:10:01.635307 2679 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost"
Oct 9 01:10:01.635413 kubelet[2679]: I1009 01:10:01.635399 2679 topology_manager.go:215] "Topology Admit Handler" podUID="9e9ffee0953b6c46740544502efcc2eb" podNamespace="kube-system" podName="kube-apiserver-localhost"
Oct 9 01:10:01.635549 kubelet[2679]: I1009 01:10:01.635436 2679 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Oct 9 01:10:01.725051 kubelet[2679]: I1009 01:10:01.725028 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 01:10:01.725137 kubelet[2679]: I1009 01:10:01.725058 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost"
Oct 9 01:10:01.725137 kubelet[2679]: I1009 01:10:01.725077 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e9ffee0953b6c46740544502efcc2eb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e9ffee0953b6c46740544502efcc2eb\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 01:10:01.725137 kubelet[2679]: I1009 01:10:01.725092 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 01:10:01.725137 kubelet[2679]: I1009 01:10:01.725107 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 01:10:01.725137 kubelet[2679]: I1009 01:10:01.725123 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 01:10:01.725256 kubelet[2679]: I1009 01:10:01.725138 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 01:10:01.725256 kubelet[2679]: I1009 01:10:01.725152 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e9ffee0953b6c46740544502efcc2eb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e9ffee0953b6c46740544502efcc2eb\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 01:10:01.725256 kubelet[2679]: I1009 01:10:01.725168 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e9ffee0953b6c46740544502efcc2eb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9e9ffee0953b6c46740544502efcc2eb\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 01:10:01.941422 kubelet[2679]: E1009 01:10:01.941294 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:01.941605 kubelet[2679]: E1009 01:10:01.941437 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:01.941605 kubelet[2679]: E1009 01:10:01.941491 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:02.119334 sudo[2714]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Oct 9 01:10:02.119643 sudo[2714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Oct 9 01:10:02.508596 kubelet[2679]: I1009 01:10:02.508557 2679 apiserver.go:52] "Watching apiserver"
Oct 9 01:10:02.524998 kubelet[2679]: I1009 01:10:02.524972 2679 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Oct 9 01:10:02.548373 kubelet[2679]: E1009 01:10:02.548322 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:02.551605 kubelet[2679]: E1009 01:10:02.551570 2679 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 9 01:10:02.552296 kubelet[2679]: E1009 01:10:02.551922 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:02.552401 kubelet[2679]: E1009 01:10:02.552374 2679 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 9 01:10:02.552770 kubelet[2679]: E1009 01:10:02.552752 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:02.559970 sudo[2714]: pam_unix(sudo:session): session closed for user root
Oct 9 01:10:02.569388 kubelet[2679]: I1009 01:10:02.568751 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5687363410000001 podStartE2EDuration="1.568736341s" podCreationTimestamp="2024-10-09 01:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:10:02.567892145 +0000 UTC m=+1.105490439" watchObservedRunningTime="2024-10-09 01:10:02.568736341 +0000 UTC m=+1.106334675"
Oct 9 01:10:02.576406 kubelet[2679]: I1009 01:10:02.576289 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5762776939999998 podStartE2EDuration="1.576277694s" podCreationTimestamp="2024-10-09 01:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:10:02.576111919 +0000 UTC m=+1.113710213" watchObservedRunningTime="2024-10-09 01:10:02.576277694 +0000 UTC m=+1.113876028"
Oct 9 01:10:02.586160 kubelet[2679]: I1009 01:10:02.586055 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5860438449999998 podStartE2EDuration="1.586043845s" podCreationTimestamp="2024-10-09 01:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:10:02.585965638 +0000 UTC m=+1.123563972" watchObservedRunningTime="2024-10-09 01:10:02.586043845 +0000 UTC m=+1.123642179"
Oct 9 01:10:03.548512 kubelet[2679]: E1009 01:10:03.548207 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:03.548876 kubelet[2679]: E1009 01:10:03.548651 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:04.034608 sudo[1634]: pam_unix(sudo:session): session closed for user root
Oct 9 01:10:04.036363 sshd[1631]: pam_unix(sshd:session): session closed for user core
Oct 9 01:10:04.039892 systemd[1]: sshd@6-10.0.0.157:22-10.0.0.1:54142.service: Deactivated successfully.
Oct 9 01:10:04.041609 systemd[1]: session-7.scope: Deactivated successfully.
Oct 9 01:10:04.041821 systemd[1]: session-7.scope: Consumed 7.583s CPU time, 193.4M memory peak, 0B memory swap peak.
Oct 9 01:10:04.042205 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit.
Oct 9 01:10:04.043204 systemd-logind[1437]: Removed session 7.
Oct 9 01:10:04.549684 kubelet[2679]: E1009 01:10:04.549590 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:04.551035 kubelet[2679]: E1009 01:10:04.550555 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:07.941574 kubelet[2679]: E1009 01:10:07.941536 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:08.554938 kubelet[2679]: E1009 01:10:08.554868 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:13.114756 kubelet[2679]: E1009 01:10:13.114723 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:13.316329 kubelet[2679]: E1009 01:10:13.316283 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:16.916109 kubelet[2679]: I1009 01:10:16.916065 2679 topology_manager.go:215] "Topology Admit Handler" podUID="2511161e-2d0d-437c-abb2-84839e1a035e" podNamespace="kube-system" podName="kube-proxy-4k7fv"
Oct 9 01:10:16.927417 systemd[1]: Created slice kubepods-besteffort-pod2511161e_2d0d_437c_abb2_84839e1a035e.slice - libcontainer container kubepods-besteffort-pod2511161e_2d0d_437c_abb2_84839e1a035e.slice.
Oct 9 01:10:16.930057 kubelet[2679]: I1009 01:10:16.930004 2679 topology_manager.go:215] "Topology Admit Handler" podUID="6fb07bd6-6479-486b-92b5-6f919787ac9d" podNamespace="kube-system" podName="cilium-xvtvh"
Oct 9 01:10:16.944251 systemd[1]: Created slice kubepods-burstable-pod6fb07bd6_6479_486b_92b5_6f919787ac9d.slice - libcontainer container kubepods-burstable-pod6fb07bd6_6479_486b_92b5_6f919787ac9d.slice.
Oct 9 01:10:16.990855 kubelet[2679]: I1009 01:10:16.990810 2679 topology_manager.go:215] "Topology Admit Handler" podUID="1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858" podNamespace="kube-system" podName="cilium-operator-599987898-mxkff"
Oct 9 01:10:16.998592 systemd[1]: Created slice kubepods-besteffort-pod1cc1c9eb_0c02_4ec5_a76b_37dffb9c4858.slice - libcontainer container kubepods-besteffort-pod1cc1c9eb_0c02_4ec5_a76b_37dffb9c4858.slice.
Oct 9 01:10:17.016605 kubelet[2679]: I1009 01:10:17.016561 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-cni-path\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.016605 kubelet[2679]: I1009 01:10:17.016607 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-etc-cni-netd\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.016763 kubelet[2679]: I1009 01:10:17.016624 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-bpf-maps\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.016763 kubelet[2679]: I1009 01:10:17.016640 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-cilium-cgroup\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.016763 kubelet[2679]: I1009 01:10:17.016654 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-lib-modules\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.016763 kubelet[2679]: I1009 01:10:17.016669 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-host-proc-sys-kernel\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.016763 kubelet[2679]: I1009 01:10:17.016684 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhblp\" (UniqueName: \"kubernetes.io/projected/6fb07bd6-6479-486b-92b5-6f919787ac9d-kube-api-access-bhblp\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.016763 kubelet[2679]: I1009 01:10:17.016703 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-cilium-run\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.016886 kubelet[2679]: I1009 01:10:17.016720 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-hostproc\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.016886 kubelet[2679]: I1009 01:10:17.016737 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2511161e-2d0d-437c-abb2-84839e1a035e-xtables-lock\") pod \"kube-proxy-4k7fv\" (UID: \"2511161e-2d0d-437c-abb2-84839e1a035e\") " pod="kube-system/kube-proxy-4k7fv"
Oct 9 01:10:17.016886 kubelet[2679]: I1009 01:10:17.016753 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djnzv\" (UniqueName: \"kubernetes.io/projected/2511161e-2d0d-437c-abb2-84839e1a035e-kube-api-access-djnzv\") pod \"kube-proxy-4k7fv\" (UID: \"2511161e-2d0d-437c-abb2-84839e1a035e\") " pod="kube-system/kube-proxy-4k7fv"
Oct 9 01:10:17.016886 kubelet[2679]: I1009 01:10:17.016769 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-host-proc-sys-net\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.016886 kubelet[2679]: I1009 01:10:17.016786 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2511161e-2d0d-437c-abb2-84839e1a035e-lib-modules\") pod \"kube-proxy-4k7fv\" (UID: \"2511161e-2d0d-437c-abb2-84839e1a035e\") " pod="kube-system/kube-proxy-4k7fv"
Oct 9 01:10:17.016987 kubelet[2679]: I1009 01:10:17.016800 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-xtables-lock\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.016987 kubelet[2679]: I1009 01:10:17.016813 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6fb07bd6-6479-486b-92b5-6f919787ac9d-hubble-tls\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.016987 kubelet[2679]: I1009 01:10:17.016830 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2511161e-2d0d-437c-abb2-84839e1a035e-kube-proxy\") pod \"kube-proxy-4k7fv\" (UID: \"2511161e-2d0d-437c-abb2-84839e1a035e\") " pod="kube-system/kube-proxy-4k7fv"
Oct 9 01:10:17.016987 kubelet[2679]: I1009 01:10:17.016845 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6fb07bd6-6479-486b-92b5-6f919787ac9d-clustermesh-secrets\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.016987 kubelet[2679]: I1009 01:10:17.016861 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fb07bd6-6479-486b-92b5-6f919787ac9d-cilium-config-path\") pod \"cilium-xvtvh\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " pod="kube-system/cilium-xvtvh"
Oct 9 01:10:17.029418 kubelet[2679]: I1009 01:10:17.029220 2679 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 9 01:10:17.036015 containerd[1453]: time="2024-10-09T01:10:17.035833642Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 9 01:10:17.036719 kubelet[2679]: I1009 01:10:17.036104 2679 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 9 01:10:17.118065 kubelet[2679]: I1009 01:10:17.117837 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j99nn\" (UniqueName: \"kubernetes.io/projected/1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858-kube-api-access-j99nn\") pod \"cilium-operator-599987898-mxkff\" (UID: \"1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858\") " pod="kube-system/cilium-operator-599987898-mxkff"
Oct 9 01:10:17.118065 kubelet[2679]: I1009 01:10:17.117973 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858-cilium-config-path\") pod \"cilium-operator-599987898-mxkff\" (UID: \"1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858\") " pod="kube-system/cilium-operator-599987898-mxkff"
Oct 9 01:10:17.238599 kubelet[2679]: E1009 01:10:17.238571 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:17.243194 containerd[1453]: time="2024-10-09T01:10:17.243151267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4k7fv,Uid:2511161e-2d0d-437c-abb2-84839e1a035e,Namespace:kube-system,Attempt:0,}"
Oct 9 01:10:17.250582 kubelet[2679]: E1009 01:10:17.250558 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:17.252117 containerd[1453]: time="2024-10-09T01:10:17.252067505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xvtvh,Uid:6fb07bd6-6479-486b-92b5-6f919787ac9d,Namespace:kube-system,Attempt:0,}"
Oct 9 01:10:17.263663 containerd[1453]: time="2024-10-09T01:10:17.263569401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:10:17.263663 containerd[1453]: time="2024-10-09T01:10:17.263618204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:10:17.263833 containerd[1453]: time="2024-10-09T01:10:17.263646685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:10:17.263949 containerd[1453]: time="2024-10-09T01:10:17.263734210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:10:17.271647 containerd[1453]: time="2024-10-09T01:10:17.271356898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:10:17.272251 containerd[1453]: time="2024-10-09T01:10:17.271959611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:10:17.272251 containerd[1453]: time="2024-10-09T01:10:17.271985852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:10:17.272251 containerd[1453]: time="2024-10-09T01:10:17.272066256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:10:17.280630 systemd[1]: Started cri-containerd-a78eb1568ffd4268f20249d97b81632121c1b190811c06152a9636d1b54fd025.scope - libcontainer container a78eb1568ffd4268f20249d97b81632121c1b190811c06152a9636d1b54fd025.
Oct 9 01:10:17.285101 systemd[1]: Started cri-containerd-c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e.scope - libcontainer container c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e. Oct 9 01:10:17.302147 kubelet[2679]: E1009 01:10:17.301900 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:17.303489 containerd[1453]: time="2024-10-09T01:10:17.303440857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mxkff,Uid:1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858,Namespace:kube-system,Attempt:0,}" Oct 9 01:10:17.303988 containerd[1453]: time="2024-10-09T01:10:17.303961645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4k7fv,Uid:2511161e-2d0d-437c-abb2-84839e1a035e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a78eb1568ffd4268f20249d97b81632121c1b190811c06152a9636d1b54fd025\"" Oct 9 01:10:17.307648 kubelet[2679]: E1009 01:10:17.307623 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:17.308948 containerd[1453]: time="2024-10-09T01:10:17.308806504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xvtvh,Uid:6fb07bd6-6479-486b-92b5-6f919787ac9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\"" Oct 9 01:10:17.309399 kubelet[2679]: E1009 01:10:17.309362 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:17.311936 containerd[1453]: time="2024-10-09T01:10:17.311901150Z" level=info msg="CreateContainer within sandbox \"a78eb1568ffd4268f20249d97b81632121c1b190811c06152a9636d1b54fd025\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 01:10:17.312888 containerd[1453]: time="2024-10-09T01:10:17.312798518Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 9 01:10:17.325840 containerd[1453]: time="2024-10-09T01:10:17.325725891Z" level=info msg="CreateContainer within sandbox \"a78eb1568ffd4268f20249d97b81632121c1b190811c06152a9636d1b54fd025\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bee4948e31a29022119e22d544dc4f5f0b611c119dd91c47d02a577342ddc3d2\"" Oct 9 01:10:17.327684 containerd[1453]: time="2024-10-09T01:10:17.327406541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:10:17.327684 containerd[1453]: time="2024-10-09T01:10:17.327521667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:10:17.327684 containerd[1453]: time="2024-10-09T01:10:17.327539508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:17.328724 containerd[1453]: time="2024-10-09T01:10:17.328648647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:17.333320 containerd[1453]: time="2024-10-09T01:10:17.333294576Z" level=info msg="StartContainer for \"bee4948e31a29022119e22d544dc4f5f0b611c119dd91c47d02a577342ddc3d2\"" Oct 9 01:10:17.345617 systemd[1]: Started cri-containerd-b2fba8ad0bac54255cf62f9d3792417f6ce35f6d284e6ac9ec4417cd99ac0f16.scope - libcontainer container b2fba8ad0bac54255cf62f9d3792417f6ce35f6d284e6ac9ec4417cd99ac0f16.
Oct 9 01:10:17.355498 systemd[1]: Started cri-containerd-bee4948e31a29022119e22d544dc4f5f0b611c119dd91c47d02a577342ddc3d2.scope - libcontainer container bee4948e31a29022119e22d544dc4f5f0b611c119dd91c47d02a577342ddc3d2. Oct 9 01:10:17.380411 containerd[1453]: time="2024-10-09T01:10:17.380377378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mxkff,Uid:1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2fba8ad0bac54255cf62f9d3792417f6ce35f6d284e6ac9ec4417cd99ac0f16\"" Oct 9 01:10:17.381226 kubelet[2679]: E1009 01:10:17.381204 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:17.386263 containerd[1453]: time="2024-10-09T01:10:17.386231772Z" level=info msg="StartContainer for \"bee4948e31a29022119e22d544dc4f5f0b611c119dd91c47d02a577342ddc3d2\" returns successfully" Oct 9 01:10:17.571293 kubelet[2679]: E1009 01:10:17.571083 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:17.579271 kubelet[2679]: I1009 01:10:17.579146 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4k7fv" podStartSLOduration=1.579132506 podStartE2EDuration="1.579132506s" podCreationTimestamp="2024-10-09 01:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:10:17.57902802 +0000 UTC m=+16.116626354" watchObservedRunningTime="2024-10-09 01:10:17.579132506 +0000 UTC m=+16.116730840" Oct 9 01:10:24.358786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176600461.mount: Deactivated successfully. 
Oct 9 01:10:25.559231 containerd[1453]: time="2024-10-09T01:10:25.559166385Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651470" Oct 9 01:10:25.561571 containerd[1453]: time="2024-10-09T01:10:25.561485489Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.248650689s" Oct 9 01:10:25.561571 containerd[1453]: time="2024-10-09T01:10:25.561515850Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 9 01:10:25.564016 containerd[1453]: time="2024-10-09T01:10:25.563969960Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 9 01:10:25.565614 containerd[1453]: time="2024-10-09T01:10:25.565501229Z" level=info msg="CreateContainer within sandbox \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 9 01:10:25.567473 containerd[1453]: time="2024-10-09T01:10:25.567322030Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:10:25.568355 containerd[1453]: time="2024-10-09T01:10:25.568306994Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:10:25.603646 containerd[1453]: time="2024-10-09T01:10:25.603604494Z" level=info msg="CreateContainer within sandbox \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08\"" Oct 9 01:10:25.604037 containerd[1453]: time="2024-10-09T01:10:25.604008472Z" level=info msg="StartContainer for \"7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08\"" Oct 9 01:10:25.631604 systemd[1]: Started cri-containerd-7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08.scope - libcontainer container 7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08. Oct 9 01:10:25.650139 containerd[1453]: time="2024-10-09T01:10:25.650104896Z" level=info msg="StartContainer for \"7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08\" returns successfully" Oct 9 01:10:25.693164 systemd[1]: cri-containerd-7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08.scope: Deactivated successfully.
Oct 9 01:10:25.872237 containerd[1453]: time="2024-10-09T01:10:25.848081597Z" level=info msg="shim disconnected" id=7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08 namespace=k8s.io Oct 9 01:10:25.872237 containerd[1453]: time="2024-10-09T01:10:25.871878822Z" level=warning msg="cleaning up after shim disconnected" id=7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08 namespace=k8s.io Oct 9 01:10:25.872237 containerd[1453]: time="2024-10-09T01:10:25.871899143Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:10:26.594992 kubelet[2679]: E1009 01:10:26.594599 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:26.598546 containerd[1453]: time="2024-10-09T01:10:26.598381573Z" level=info msg="CreateContainer within sandbox \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 9 01:10:26.601744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08-rootfs.mount: Deactivated successfully. Oct 9 01:10:26.618462 containerd[1453]: time="2024-10-09T01:10:26.618415774Z" level=info msg="CreateContainer within sandbox \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad\"" Oct 9 01:10:26.620792 containerd[1453]: time="2024-10-09T01:10:26.620294576Z" level=info msg="StartContainer for \"1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad\"" Oct 9 01:10:26.649625 systemd[1]: Started cri-containerd-1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad.scope - libcontainer container 1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad. 
Oct 9 01:10:26.671486 containerd[1453]: time="2024-10-09T01:10:26.671377621Z" level=info msg="StartContainer for \"1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad\" returns successfully" Oct 9 01:10:26.691925 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 01:10:26.692126 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:10:26.692185 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:10:26.700786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:10:26.700967 systemd[1]: cri-containerd-1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad.scope: Deactivated successfully. Oct 9 01:10:26.731125 containerd[1453]: time="2024-10-09T01:10:26.731069444Z" level=info msg="shim disconnected" id=1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad namespace=k8s.io Oct 9 01:10:26.731370 containerd[1453]: time="2024-10-09T01:10:26.731324775Z" level=warning msg="cleaning up after shim disconnected" id=1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad namespace=k8s.io Oct 9 01:10:26.731370 containerd[1453]: time="2024-10-09T01:10:26.731339776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:10:26.738535 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 9 01:10:26.891661 containerd[1453]: time="2024-10-09T01:10:26.891554097Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:10:26.892441 containerd[1453]: time="2024-10-09T01:10:26.892394173Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138290" Oct 9 01:10:26.893306 containerd[1453]: time="2024-10-09T01:10:26.893261572Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:10:26.894582 containerd[1453]: time="2024-10-09T01:10:26.894548788Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.330546787s" Oct 9 01:10:26.894636 containerd[1453]: time="2024-10-09T01:10:26.894582470Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 9 01:10:26.896911 containerd[1453]: time="2024-10-09T01:10:26.896823728Z" level=info msg="CreateContainer within sandbox \"b2fba8ad0bac54255cf62f9d3792417f6ce35f6d284e6ac9ec4417cd99ac0f16\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Oct 9 01:10:26.908964 containerd[1453]: time="2024-10-09T01:10:26.908919220Z" level=info msg="CreateContainer within sandbox \"b2fba8ad0bac54255cf62f9d3792417f6ce35f6d284e6ac9ec4417cd99ac0f16\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\"" Oct 9 01:10:26.909427 containerd[1453]: time="2024-10-09T01:10:26.909396241Z" level=info msg="StartContainer for \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\"" Oct 9 01:10:26.933627 systemd[1]: Started cri-containerd-008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579.scope - libcontainer container 008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579. Oct 9 01:10:26.954907 containerd[1453]: time="2024-10-09T01:10:26.954825237Z" level=info msg="StartContainer for \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\" returns successfully" Oct 9 01:10:27.604176 kubelet[2679]: E1009 01:10:27.604125 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:27.606208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad-rootfs.mount: Deactivated successfully.
Oct 9 01:10:27.608982 containerd[1453]: time="2024-10-09T01:10:27.608935797Z" level=info msg="CreateContainer within sandbox \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 9 01:10:27.609892 kubelet[2679]: E1009 01:10:27.609778 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:27.652213 kubelet[2679]: I1009 01:10:27.652150 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-mxkff" podStartSLOduration=2.139494182 podStartE2EDuration="11.652132822s" podCreationTimestamp="2024-10-09 01:10:16 +0000 UTC" firstStartedPulling="2024-10-09 01:10:17.382702983 +0000 UTC m=+15.920301277" lastFinishedPulling="2024-10-09 01:10:26.895341583 +0000 UTC m=+25.432939917" observedRunningTime="2024-10-09 01:10:27.651728965 +0000 UTC m=+26.189327299" watchObservedRunningTime="2024-10-09 01:10:27.652132822 +0000 UTC m=+26.189731156" Oct 9 01:10:27.656219 containerd[1453]: time="2024-10-09T01:10:27.656170756Z" level=info msg="CreateContainer within sandbox \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028\"" Oct 9 01:10:27.658585 containerd[1453]: time="2024-10-09T01:10:27.658480136Z" level=info msg="StartContainer for \"1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028\"" Oct 9 01:10:27.697613 systemd[1]: Started cri-containerd-1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028.scope - libcontainer container 1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028. 
Oct 9 01:10:27.742517 containerd[1453]: time="2024-10-09T01:10:27.742476803Z" level=info msg="StartContainer for \"1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028\" returns successfully" Oct 9 01:10:27.760546 systemd[1]: cri-containerd-1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028.scope: Deactivated successfully. Oct 9 01:10:27.797390 containerd[1453]: time="2024-10-09T01:10:27.797330372Z" level=info msg="shim disconnected" id=1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028 namespace=k8s.io Oct 9 01:10:27.797390 containerd[1453]: time="2024-10-09T01:10:27.797387094Z" level=warning msg="cleaning up after shim disconnected" id=1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028 namespace=k8s.io Oct 9 01:10:27.797390 containerd[1453]: time="2024-10-09T01:10:27.797396375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:10:28.606534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028-rootfs.mount: Deactivated successfully. 
Oct 9 01:10:28.610470 kubelet[2679]: E1009 01:10:28.610413 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:28.610770 kubelet[2679]: E1009 01:10:28.610534 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:28.613834 containerd[1453]: time="2024-10-09T01:10:28.613789028Z" level=info msg="CreateContainer within sandbox \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 9 01:10:28.641559 containerd[1453]: time="2024-10-09T01:10:28.641510365Z" level=info msg="CreateContainer within sandbox \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6\"" Oct 9 01:10:28.642056 containerd[1453]: time="2024-10-09T01:10:28.642000706Z" level=info msg="StartContainer for \"3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6\"" Oct 9 01:10:28.666625 systemd[1]: Started cri-containerd-3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6.scope - libcontainer container 3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6. Oct 9 01:10:28.687214 systemd[1]: cri-containerd-3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6.scope: Deactivated successfully. 
Oct 9 01:10:28.694370 containerd[1453]: time="2024-10-09T01:10:28.694309927Z" level=info msg="StartContainer for \"3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6\" returns successfully" Oct 9 01:10:28.713233 containerd[1453]: time="2024-10-09T01:10:28.713153607Z" level=info msg="shim disconnected" id=3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6 namespace=k8s.io Oct 9 01:10:28.713233 containerd[1453]: time="2024-10-09T01:10:28.713205729Z" level=warning msg="cleaning up after shim disconnected" id=3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6 namespace=k8s.io Oct 9 01:10:28.713233 containerd[1453]: time="2024-10-09T01:10:28.713214610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:10:29.189353 systemd[1]: Started sshd@7-10.0.0.157:22-10.0.0.1:40388.service - OpenSSH per-connection server daemon (10.0.0.1:40388). Oct 9 01:10:29.233174 sshd[3365]: Accepted publickey for core from 10.0.0.1 port 40388 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:10:29.234340 sshd[3365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:10:29.240245 systemd-logind[1437]: New session 8 of user core. Oct 9 01:10:29.248590 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 01:10:29.370215 sshd[3365]: pam_unix(sshd:session): session closed for user core Oct 9 01:10:29.373327 systemd[1]: sshd@7-10.0.0.157:22-10.0.0.1:40388.service: Deactivated successfully. Oct 9 01:10:29.374958 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 01:10:29.375602 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit. Oct 9 01:10:29.376366 systemd-logind[1437]: Removed session 8. Oct 9 01:10:29.599549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6-rootfs.mount: Deactivated successfully. 
Oct 9 01:10:29.614890 kubelet[2679]: E1009 01:10:29.614859 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:29.622220 containerd[1453]: time="2024-10-09T01:10:29.622179951Z" level=info msg="CreateContainer within sandbox \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 9 01:10:29.646677 containerd[1453]: time="2024-10-09T01:10:29.646626653Z" level=info msg="CreateContainer within sandbox \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\"" Oct 9 01:10:29.646920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355835109.mount: Deactivated successfully. Oct 9 01:10:29.647779 containerd[1453]: time="2024-10-09T01:10:29.647745540Z" level=info msg="StartContainer for \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\"" Oct 9 01:10:29.672593 systemd[1]: Started cri-containerd-a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f.scope - libcontainer container a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f. 
Oct 9 01:10:29.695617 containerd[1453]: time="2024-10-09T01:10:29.695577099Z" level=info msg="StartContainer for \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\" returns successfully" Oct 9 01:10:29.817472 kubelet[2679]: I1009 01:10:29.817431 2679 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 01:10:29.838641 kubelet[2679]: I1009 01:10:29.838295 2679 topology_manager.go:215] "Topology Admit Handler" podUID="83c28971-d92b-4545-af0e-135388f65f50" podNamespace="kube-system" podName="coredns-7db6d8ff4d-z6r5j" Oct 9 01:10:29.840537 kubelet[2679]: I1009 01:10:29.839960 2679 topology_manager.go:215] "Topology Admit Handler" podUID="e68a1461-d580-476c-898a-cbc297e61592" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5cqvg" Oct 9 01:10:29.851333 systemd[1]: Created slice kubepods-burstable-pod83c28971_d92b_4545_af0e_135388f65f50.slice - libcontainer container kubepods-burstable-pod83c28971_d92b_4545_af0e_135388f65f50.slice. Oct 9 01:10:29.859567 systemd[1]: Created slice kubepods-burstable-pode68a1461_d580_476c_898a_cbc297e61592.slice - libcontainer container kubepods-burstable-pode68a1461_d580_476c_898a_cbc297e61592.slice. 
Oct 9 01:10:30.010075 kubelet[2679]: I1009 01:10:30.009952 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpqj5\" (UniqueName: \"kubernetes.io/projected/83c28971-d92b-4545-af0e-135388f65f50-kube-api-access-xpqj5\") pod \"coredns-7db6d8ff4d-z6r5j\" (UID: \"83c28971-d92b-4545-af0e-135388f65f50\") " pod="kube-system/coredns-7db6d8ff4d-z6r5j" Oct 9 01:10:30.010075 kubelet[2679]: I1009 01:10:30.009994 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfc4t\" (UniqueName: \"kubernetes.io/projected/e68a1461-d580-476c-898a-cbc297e61592-kube-api-access-qfc4t\") pod \"coredns-7db6d8ff4d-5cqvg\" (UID: \"e68a1461-d580-476c-898a-cbc297e61592\") " pod="kube-system/coredns-7db6d8ff4d-5cqvg" Oct 9 01:10:30.010075 kubelet[2679]: I1009 01:10:30.010018 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e68a1461-d580-476c-898a-cbc297e61592-config-volume\") pod \"coredns-7db6d8ff4d-5cqvg\" (UID: \"e68a1461-d580-476c-898a-cbc297e61592\") " pod="kube-system/coredns-7db6d8ff4d-5cqvg" Oct 9 01:10:30.010075 kubelet[2679]: I1009 01:10:30.010040 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83c28971-d92b-4545-af0e-135388f65f50-config-volume\") pod \"coredns-7db6d8ff4d-z6r5j\" (UID: \"83c28971-d92b-4545-af0e-135388f65f50\") " pod="kube-system/coredns-7db6d8ff4d-z6r5j" Oct 9 01:10:30.157525 kubelet[2679]: E1009 01:10:30.157021 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:10:30.157742 containerd[1453]: time="2024-10-09T01:10:30.157604190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z6r5j,Uid:83c28971-d92b-4545-af0e-135388f65f50,Namespace:kube-system,Attempt:0,}" Oct 9 01:10:30.162788 kubelet[2679]: E1009 01:10:30.162757 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:30.163493 containerd[1453]: time="2024-10-09T01:10:30.163131377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5cqvg,Uid:e68a1461-d580-476c-898a-cbc297e61592,Namespace:kube-system,Attempt:0,}" Oct 9 01:10:30.619812 kubelet[2679]: E1009 01:10:30.619768 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:30.634388 kubelet[2679]: I1009 01:10:30.634326 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xvtvh" podStartSLOduration=6.381774335 podStartE2EDuration="14.634311053s" podCreationTimestamp="2024-10-09 01:10:16 +0000 UTC" firstStartedPulling="2024-10-09 01:10:17.311110708 +0000 UTC m=+15.848709042" lastFinishedPulling="2024-10-09 01:10:25.563647426 +0000 UTC m=+24.101245760" observedRunningTime="2024-10-09 01:10:30.633190127 +0000 UTC m=+29.170788541" watchObservedRunningTime="2024-10-09 01:10:30.634311053 +0000 UTC m=+29.171909387" Oct 9 01:10:31.627216 kubelet[2679]: E1009 01:10:31.627182 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:31.849350 systemd-networkd[1384]: cilium_host: Link UP Oct 9 01:10:31.849671 systemd-networkd[1384]: cilium_net: Link UP Oct 9 01:10:31.849674 systemd-networkd[1384]: cilium_net: Gained carrier Oct 9 01:10:31.849924 systemd-networkd[1384]: cilium_host: Gained carrier
Oct 9 01:10:31.850494 systemd-networkd[1384]: cilium_net: Gained IPv6LL Oct 9 01:10:31.851649 systemd-networkd[1384]: cilium_host: Gained IPv6LL Oct 9 01:10:31.933362 systemd-networkd[1384]: cilium_vxlan: Link UP Oct 9 01:10:31.933368 systemd-networkd[1384]: cilium_vxlan: Gained carrier Oct 9 01:10:32.254477 kernel: NET: Registered PF_ALG protocol family Oct 9 01:10:32.630048 kubelet[2679]: E1009 01:10:32.629786 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:32.796259 systemd-networkd[1384]: lxc_health: Link UP Oct 9 01:10:32.807424 systemd-networkd[1384]: lxc_health: Gained carrier Oct 9 01:10:33.249321 systemd-networkd[1384]: lxcb4f3a2231ae3: Link UP Oct 9 01:10:33.252487 kernel: eth0: renamed from tmp96ce1 Oct 9 01:10:33.260525 systemd-networkd[1384]: lxcb03c7e37b944: Link UP Oct 9 01:10:33.270526 systemd-networkd[1384]: lxcb4f3a2231ae3: Gained carrier Oct 9 01:10:33.271503 kernel: eth0: renamed from tmp7fbf4 Oct 9 01:10:33.282587 systemd-networkd[1384]: lxcb03c7e37b944: Gained carrier Oct 9 01:10:33.357662 systemd-networkd[1384]: cilium_vxlan: Gained IPv6LL Oct 9 01:10:33.632911 kubelet[2679]: E1009 01:10:33.632790 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:34.191587 systemd-networkd[1384]: lxc_health: Gained IPv6LL Oct 9 01:10:34.386033 systemd[1]: Started sshd@8-10.0.0.157:22-10.0.0.1:37118.service - OpenSSH per-connection server daemon (10.0.0.1:37118). Oct 9 01:10:34.425850 sshd[3912]: Accepted publickey for core from 10.0.0.1 port 37118 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:10:34.427084 sshd[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:10:34.434550 systemd-logind[1437]: New session 9 of user core.
Oct 9 01:10:34.444609 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 01:10:34.458412 systemd-networkd[1384]: lxcb03c7e37b944: Gained IPv6LL Oct 9 01:10:34.572707 sshd[3912]: pam_unix(sshd:session): session closed for user core Oct 9 01:10:34.575815 systemd[1]: sshd@8-10.0.0.157:22-10.0.0.1:37118.service: Deactivated successfully. Oct 9 01:10:34.579472 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 01:10:34.580745 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit. Oct 9 01:10:34.581992 systemd-logind[1437]: Removed session 9. Oct 9 01:10:34.632681 kubelet[2679]: E1009 01:10:34.632644 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:35.212712 systemd-networkd[1384]: lxcb4f3a2231ae3: Gained IPv6LL Oct 9 01:10:36.736841 containerd[1453]: time="2024-10-09T01:10:36.736720356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:10:36.736841 containerd[1453]: time="2024-10-09T01:10:36.736773838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:10:36.737493 containerd[1453]: time="2024-10-09T01:10:36.737327659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:36.737565 containerd[1453]: time="2024-10-09T01:10:36.737498385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:10:36.738444 containerd[1453]: time="2024-10-09T01:10:36.738333497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:10:36.738444 containerd[1453]: time="2024-10-09T01:10:36.738414660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:10:36.738675 containerd[1453]: time="2024-10-09T01:10:36.738567906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:36.738751 containerd[1453]: time="2024-10-09T01:10:36.738694911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:36.760670 systemd[1]: Started cri-containerd-7fbf479ce9627544f6d4ebbb21bf268ff4ab242f1c00d4a910d1849bef20fe91.scope - libcontainer container 7fbf479ce9627544f6d4ebbb21bf268ff4ab242f1c00d4a910d1849bef20fe91. Oct 9 01:10:36.761780 systemd[1]: Started cri-containerd-96ce13344d6a823a673248bac9428ad2b38261f9d4313ba24ff2d34cb80148ce.scope - libcontainer container 96ce13344d6a823a673248bac9428ad2b38261f9d4313ba24ff2d34cb80148ce.
Oct 9 01:10:36.770109 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:10:36.772207 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:10:36.787050 containerd[1453]: time="2024-10-09T01:10:36.787016434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z6r5j,Uid:83c28971-d92b-4545-af0e-135388f65f50,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fbf479ce9627544f6d4ebbb21bf268ff4ab242f1c00d4a910d1849bef20fe91\"" Oct 9 01:10:36.788168 kubelet[2679]: E1009 01:10:36.787821 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:36.789722 containerd[1453]: time="2024-10-09T01:10:36.789630773Z" level=info msg="CreateContainer within sandbox \"7fbf479ce9627544f6d4ebbb21bf268ff4ab242f1c00d4a910d1849bef20fe91\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:10:36.791590 containerd[1453]: time="2024-10-09T01:10:36.791421322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5cqvg,Uid:e68a1461-d580-476c-898a-cbc297e61592,Namespace:kube-system,Attempt:0,} returns sandbox id \"96ce13344d6a823a673248bac9428ad2b38261f9d4313ba24ff2d34cb80148ce\"" Oct 9 01:10:36.792468 kubelet[2679]: E1009 01:10:36.792337 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:36.794482 containerd[1453]: time="2024-10-09T01:10:36.794444797Z" level=info msg="CreateContainer within sandbox \"96ce13344d6a823a673248bac9428ad2b38261f9d4313ba24ff2d34cb80148ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:10:36.806072 containerd[1453]: time="2024-10-09T01:10:36.806039679Z" 
level=info msg="CreateContainer within sandbox \"7fbf479ce9627544f6d4ebbb21bf268ff4ab242f1c00d4a910d1849bef20fe91\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cad3b0862e989448499a4f40f3f46adb9a4a30acc3327db38bbb14f76889cf3\"" Oct 9 01:10:36.807263 containerd[1453]: time="2024-10-09T01:10:36.806605141Z" level=info msg="StartContainer for \"7cad3b0862e989448499a4f40f3f46adb9a4a30acc3327db38bbb14f76889cf3\"" Oct 9 01:10:36.808475 containerd[1453]: time="2024-10-09T01:10:36.808412170Z" level=info msg="CreateContainer within sandbox \"96ce13344d6a823a673248bac9428ad2b38261f9d4313ba24ff2d34cb80148ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8290d9efbc5ccfe40348284b96c8bde6eb51ecfb6a7033002fa35d7cb4d5e82d\"" Oct 9 01:10:36.808816 containerd[1453]: time="2024-10-09T01:10:36.808796984Z" level=info msg="StartContainer for \"8290d9efbc5ccfe40348284b96c8bde6eb51ecfb6a7033002fa35d7cb4d5e82d\"" Oct 9 01:10:36.828735 systemd[1]: Started cri-containerd-7cad3b0862e989448499a4f40f3f46adb9a4a30acc3327db38bbb14f76889cf3.scope - libcontainer container 7cad3b0862e989448499a4f40f3f46adb9a4a30acc3327db38bbb14f76889cf3. Oct 9 01:10:36.831749 systemd[1]: Started cri-containerd-8290d9efbc5ccfe40348284b96c8bde6eb51ecfb6a7033002fa35d7cb4d5e82d.scope - libcontainer container 8290d9efbc5ccfe40348284b96c8bde6eb51ecfb6a7033002fa35d7cb4d5e82d. 
Oct 9 01:10:36.857582 containerd[1453]: time="2024-10-09T01:10:36.857465360Z" level=info msg="StartContainer for \"7cad3b0862e989448499a4f40f3f46adb9a4a30acc3327db38bbb14f76889cf3\" returns successfully" Oct 9 01:10:36.857582 containerd[1453]: time="2024-10-09T01:10:36.857538963Z" level=info msg="StartContainer for \"8290d9efbc5ccfe40348284b96c8bde6eb51ecfb6a7033002fa35d7cb4d5e82d\" returns successfully" Oct 9 01:10:37.638642 kubelet[2679]: E1009 01:10:37.638614 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:37.641725 kubelet[2679]: E1009 01:10:37.641656 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:37.654240 kubelet[2679]: I1009 01:10:37.654167 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5cqvg" podStartSLOduration=21.653580137 podStartE2EDuration="21.653580137s" podCreationTimestamp="2024-10-09 01:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:10:37.653207603 +0000 UTC m=+36.190805937" watchObservedRunningTime="2024-10-09 01:10:37.653580137 +0000 UTC m=+36.191178471" Oct 9 01:10:37.665052 kubelet[2679]: I1009 01:10:37.664992 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-z6r5j" podStartSLOduration=21.664959566 podStartE2EDuration="21.664959566s" podCreationTimestamp="2024-10-09 01:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:10:37.663399388 +0000 UTC m=+36.200997722" watchObservedRunningTime="2024-10-09 01:10:37.664959566 +0000 UTC 
m=+36.202557940" Oct 9 01:10:38.644068 kubelet[2679]: E1009 01:10:38.643563 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:38.644068 kubelet[2679]: E1009 01:10:38.643933 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:39.587900 systemd[1]: Started sshd@9-10.0.0.157:22-10.0.0.1:37132.service - OpenSSH per-connection server daemon (10.0.0.1:37132). Oct 9 01:10:39.628065 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 37132 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:10:39.629400 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:10:39.632746 systemd-logind[1437]: New session 10 of user core. Oct 9 01:10:39.645315 kubelet[2679]: E1009 01:10:39.645283 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:39.646642 kubelet[2679]: E1009 01:10:39.645377 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:10:39.646606 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 01:10:39.764546 sshd[4104]: pam_unix(sshd:session): session closed for user core Oct 9 01:10:39.767916 systemd[1]: sshd@9-10.0.0.157:22-10.0.0.1:37132.service: Deactivated successfully. Oct 9 01:10:39.769587 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 01:10:39.770104 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit. Oct 9 01:10:39.770908 systemd-logind[1437]: Removed session 10. 
Oct 9 01:10:44.778935 systemd[1]: Started sshd@10-10.0.0.157:22-10.0.0.1:42600.service - OpenSSH per-connection server daemon (10.0.0.1:42600). Oct 9 01:10:44.819289 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 42600 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:10:44.820588 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:10:44.823964 systemd-logind[1437]: New session 11 of user core. Oct 9 01:10:44.837671 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 01:10:44.946340 sshd[4119]: pam_unix(sshd:session): session closed for user core Oct 9 01:10:44.954800 systemd[1]: sshd@10-10.0.0.157:22-10.0.0.1:42600.service: Deactivated successfully. Oct 9 01:10:44.956302 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 01:10:44.957498 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit. Oct 9 01:10:44.965703 systemd[1]: Started sshd@11-10.0.0.157:22-10.0.0.1:42608.service - OpenSSH per-connection server daemon (10.0.0.1:42608). Oct 9 01:10:44.966484 systemd-logind[1437]: Removed session 11. Oct 9 01:10:45.000949 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 42608 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:10:45.002141 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:10:45.005490 systemd-logind[1437]: New session 12 of user core. Oct 9 01:10:45.016591 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 01:10:45.161241 sshd[4134]: pam_unix(sshd:session): session closed for user core Oct 9 01:10:45.170592 systemd[1]: sshd@11-10.0.0.157:22-10.0.0.1:42608.service: Deactivated successfully. Oct 9 01:10:45.174158 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 01:10:45.176293 systemd-logind[1437]: Session 12 logged out. Waiting for processes to exit. 
Oct 9 01:10:45.183716 systemd[1]: Started sshd@12-10.0.0.157:22-10.0.0.1:42618.service - OpenSSH per-connection server daemon (10.0.0.1:42618). Oct 9 01:10:45.184434 systemd-logind[1437]: Removed session 12. Oct 9 01:10:45.223839 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 42618 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:10:45.225101 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:10:45.231852 systemd-logind[1437]: New session 13 of user core. Oct 9 01:10:45.237584 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 01:10:45.347563 sshd[4147]: pam_unix(sshd:session): session closed for user core Oct 9 01:10:45.350006 systemd[1]: sshd@12-10.0.0.157:22-10.0.0.1:42618.service: Deactivated successfully. Oct 9 01:10:45.351673 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 01:10:45.353733 systemd-logind[1437]: Session 13 logged out. Waiting for processes to exit. Oct 9 01:10:45.355192 systemd-logind[1437]: Removed session 13. Oct 9 01:10:50.358174 systemd[1]: Started sshd@13-10.0.0.157:22-10.0.0.1:42626.service - OpenSSH per-connection server daemon (10.0.0.1:42626). Oct 9 01:10:50.397421 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 42626 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:10:50.398657 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:10:50.401920 systemd-logind[1437]: New session 14 of user core. Oct 9 01:10:50.411597 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 01:10:50.522949 sshd[4166]: pam_unix(sshd:session): session closed for user core Oct 9 01:10:50.526042 systemd[1]: sshd@13-10.0.0.157:22-10.0.0.1:42626.service: Deactivated successfully. Oct 9 01:10:50.528559 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 01:10:50.529373 systemd-logind[1437]: Session 14 logged out. Waiting for processes to exit. 
Oct 9 01:10:50.530196 systemd-logind[1437]: Removed session 14. Oct 9 01:10:55.534160 systemd[1]: Started sshd@14-10.0.0.157:22-10.0.0.1:57368.service - OpenSSH per-connection server daemon (10.0.0.1:57368). Oct 9 01:10:55.573624 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 57368 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:10:55.574753 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:10:55.578758 systemd-logind[1437]: New session 15 of user core. Oct 9 01:10:55.590597 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 01:10:55.697436 sshd[4180]: pam_unix(sshd:session): session closed for user core Oct 9 01:10:55.707973 systemd[1]: sshd@14-10.0.0.157:22-10.0.0.1:57368.service: Deactivated successfully. Oct 9 01:10:55.709645 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 01:10:55.710957 systemd-logind[1437]: Session 15 logged out. Waiting for processes to exit. Oct 9 01:10:55.725701 systemd[1]: Started sshd@15-10.0.0.157:22-10.0.0.1:57380.service - OpenSSH per-connection server daemon (10.0.0.1:57380). Oct 9 01:10:55.726592 systemd-logind[1437]: Removed session 15. Oct 9 01:10:55.763743 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 57380 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:10:55.764962 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:10:55.768678 systemd-logind[1437]: New session 16 of user core. Oct 9 01:10:55.778654 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 01:10:55.976293 sshd[4194]: pam_unix(sshd:session): session closed for user core Oct 9 01:10:55.986675 systemd[1]: sshd@15-10.0.0.157:22-10.0.0.1:57380.service: Deactivated successfully. Oct 9 01:10:55.988930 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 01:10:55.989801 systemd-logind[1437]: Session 16 logged out. Waiting for processes to exit. 
Oct 9 01:10:55.992560 systemd[1]: Started sshd@16-10.0.0.157:22-10.0.0.1:57382.service - OpenSSH per-connection server daemon (10.0.0.1:57382). Oct 9 01:10:55.993565 systemd-logind[1437]: Removed session 16. Oct 9 01:10:56.033969 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 57382 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:10:56.035180 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:10:56.038922 systemd-logind[1437]: New session 17 of user core. Oct 9 01:10:56.054568 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 01:10:57.206481 sshd[4206]: pam_unix(sshd:session): session closed for user core Oct 9 01:10:57.215115 systemd[1]: sshd@16-10.0.0.157:22-10.0.0.1:57382.service: Deactivated successfully. Oct 9 01:10:57.217153 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 01:10:57.219991 systemd-logind[1437]: Session 17 logged out. Waiting for processes to exit. Oct 9 01:10:57.230099 systemd[1]: Started sshd@17-10.0.0.157:22-10.0.0.1:57398.service - OpenSSH per-connection server daemon (10.0.0.1:57398). Oct 9 01:10:57.231160 systemd-logind[1437]: Removed session 17. Oct 9 01:10:57.268333 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 57398 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:10:57.269697 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:10:57.274225 systemd-logind[1437]: New session 18 of user core. Oct 9 01:10:57.285593 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 01:10:57.489503 sshd[4226]: pam_unix(sshd:session): session closed for user core Oct 9 01:10:57.497987 systemd[1]: sshd@17-10.0.0.157:22-10.0.0.1:57398.service: Deactivated successfully. Oct 9 01:10:57.499744 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 01:10:57.501017 systemd-logind[1437]: Session 18 logged out. Waiting for processes to exit. 
Oct 9 01:10:57.508733 systemd[1]: Started sshd@18-10.0.0.157:22-10.0.0.1:57402.service - OpenSSH per-connection server daemon (10.0.0.1:57402). Oct 9 01:10:57.509799 systemd-logind[1437]: Removed session 18. Oct 9 01:10:57.544009 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 57402 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:10:57.545254 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:10:57.548823 systemd-logind[1437]: New session 19 of user core. Oct 9 01:10:57.559656 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 01:10:57.668553 sshd[4238]: pam_unix(sshd:session): session closed for user core Oct 9 01:10:57.671766 systemd[1]: sshd@18-10.0.0.157:22-10.0.0.1:57402.service: Deactivated successfully. Oct 9 01:10:57.674010 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 01:10:57.674676 systemd-logind[1437]: Session 19 logged out. Waiting for processes to exit. Oct 9 01:10:57.675783 systemd-logind[1437]: Removed session 19. Oct 9 01:11:02.679048 systemd[1]: Started sshd@19-10.0.0.157:22-10.0.0.1:53346.service - OpenSSH per-connection server daemon (10.0.0.1:53346). Oct 9 01:11:02.718055 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 53346 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:11:02.719223 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:11:02.722955 systemd-logind[1437]: New session 20 of user core. Oct 9 01:11:02.735641 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 01:11:02.840589 sshd[4257]: pam_unix(sshd:session): session closed for user core Oct 9 01:11:02.843647 systemd[1]: sshd@19-10.0.0.157:22-10.0.0.1:53346.service: Deactivated successfully. Oct 9 01:11:02.845878 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 01:11:02.846500 systemd-logind[1437]: Session 20 logged out. Waiting for processes to exit. 
Oct 9 01:11:02.847207 systemd-logind[1437]: Removed session 20. Oct 9 01:11:07.852941 systemd[1]: Started sshd@20-10.0.0.157:22-10.0.0.1:53356.service - OpenSSH per-connection server daemon (10.0.0.1:53356). Oct 9 01:11:07.891813 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 53356 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:11:07.892958 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:11:07.896331 systemd-logind[1437]: New session 21 of user core. Oct 9 01:11:07.902611 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 01:11:08.008499 sshd[4273]: pam_unix(sshd:session): session closed for user core Oct 9 01:11:08.012018 systemd[1]: sshd@20-10.0.0.157:22-10.0.0.1:53356.service: Deactivated successfully. Oct 9 01:11:08.013623 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 01:11:08.014169 systemd-logind[1437]: Session 21 logged out. Waiting for processes to exit. Oct 9 01:11:08.015002 systemd-logind[1437]: Removed session 21. Oct 9 01:11:11.536265 kubelet[2679]: E1009 01:11:11.536170 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:11:13.019047 systemd[1]: Started sshd@21-10.0.0.157:22-10.0.0.1:45366.service - OpenSSH per-connection server daemon (10.0.0.1:45366). Oct 9 01:11:13.057903 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 45366 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:11:13.059117 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:11:13.062424 systemd-logind[1437]: New session 22 of user core. Oct 9 01:11:13.066635 systemd[1]: Started session-22.scope - Session 22 of User core. 
Oct 9 01:11:13.174244 sshd[4287]: pam_unix(sshd:session): session closed for user core Oct 9 01:11:13.182304 systemd[1]: sshd@21-10.0.0.157:22-10.0.0.1:45366.service: Deactivated successfully. Oct 9 01:11:13.184878 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 01:11:13.185579 systemd-logind[1437]: Session 22 logged out. Waiting for processes to exit. Oct 9 01:11:13.196958 systemd[1]: Started sshd@22-10.0.0.157:22-10.0.0.1:45374.service - OpenSSH per-connection server daemon (10.0.0.1:45374). Oct 9 01:11:13.197872 systemd-logind[1437]: Removed session 22. Oct 9 01:11:13.231652 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 45374 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:11:13.232802 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:11:13.236750 systemd-logind[1437]: New session 23 of user core. Oct 9 01:11:13.242576 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 01:11:15.350885 containerd[1453]: time="2024-10-09T01:11:15.350833081Z" level=info msg="StopContainer for \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\" with timeout 30 (s)" Oct 9 01:11:15.352854 containerd[1453]: time="2024-10-09T01:11:15.351490965Z" level=info msg="Stop container \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\" with signal terminated" Oct 9 01:11:15.360737 systemd[1]: cri-containerd-008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579.scope: Deactivated successfully. Oct 9 01:11:15.377093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579-rootfs.mount: Deactivated successfully. 
Oct 9 01:11:15.384308 containerd[1453]: time="2024-10-09T01:11:15.384199906Z" level=info msg="shim disconnected" id=008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579 namespace=k8s.io Oct 9 01:11:15.384308 containerd[1453]: time="2024-10-09T01:11:15.384299666Z" level=warning msg="cleaning up after shim disconnected" id=008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579 namespace=k8s.io Oct 9 01:11:15.384308 containerd[1453]: time="2024-10-09T01:11:15.384311066Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:11:15.400884 containerd[1453]: time="2024-10-09T01:11:15.400801838Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 01:11:15.408397 containerd[1453]: time="2024-10-09T01:11:15.408371039Z" level=info msg="StopContainer for \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\" with timeout 2 (s)" Oct 9 01:11:15.408689 containerd[1453]: time="2024-10-09T01:11:15.408667321Z" level=info msg="Stop container \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\" with signal terminated" Oct 9 01:11:15.414281 systemd-networkd[1384]: lxc_health: Link DOWN Oct 9 01:11:15.414289 systemd-networkd[1384]: lxc_health: Lost carrier Oct 9 01:11:15.442393 systemd[1]: cri-containerd-a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f.scope: Deactivated successfully. Oct 9 01:11:15.442664 systemd[1]: cri-containerd-a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f.scope: Consumed 6.336s CPU time. 
Oct 9 01:11:15.444132 containerd[1453]: time="2024-10-09T01:11:15.443974756Z" level=info msg="StopContainer for \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\" returns successfully" Oct 9 01:11:15.447112 containerd[1453]: time="2024-10-09T01:11:15.446890012Z" level=info msg="StopPodSandbox for \"b2fba8ad0bac54255cf62f9d3792417f6ce35f6d284e6ac9ec4417cd99ac0f16\"" Oct 9 01:11:15.447112 containerd[1453]: time="2024-10-09T01:11:15.446944333Z" level=info msg="Container to stop \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:11:15.448751 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2fba8ad0bac54255cf62f9d3792417f6ce35f6d284e6ac9ec4417cd99ac0f16-shm.mount: Deactivated successfully. Oct 9 01:11:15.460203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f-rootfs.mount: Deactivated successfully. Oct 9 01:11:15.461012 systemd[1]: cri-containerd-b2fba8ad0bac54255cf62f9d3792417f6ce35f6d284e6ac9ec4417cd99ac0f16.scope: Deactivated successfully. Oct 9 01:11:15.468861 containerd[1453]: time="2024-10-09T01:11:15.468809533Z" level=info msg="shim disconnected" id=a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f namespace=k8s.io Oct 9 01:11:15.468861 containerd[1453]: time="2024-10-09T01:11:15.468859774Z" level=warning msg="cleaning up after shim disconnected" id=a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f namespace=k8s.io Oct 9 01:11:15.468861 containerd[1453]: time="2024-10-09T01:11:15.468868894Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:11:15.477851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2fba8ad0bac54255cf62f9d3792417f6ce35f6d284e6ac9ec4417cd99ac0f16-rootfs.mount: Deactivated successfully. 
Oct 9 01:11:15.480042 containerd[1453]: time="2024-10-09T01:11:15.479939835Z" level=info msg="shim disconnected" id=b2fba8ad0bac54255cf62f9d3792417f6ce35f6d284e6ac9ec4417cd99ac0f16 namespace=k8s.io Oct 9 01:11:15.480042 containerd[1453]: time="2024-10-09T01:11:15.480038956Z" level=warning msg="cleaning up after shim disconnected" id=b2fba8ad0bac54255cf62f9d3792417f6ce35f6d284e6ac9ec4417cd99ac0f16 namespace=k8s.io Oct 9 01:11:15.480042 containerd[1453]: time="2024-10-09T01:11:15.480047316Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:11:15.485142 containerd[1453]: time="2024-10-09T01:11:15.485103384Z" level=info msg="StopContainer for \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\" returns successfully" Oct 9 01:11:15.485600 containerd[1453]: time="2024-10-09T01:11:15.485578066Z" level=info msg="StopPodSandbox for \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\"" Oct 9 01:11:15.485635 containerd[1453]: time="2024-10-09T01:11:15.485612226Z" level=info msg="Container to stop \"1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:11:15.485635 containerd[1453]: time="2024-10-09T01:11:15.485623746Z" level=info msg="Container to stop \"1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:11:15.485635 containerd[1453]: time="2024-10-09T01:11:15.485632466Z" level=info msg="Container to stop \"7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:11:15.485697 containerd[1453]: time="2024-10-09T01:11:15.485641186Z" level=info msg="Container to stop \"3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:11:15.485697 containerd[1453]: 
time="2024-10-09T01:11:15.485649787Z" level=info msg="Container to stop \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:11:15.492214 systemd[1]: cri-containerd-c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e.scope: Deactivated successfully. Oct 9 01:11:15.500694 containerd[1453]: time="2024-10-09T01:11:15.500645629Z" level=info msg="TearDown network for sandbox \"b2fba8ad0bac54255cf62f9d3792417f6ce35f6d284e6ac9ec4417cd99ac0f16\" successfully" Oct 9 01:11:15.500694 containerd[1453]: time="2024-10-09T01:11:15.500680310Z" level=info msg="StopPodSandbox for \"b2fba8ad0bac54255cf62f9d3792417f6ce35f6d284e6ac9ec4417cd99ac0f16\" returns successfully" Oct 9 01:11:15.520360 containerd[1453]: time="2024-10-09T01:11:15.520296738Z" level=info msg="shim disconnected" id=c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e namespace=k8s.io Oct 9 01:11:15.520360 containerd[1453]: time="2024-10-09T01:11:15.520362458Z" level=warning msg="cleaning up after shim disconnected" id=c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e namespace=k8s.io Oct 9 01:11:15.520617 containerd[1453]: time="2024-10-09T01:11:15.520372298Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:11:15.533887 containerd[1453]: time="2024-10-09T01:11:15.533849293Z" level=info msg="TearDown network for sandbox \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\" successfully" Oct 9 01:11:15.534193 containerd[1453]: time="2024-10-09T01:11:15.534001894Z" level=info msg="StopPodSandbox for \"c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e\" returns successfully" Oct 9 01:11:15.664728 kubelet[2679]: I1009 01:11:15.664586 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-cni-path\") pod 
\"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.664728 kubelet[2679]: I1009 01:11:15.664626 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-cilium-cgroup\") pod \"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.664728 kubelet[2679]: I1009 01:11:15.664645 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-host-proc-sys-kernel\") pod \"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.664728 kubelet[2679]: I1009 01:11:15.664661 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-host-proc-sys-net\") pod \"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.664728 kubelet[2679]: I1009 01:11:15.664681 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6fb07bd6-6479-486b-92b5-6f919787ac9d-hubble-tls\") pod \"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.664728 kubelet[2679]: I1009 01:11:15.664697 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j99nn\" (UniqueName: \"kubernetes.io/projected/1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858-kube-api-access-j99nn\") pod \"1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858\" (UID: \"1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858\") " Oct 9 01:11:15.665264 kubelet[2679]: I1009 01:11:15.664715 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fb07bd6-6479-486b-92b5-6f919787ac9d-cilium-config-path\") pod \"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.665264 kubelet[2679]: I1009 01:11:15.664737 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-xtables-lock\") pod \"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.665264 kubelet[2679]: I1009 01:11:15.664757 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858-cilium-config-path\") pod \"1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858\" (UID: \"1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858\") " Oct 9 01:11:15.665264 kubelet[2679]: I1009 01:11:15.664772 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-etc-cni-netd\") pod \"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.665264 kubelet[2679]: I1009 01:11:15.664786 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-bpf-maps\") pod \"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.665264 kubelet[2679]: I1009 01:11:15.664801 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-lib-modules\") pod \"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.665391 kubelet[2679]: I1009 
01:11:15.664814 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-hostproc\") pod \"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.665391 kubelet[2679]: I1009 01:11:15.664831 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6fb07bd6-6479-486b-92b5-6f919787ac9d-clustermesh-secrets\") pod \"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.665391 kubelet[2679]: I1009 01:11:15.664851 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhblp\" (UniqueName: \"kubernetes.io/projected/6fb07bd6-6479-486b-92b5-6f919787ac9d-kube-api-access-bhblp\") pod \"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.665391 kubelet[2679]: I1009 01:11:15.664867 2679 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-cilium-run\") pod \"6fb07bd6-6479-486b-92b5-6f919787ac9d\" (UID: \"6fb07bd6-6479-486b-92b5-6f919787ac9d\") " Oct 9 01:11:15.668575 kubelet[2679]: I1009 01:11:15.668334 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:11:15.668575 kubelet[2679]: I1009 01:11:15.668334 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:11:15.668575 kubelet[2679]: I1009 01:11:15.668476 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-cni-path" (OuterVolumeSpecName: "cni-path") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:11:15.669862 kubelet[2679]: I1009 01:11:15.669830 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:11:15.669912 kubelet[2679]: I1009 01:11:15.669878 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:11:15.669912 kubelet[2679]: I1009 01:11:15.669894 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:11:15.669912 kubelet[2679]: I1009 01:11:15.669909 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:11:15.669979 kubelet[2679]: I1009 01:11:15.669922 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-hostproc" (OuterVolumeSpecName: "hostproc") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:11:15.675237 kubelet[2679]: I1009 01:11:15.674208 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb07bd6-6479-486b-92b5-6f919787ac9d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 9 01:11:15.675237 kubelet[2679]: I1009 01:11:15.674326 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858-kube-api-access-j99nn" (OuterVolumeSpecName: "kube-api-access-j99nn") pod "1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858" (UID: "1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858"). InnerVolumeSpecName "kube-api-access-j99nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 01:11:15.675237 kubelet[2679]: I1009 01:11:15.674374 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:11:15.675237 kubelet[2679]: I1009 01:11:15.674991 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fb07bd6-6479-486b-92b5-6f919787ac9d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 01:11:15.675407 kubelet[2679]: I1009 01:11:15.675038 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:11:15.676105 kubelet[2679]: I1009 01:11:15.676067 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb07bd6-6479-486b-92b5-6f919787ac9d-kube-api-access-bhblp" (OuterVolumeSpecName: "kube-api-access-bhblp") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "kube-api-access-bhblp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 01:11:15.676402 kubelet[2679]: I1009 01:11:15.676372 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb07bd6-6479-486b-92b5-6f919787ac9d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6fb07bd6-6479-486b-92b5-6f919787ac9d" (UID: "6fb07bd6-6479-486b-92b5-6f919787ac9d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 01:11:15.676440 kubelet[2679]: I1009 01:11:15.676379 2679 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858" (UID: "1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 01:11:15.728699 kubelet[2679]: I1009 01:11:15.728672 2679 scope.go:117] "RemoveContainer" containerID="a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f" Oct 9 01:11:15.735178 containerd[1453]: time="2024-10-09T01:11:15.735122406Z" level=info msg="RemoveContainer for \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\"" Oct 9 01:11:15.738540 containerd[1453]: time="2024-10-09T01:11:15.738504744Z" level=info msg="RemoveContainer for \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\" returns successfully" Oct 9 01:11:15.739108 kubelet[2679]: I1009 01:11:15.739016 2679 scope.go:117] "RemoveContainer" containerID="3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6" Oct 9 01:11:15.739434 systemd[1]: Removed slice kubepods-besteffort-pod1cc1c9eb_0c02_4ec5_a76b_37dffb9c4858.slice - libcontainer container kubepods-besteffort-pod1cc1c9eb_0c02_4ec5_a76b_37dffb9c4858.slice. Oct 9 01:11:15.740640 containerd[1453]: time="2024-10-09T01:11:15.740595996Z" level=info msg="RemoveContainer for \"3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6\"" Oct 9 01:11:15.741944 systemd[1]: Removed slice kubepods-burstable-pod6fb07bd6_6479_486b_92b5_6f919787ac9d.slice - libcontainer container kubepods-burstable-pod6fb07bd6_6479_486b_92b5_6f919787ac9d.slice. Oct 9 01:11:15.742048 systemd[1]: kubepods-burstable-pod6fb07bd6_6479_486b_92b5_6f919787ac9d.slice: Consumed 6.469s CPU time. 
Oct 9 01:11:15.743214 containerd[1453]: time="2024-10-09T01:11:15.743148850Z" level=info msg="RemoveContainer for \"3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6\" returns successfully" Oct 9 01:11:15.743370 kubelet[2679]: I1009 01:11:15.743335 2679 scope.go:117] "RemoveContainer" containerID="1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028" Oct 9 01:11:15.744709 containerd[1453]: time="2024-10-09T01:11:15.744673978Z" level=info msg="RemoveContainer for \"1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028\"" Oct 9 01:11:15.748024 containerd[1453]: time="2024-10-09T01:11:15.747935716Z" level=info msg="RemoveContainer for \"1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028\" returns successfully" Oct 9 01:11:15.748271 kubelet[2679]: I1009 01:11:15.748192 2679 scope.go:117] "RemoveContainer" containerID="1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad" Oct 9 01:11:15.749196 containerd[1453]: time="2024-10-09T01:11:15.749155763Z" level=info msg="RemoveContainer for \"1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad\"" Oct 9 01:11:15.753299 containerd[1453]: time="2024-10-09T01:11:15.753273426Z" level=info msg="RemoveContainer for \"1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad\" returns successfully" Oct 9 01:11:15.753619 kubelet[2679]: I1009 01:11:15.753506 2679 scope.go:117] "RemoveContainer" containerID="7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08" Oct 9 01:11:15.754861 containerd[1453]: time="2024-10-09T01:11:15.754808434Z" level=info msg="RemoveContainer for \"7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08\"" Oct 9 01:11:15.757281 containerd[1453]: time="2024-10-09T01:11:15.757255928Z" level=info msg="RemoveContainer for \"7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08\" returns successfully" Oct 9 01:11:15.757468 kubelet[2679]: I1009 01:11:15.757412 2679 scope.go:117] "RemoveContainer" 
containerID="a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f" Oct 9 01:11:15.757608 containerd[1453]: time="2024-10-09T01:11:15.757580130Z" level=error msg="ContainerStatus for \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\": not found" Oct 9 01:11:15.765651 kubelet[2679]: I1009 01:11:15.765623 2679 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765651 kubelet[2679]: I1009 01:11:15.765649 2679 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765651 kubelet[2679]: I1009 01:11:15.765660 2679 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765812 kubelet[2679]: I1009 01:11:15.765670 2679 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765812 kubelet[2679]: I1009 01:11:15.765677 2679 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6fb07bd6-6479-486b-92b5-6f919787ac9d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765812 kubelet[2679]: I1009 01:11:15.765685 2679 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-j99nn\" (UniqueName: 
\"kubernetes.io/projected/1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858-kube-api-access-j99nn\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765812 kubelet[2679]: I1009 01:11:15.765693 2679 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fb07bd6-6479-486b-92b5-6f919787ac9d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765812 kubelet[2679]: I1009 01:11:15.765700 2679 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765812 kubelet[2679]: I1009 01:11:15.765707 2679 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765812 kubelet[2679]: I1009 01:11:15.765715 2679 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765812 kubelet[2679]: I1009 01:11:15.765729 2679 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765967 kubelet[2679]: I1009 01:11:15.765738 2679 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765967 kubelet[2679]: I1009 01:11:15.765745 2679 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 
9 01:11:15.765967 kubelet[2679]: I1009 01:11:15.765752 2679 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6fb07bd6-6479-486b-92b5-6f919787ac9d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765967 kubelet[2679]: I1009 01:11:15.765759 2679 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bhblp\" (UniqueName: \"kubernetes.io/projected/6fb07bd6-6479-486b-92b5-6f919787ac9d-kube-api-access-bhblp\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.765967 kubelet[2679]: I1009 01:11:15.765767 2679 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6fb07bd6-6479-486b-92b5-6f919787ac9d-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 9 01:11:15.769321 kubelet[2679]: E1009 01:11:15.769284 2679 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\": not found" containerID="a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f" Oct 9 01:11:15.769404 kubelet[2679]: I1009 01:11:15.769328 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f"} err="failed to get container status \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3f7575cb3c75cc9edc6ff2d26a58529361a0ed549889d8c7bd2ca793e8de52f\": not found" Oct 9 01:11:15.769434 kubelet[2679]: I1009 01:11:15.769406 2679 scope.go:117] "RemoveContainer" containerID="3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6" Oct 9 01:11:15.769694 containerd[1453]: time="2024-10-09T01:11:15.769657396Z" level=error msg="ContainerStatus for 
\"3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6\": not found" Oct 9 01:11:15.769830 kubelet[2679]: E1009 01:11:15.769807 2679 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6\": not found" containerID="3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6" Oct 9 01:11:15.769865 kubelet[2679]: I1009 01:11:15.769834 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6"} err="failed to get container status \"3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f0d017dbc477b90710839f35754c4eaa4eb04dae7801fbf294bd962c15aa7b6\": not found" Oct 9 01:11:15.769865 kubelet[2679]: I1009 01:11:15.769851 2679 scope.go:117] "RemoveContainer" containerID="1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028" Oct 9 01:11:15.770060 containerd[1453]: time="2024-10-09T01:11:15.770028838Z" level=error msg="ContainerStatus for \"1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028\": not found" Oct 9 01:11:15.770174 kubelet[2679]: E1009 01:11:15.770153 2679 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028\": not found" 
containerID="1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028" Oct 9 01:11:15.770218 kubelet[2679]: I1009 01:11:15.770180 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028"} err="failed to get container status \"1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028\": rpc error: code = NotFound desc = an error occurred when try to find container \"1088f3a35b2ae8b84459c74555882be75324b07c9cab2fad38a31c8fae460028\": not found" Oct 9 01:11:15.770218 kubelet[2679]: I1009 01:11:15.770197 2679 scope.go:117] "RemoveContainer" containerID="1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad" Oct 9 01:11:15.770545 containerd[1453]: time="2024-10-09T01:11:15.770431241Z" level=error msg="ContainerStatus for \"1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad\": not found" Oct 9 01:11:15.770632 kubelet[2679]: E1009 01:11:15.770578 2679 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad\": not found" containerID="1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad" Oct 9 01:11:15.770632 kubelet[2679]: I1009 01:11:15.770623 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad"} err="failed to get container status \"1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d32d2aa4f8e3acae311b10c5f9bfde30f7c7f648f2349ff383ccbac62ea39ad\": not found" Oct 9 
01:11:15.770689 kubelet[2679]: I1009 01:11:15.770639 2679 scope.go:117] "RemoveContainer" containerID="7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08" Oct 9 01:11:15.770848 containerd[1453]: time="2024-10-09T01:11:15.770801923Z" level=error msg="ContainerStatus for \"7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08\": not found" Oct 9 01:11:15.770978 kubelet[2679]: E1009 01:11:15.770905 2679 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08\": not found" containerID="7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08" Oct 9 01:11:15.770978 kubelet[2679]: I1009 01:11:15.770926 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08"} err="failed to get container status \"7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08\": rpc error: code = NotFound desc = an error occurred when try to find container \"7eaf8cfb76a09a00d5535846ed5656a8c29a81b03b336bca2e163445cabe6a08\": not found" Oct 9 01:11:15.770978 kubelet[2679]: I1009 01:11:15.770942 2679 scope.go:117] "RemoveContainer" containerID="008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579" Oct 9 01:11:15.771891 containerd[1453]: time="2024-10-09T01:11:15.771858049Z" level=info msg="RemoveContainer for \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\"" Oct 9 01:11:15.775908 containerd[1453]: time="2024-10-09T01:11:15.775849711Z" level=info msg="RemoveContainer for \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\" returns successfully" Oct 9 01:11:15.776043 
kubelet[2679]: I1009 01:11:15.775998 2679 scope.go:117] "RemoveContainer" containerID="008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579" Oct 9 01:11:15.776278 containerd[1453]: time="2024-10-09T01:11:15.776243913Z" level=error msg="ContainerStatus for \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\": not found" Oct 9 01:11:15.776608 kubelet[2679]: E1009 01:11:15.776586 2679 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\": not found" containerID="008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579" Oct 9 01:11:15.776663 kubelet[2679]: I1009 01:11:15.776612 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579"} err="failed to get container status \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\": rpc error: code = NotFound desc = an error occurred when try to find container \"008de04506699220024c74b1ff2c2fb4a7b95b702230efeb6378a262a1107579\": not found" Oct 9 01:11:16.371910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e-rootfs.mount: Deactivated successfully. Oct 9 01:11:16.372004 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c47d2e99d7dc436a9dc896b22121077024a0314364120d82c15bf4ab679b796e-shm.mount: Deactivated successfully. Oct 9 01:11:16.372062 systemd[1]: var-lib-kubelet-pods-1cc1c9eb\x2d0c02\x2d4ec5\x2da76b\x2d37dffb9c4858-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj99nn.mount: Deactivated successfully. 
Oct 9 01:11:16.372125 systemd[1]: var-lib-kubelet-pods-6fb07bd6\x2d6479\x2d486b\x2d92b5\x2d6f919787ac9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbhblp.mount: Deactivated successfully. Oct 9 01:11:16.372181 systemd[1]: var-lib-kubelet-pods-6fb07bd6\x2d6479\x2d486b\x2d92b5\x2d6f919787ac9d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 9 01:11:16.372228 systemd[1]: var-lib-kubelet-pods-6fb07bd6\x2d6479\x2d486b\x2d92b5\x2d6f919787ac9d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 9 01:11:16.590201 kubelet[2679]: E1009 01:11:16.590154 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 9 01:11:17.314399 sshd[4301]: pam_unix(sshd:session): session closed for user core Oct 9 01:11:17.323154 systemd[1]: sshd@22-10.0.0.157:22-10.0.0.1:45374.service: Deactivated successfully. Oct 9 01:11:17.324821 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 01:11:17.325028 systemd[1]: session-23.scope: Consumed 1.445s CPU time. Oct 9 01:11:17.326040 systemd-logind[1437]: Session 23 logged out. Waiting for processes to exit. Oct 9 01:11:17.336801 systemd[1]: Started sshd@23-10.0.0.157:22-10.0.0.1:45382.service - OpenSSH per-connection server daemon (10.0.0.1:45382). Oct 9 01:11:17.337619 systemd-logind[1437]: Removed session 23. Oct 9 01:11:17.373051 sshd[4462]: Accepted publickey for core from 10.0.0.1 port 45382 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:11:17.374576 sshd[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:11:17.377858 systemd-logind[1437]: New session 24 of user core. Oct 9 01:11:17.393569 systemd[1]: Started session-24.scope - Session 24 of User core. 
Oct 9 01:11:17.538508 kubelet[2679]: I1009 01:11:17.538132 2679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858" path="/var/lib/kubelet/pods/1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858/volumes" Oct 9 01:11:17.538827 kubelet[2679]: I1009 01:11:17.538520 2679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb07bd6-6479-486b-92b5-6f919787ac9d" path="/var/lib/kubelet/pods/6fb07bd6-6479-486b-92b5-6f919787ac9d/volumes" Oct 9 01:11:18.244896 sshd[4462]: pam_unix(sshd:session): session closed for user core Oct 9 01:11:18.255224 systemd[1]: sshd@23-10.0.0.157:22-10.0.0.1:45382.service: Deactivated successfully. Oct 9 01:11:18.257353 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 01:11:18.258874 systemd-logind[1437]: Session 24 logged out. Waiting for processes to exit. Oct 9 01:11:18.266979 kubelet[2679]: I1009 01:11:18.266926 2679 topology_manager.go:215] "Topology Admit Handler" podUID="a284fd9b-66b3-4c59-a37f-d1beb7fb1378" podNamespace="kube-system" podName="cilium-x6mgk" Oct 9 01:11:18.267091 kubelet[2679]: E1009 01:11:18.267049 2679 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6fb07bd6-6479-486b-92b5-6f919787ac9d" containerName="apply-sysctl-overwrites" Oct 9 01:11:18.267091 kubelet[2679]: E1009 01:11:18.267059 2679 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6fb07bd6-6479-486b-92b5-6f919787ac9d" containerName="mount-bpf-fs" Oct 9 01:11:18.267091 kubelet[2679]: E1009 01:11:18.267065 2679 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6fb07bd6-6479-486b-92b5-6f919787ac9d" containerName="cilium-agent" Oct 9 01:11:18.267091 kubelet[2679]: E1009 01:11:18.267072 2679 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6fb07bd6-6479-486b-92b5-6f919787ac9d" containerName="mount-cgroup" Oct 9 01:11:18.267091 kubelet[2679]: E1009 01:11:18.267078 2679 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858" containerName="cilium-operator" Oct 9 01:11:18.267091 kubelet[2679]: E1009 01:11:18.267083 2679 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6fb07bd6-6479-486b-92b5-6f919787ac9d" containerName="clean-cilium-state" Oct 9 01:11:18.267218 kubelet[2679]: I1009 01:11:18.267104 2679 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cc1c9eb-0c02-4ec5-a76b-37dffb9c4858" containerName="cilium-operator" Oct 9 01:11:18.267218 kubelet[2679]: I1009 01:11:18.267111 2679 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb07bd6-6479-486b-92b5-6f919787ac9d" containerName="cilium-agent" Oct 9 01:11:18.269747 systemd[1]: Started sshd@24-10.0.0.157:22-10.0.0.1:45386.service - OpenSSH per-connection server daemon (10.0.0.1:45386). Oct 9 01:11:18.277535 systemd-logind[1437]: Removed session 24. Oct 9 01:11:18.285953 systemd[1]: Created slice kubepods-burstable-poda284fd9b_66b3_4c59_a37f_d1beb7fb1378.slice - libcontainer container kubepods-burstable-poda284fd9b_66b3_4c59_a37f_d1beb7fb1378.slice. Oct 9 01:11:18.324393 sshd[4477]: Accepted publickey for core from 10.0.0.1 port 45386 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:11:18.326028 sshd[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:11:18.330007 systemd-logind[1437]: New session 25 of user core. Oct 9 01:11:18.341603 systemd[1]: Started session-25.scope - Session 25 of User core. 
Oct 9 01:11:18.379652 kubelet[2679]: I1009 01:11:18.379606 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-cilium-run\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.379652 kubelet[2679]: I1009 01:11:18.379649 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-hubble-tls\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.379851 kubelet[2679]: I1009 01:11:18.379667 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-host-proc-sys-net\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.379851 kubelet[2679]: I1009 01:11:18.379759 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-cilium-ipsec-secrets\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.379851 kubelet[2679]: I1009 01:11:18.379808 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-bpf-maps\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.379851 kubelet[2679]: I1009 01:11:18.379847 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-clustermesh-secrets\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.380218 kubelet[2679]: I1009 01:11:18.379879 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-hostproc\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.380218 kubelet[2679]: I1009 01:11:18.379907 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-cilium-cgroup\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.380218 kubelet[2679]: I1009 01:11:18.379935 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-lib-modules\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.380218 kubelet[2679]: I1009 01:11:18.379987 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-cni-path\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.380218 kubelet[2679]: I1009 01:11:18.380027 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-etc-cni-netd\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.380218 kubelet[2679]: I1009 01:11:18.380057 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-xtables-lock\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.380343 kubelet[2679]: I1009 01:11:18.380074 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-cilium-config-path\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.380343 kubelet[2679]: I1009 01:11:18.380094 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-host-proc-sys-kernel\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.380343 kubelet[2679]: I1009 01:11:18.380110 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4jx6\" (UniqueName: \"kubernetes.io/projected/a284fd9b-66b3-4c59-a37f-d1beb7fb1378-kube-api-access-f4jx6\") pod \"cilium-x6mgk\" (UID: \"a284fd9b-66b3-4c59-a37f-d1beb7fb1378\") " pod="kube-system/cilium-x6mgk"
Oct 9 01:11:18.392816 sshd[4477]: pam_unix(sshd:session): session closed for user core
Oct 9 01:11:18.400818 systemd[1]: sshd@24-10.0.0.157:22-10.0.0.1:45386.service: Deactivated successfully.
Oct 9 01:11:18.402636 systemd[1]: session-25.scope: Deactivated successfully.
Oct 9 01:11:18.404111 systemd-logind[1437]: Session 25 logged out. Waiting for processes to exit.
Oct 9 01:11:18.405514 systemd[1]: Started sshd@25-10.0.0.157:22-10.0.0.1:45402.service - OpenSSH per-connection server daemon (10.0.0.1:45402).
Oct 9 01:11:18.406291 systemd-logind[1437]: Removed session 25.
Oct 9 01:11:18.443969 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 45402 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 01:11:18.445055 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:11:18.448498 systemd-logind[1437]: New session 26 of user core.
Oct 9 01:11:18.457885 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 9 01:11:18.590759 kubelet[2679]: E1009 01:11:18.590610 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:11:18.591986 containerd[1453]: time="2024-10-09T01:11:18.591942592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6mgk,Uid:a284fd9b-66b3-4c59-a37f-d1beb7fb1378,Namespace:kube-system,Attempt:0,}"
Oct 9 01:11:18.609386 containerd[1453]: time="2024-10-09T01:11:18.609225998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:11:18.609386 containerd[1453]: time="2024-10-09T01:11:18.609361799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:11:18.609598 containerd[1453]: time="2024-10-09T01:11:18.609379959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:11:18.609598 containerd[1453]: time="2024-10-09T01:11:18.609510120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:11:18.630628 systemd[1]: Started cri-containerd-e2dd5beac4c389de9b195b2adef2f1d440e749fca2ca06aec057678ad9e6238b.scope - libcontainer container e2dd5beac4c389de9b195b2adef2f1d440e749fca2ca06aec057678ad9e6238b.
Oct 9 01:11:18.647806 containerd[1453]: time="2024-10-09T01:11:18.647755120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6mgk,Uid:a284fd9b-66b3-4c59-a37f-d1beb7fb1378,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2dd5beac4c389de9b195b2adef2f1d440e749fca2ca06aec057678ad9e6238b\""
Oct 9 01:11:18.648394 kubelet[2679]: E1009 01:11:18.648374 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:11:18.650222 containerd[1453]: time="2024-10-09T01:11:18.650192257Z" level=info msg="CreateContainer within sandbox \"e2dd5beac4c389de9b195b2adef2f1d440e749fca2ca06aec057678ad9e6238b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct 9 01:11:18.660459 containerd[1453]: time="2024-10-09T01:11:18.660400052Z" level=info msg="CreateContainer within sandbox \"e2dd5beac4c389de9b195b2adef2f1d440e749fca2ca06aec057678ad9e6238b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"813bf15adc5f9f67b44858f69ed93b181f9cefd26a2903219e6722e69cc6f43b\""
Oct 9 01:11:18.661091 containerd[1453]: time="2024-10-09T01:11:18.660968496Z" level=info msg="StartContainer for \"813bf15adc5f9f67b44858f69ed93b181f9cefd26a2903219e6722e69cc6f43b\""
Oct 9 01:11:18.683627 systemd[1]: Started cri-containerd-813bf15adc5f9f67b44858f69ed93b181f9cefd26a2903219e6722e69cc6f43b.scope - libcontainer container 813bf15adc5f9f67b44858f69ed93b181f9cefd26a2903219e6722e69cc6f43b.
Oct 9 01:11:18.701997 containerd[1453]: time="2024-10-09T01:11:18.701961036Z" level=info msg="StartContainer for \"813bf15adc5f9f67b44858f69ed93b181f9cefd26a2903219e6722e69cc6f43b\" returns successfully"
Oct 9 01:11:18.710268 systemd[1]: cri-containerd-813bf15adc5f9f67b44858f69ed93b181f9cefd26a2903219e6722e69cc6f43b.scope: Deactivated successfully.
Oct 9 01:11:18.737777 containerd[1453]: time="2024-10-09T01:11:18.737605857Z" level=info msg="shim disconnected" id=813bf15adc5f9f67b44858f69ed93b181f9cefd26a2903219e6722e69cc6f43b namespace=k8s.io
Oct 9 01:11:18.737777 containerd[1453]: time="2024-10-09T01:11:18.737654497Z" level=warning msg="cleaning up after shim disconnected" id=813bf15adc5f9f67b44858f69ed93b181f9cefd26a2903219e6722e69cc6f43b namespace=k8s.io
Oct 9 01:11:18.737777 containerd[1453]: time="2024-10-09T01:11:18.737663177Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:11:18.740068 kubelet[2679]: E1009 01:11:18.740039 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:11:19.746353 kubelet[2679]: E1009 01:11:19.746313 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:11:19.749870 containerd[1453]: time="2024-10-09T01:11:19.749827714Z" level=info msg="CreateContainer within sandbox \"e2dd5beac4c389de9b195b2adef2f1d440e749fca2ca06aec057678ad9e6238b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Oct 9 01:11:19.760122 containerd[1453]: time="2024-10-09T01:11:19.760068035Z" level=info msg="CreateContainer within sandbox \"e2dd5beac4c389de9b195b2adef2f1d440e749fca2ca06aec057678ad9e6238b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4e837e4b34faf5115509769b063415cf9201eb03164470a7bc530c6ee64beaaf\""
Oct 9 01:11:19.760501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3411775266.mount: Deactivated successfully.
Oct 9 01:11:19.762034 containerd[1453]: time="2024-10-09T01:11:19.761217804Z" level=info msg="StartContainer for \"4e837e4b34faf5115509769b063415cf9201eb03164470a7bc530c6ee64beaaf\""
Oct 9 01:11:19.788705 systemd[1]: Started cri-containerd-4e837e4b34faf5115509769b063415cf9201eb03164470a7bc530c6ee64beaaf.scope - libcontainer container 4e837e4b34faf5115509769b063415cf9201eb03164470a7bc530c6ee64beaaf.
Oct 9 01:11:19.808438 containerd[1453]: time="2024-10-09T01:11:19.808384375Z" level=info msg="StartContainer for \"4e837e4b34faf5115509769b063415cf9201eb03164470a7bc530c6ee64beaaf\" returns successfully"
Oct 9 01:11:19.814269 systemd[1]: cri-containerd-4e837e4b34faf5115509769b063415cf9201eb03164470a7bc530c6ee64beaaf.scope: Deactivated successfully.
Oct 9 01:11:19.840778 containerd[1453]: time="2024-10-09T01:11:19.840650709Z" level=info msg="shim disconnected" id=4e837e4b34faf5115509769b063415cf9201eb03164470a7bc530c6ee64beaaf namespace=k8s.io
Oct 9 01:11:19.840778 containerd[1453]: time="2024-10-09T01:11:19.840701989Z" level=warning msg="cleaning up after shim disconnected" id=4e837e4b34faf5115509769b063415cf9201eb03164470a7bc530c6ee64beaaf namespace=k8s.io
Oct 9 01:11:19.840778 containerd[1453]: time="2024-10-09T01:11:19.840711110Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:11:20.485839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e837e4b34faf5115509769b063415cf9201eb03164470a7bc530c6ee64beaaf-rootfs.mount: Deactivated successfully.
Oct 9 01:11:20.749983 kubelet[2679]: E1009 01:11:20.749623 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:11:20.752240 containerd[1453]: time="2024-10-09T01:11:20.752019006Z" level=info msg="CreateContainer within sandbox \"e2dd5beac4c389de9b195b2adef2f1d440e749fca2ca06aec057678ad9e6238b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 9 01:11:20.764706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966050945.mount: Deactivated successfully.
Oct 9 01:11:20.766006 containerd[1453]: time="2024-10-09T01:11:20.765969964Z" level=info msg="CreateContainer within sandbox \"e2dd5beac4c389de9b195b2adef2f1d440e749fca2ca06aec057678ad9e6238b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8a4f294111afc53be6612b6ca3241cf66cdae0a5afbb0644df4f32f1cdce2ac4\""
Oct 9 01:11:20.766519 containerd[1453]: time="2024-10-09T01:11:20.766488488Z" level=info msg="StartContainer for \"8a4f294111afc53be6612b6ca3241cf66cdae0a5afbb0644df4f32f1cdce2ac4\""
Oct 9 01:11:20.794647 systemd[1]: Started cri-containerd-8a4f294111afc53be6612b6ca3241cf66cdae0a5afbb0644df4f32f1cdce2ac4.scope - libcontainer container 8a4f294111afc53be6612b6ca3241cf66cdae0a5afbb0644df4f32f1cdce2ac4.
Oct 9 01:11:20.820136 containerd[1453]: time="2024-10-09T01:11:20.820101059Z" level=info msg="StartContainer for \"8a4f294111afc53be6612b6ca3241cf66cdae0a5afbb0644df4f32f1cdce2ac4\" returns successfully"
Oct 9 01:11:20.821267 systemd[1]: cri-containerd-8a4f294111afc53be6612b6ca3241cf66cdae0a5afbb0644df4f32f1cdce2ac4.scope: Deactivated successfully.
Oct 9 01:11:20.840203 containerd[1453]: time="2024-10-09T01:11:20.840153588Z" level=info msg="shim disconnected" id=8a4f294111afc53be6612b6ca3241cf66cdae0a5afbb0644df4f32f1cdce2ac4 namespace=k8s.io
Oct 9 01:11:20.840203 containerd[1453]: time="2024-10-09T01:11:20.840199668Z" level=warning msg="cleaning up after shim disconnected" id=8a4f294111afc53be6612b6ca3241cf66cdae0a5afbb0644df4f32f1cdce2ac4 namespace=k8s.io
Oct 9 01:11:20.840365 containerd[1453]: time="2024-10-09T01:11:20.840209748Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:11:21.485481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a4f294111afc53be6612b6ca3241cf66cdae0a5afbb0644df4f32f1cdce2ac4-rootfs.mount: Deactivated successfully.
Oct 9 01:11:21.591122 kubelet[2679]: E1009 01:11:21.591097 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 9 01:11:21.753213 kubelet[2679]: E1009 01:11:21.753097 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:11:21.755390 containerd[1453]: time="2024-10-09T01:11:21.755353719Z" level=info msg="CreateContainer within sandbox \"e2dd5beac4c389de9b195b2adef2f1d440e749fca2ca06aec057678ad9e6238b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 9 01:11:21.767888 containerd[1453]: time="2024-10-09T01:11:21.767833790Z" level=info msg="CreateContainer within sandbox \"e2dd5beac4c389de9b195b2adef2f1d440e749fca2ca06aec057678ad9e6238b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"110328cd42ada918783c2622793196c5a9959868108aa2cf23b417fb3315300f\""
Oct 9 01:11:21.769355 containerd[1453]: time="2024-10-09T01:11:21.769327683Z" level=info msg="StartContainer for \"110328cd42ada918783c2622793196c5a9959868108aa2cf23b417fb3315300f\""
Oct 9 01:11:21.793588 systemd[1]: Started cri-containerd-110328cd42ada918783c2622793196c5a9959868108aa2cf23b417fb3315300f.scope - libcontainer container 110328cd42ada918783c2622793196c5a9959868108aa2cf23b417fb3315300f.
Oct 9 01:11:21.811912 systemd[1]: cri-containerd-110328cd42ada918783c2622793196c5a9959868108aa2cf23b417fb3315300f.scope: Deactivated successfully.
Oct 9 01:11:21.813955 containerd[1453]: time="2024-10-09T01:11:21.813925562Z" level=info msg="StartContainer for \"110328cd42ada918783c2622793196c5a9959868108aa2cf23b417fb3315300f\" returns successfully"
Oct 9 01:11:21.831687 containerd[1453]: time="2024-10-09T01:11:21.831617320Z" level=info msg="shim disconnected" id=110328cd42ada918783c2622793196c5a9959868108aa2cf23b417fb3315300f namespace=k8s.io
Oct 9 01:11:21.831687 containerd[1453]: time="2024-10-09T01:11:21.831672520Z" level=warning msg="cleaning up after shim disconnected" id=110328cd42ada918783c2622793196c5a9959868108aa2cf23b417fb3315300f namespace=k8s.io
Oct 9 01:11:21.831906 containerd[1453]: time="2024-10-09T01:11:21.831680840Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:11:22.485418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-110328cd42ada918783c2622793196c5a9959868108aa2cf23b417fb3315300f-rootfs.mount: Deactivated successfully.
Oct 9 01:11:22.757322 kubelet[2679]: E1009 01:11:22.757225 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:11:22.760842 containerd[1453]: time="2024-10-09T01:11:22.760806245Z" level=info msg="CreateContainer within sandbox \"e2dd5beac4c389de9b195b2adef2f1d440e749fca2ca06aec057678ad9e6238b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 9 01:11:22.775655 containerd[1453]: time="2024-10-09T01:11:22.775620145Z" level=info msg="CreateContainer within sandbox \"e2dd5beac4c389de9b195b2adef2f1d440e749fca2ca06aec057678ad9e6238b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f2356a9247b26d28f49411318e272f6bfe1da8868ad892824e2cdd6fcf774cea\""
Oct 9 01:11:22.776312 containerd[1453]: time="2024-10-09T01:11:22.776286471Z" level=info msg="StartContainer for \"f2356a9247b26d28f49411318e272f6bfe1da8868ad892824e2cdd6fcf774cea\""
Oct 9 01:11:22.804605 systemd[1]: Started cri-containerd-f2356a9247b26d28f49411318e272f6bfe1da8868ad892824e2cdd6fcf774cea.scope - libcontainer container f2356a9247b26d28f49411318e272f6bfe1da8868ad892824e2cdd6fcf774cea.
Oct 9 01:11:22.828372 containerd[1453]: time="2024-10-09T01:11:22.828312602Z" level=info msg="StartContainer for \"f2356a9247b26d28f49411318e272f6bfe1da8868ad892824e2cdd6fcf774cea\" returns successfully"
Oct 9 01:11:23.081520 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Oct 9 01:11:23.231804 kubelet[2679]: I1009 01:11:23.231756 2679 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-09T01:11:23Z","lastTransitionTime":"2024-10-09T01:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Oct 9 01:11:23.762609 kubelet[2679]: E1009 01:11:23.762502 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:11:23.774497 kubelet[2679]: I1009 01:11:23.774428 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x6mgk" podStartSLOduration=5.774411753 podStartE2EDuration="5.774411753s" podCreationTimestamp="2024-10-09 01:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:11:23.77417859 +0000 UTC m=+82.311776884" watchObservedRunningTime="2024-10-09 01:11:23.774411753 +0000 UTC m=+82.312010087"
Oct 9 01:11:24.763781 kubelet[2679]: E1009 01:11:24.763726 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:11:25.850969 systemd-networkd[1384]: lxc_health: Link UP
Oct 9 01:11:25.855989 systemd-networkd[1384]: lxc_health: Gained carrier
Oct 9 01:11:26.537115 kubelet[2679]: E1009 01:11:26.536136 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:11:26.537115 kubelet[2679]: E1009 01:11:26.537030 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:11:26.595406 kubelet[2679]: E1009 01:11:26.594675 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:11:26.776595 kubelet[2679]: E1009 01:11:26.776567 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:11:27.628625 systemd-networkd[1384]: lxc_health: Gained IPv6LL
Oct 9 01:11:29.047444 kubelet[2679]: E1009 01:11:29.047404 2679 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35442->127.0.0.1:41415: write tcp 127.0.0.1:35442->127.0.0.1:41415: write: broken pipe
Oct 9 01:11:31.110861 systemd[1]: run-containerd-runc-k8s.io-f2356a9247b26d28f49411318e272f6bfe1da8868ad892824e2cdd6fcf774cea-runc.pz41BS.mount: Deactivated successfully.
Oct 9 01:11:31.149604 sshd[4485]: pam_unix(sshd:session): session closed for user core
Oct 9 01:11:31.152975 systemd[1]: sshd@25-10.0.0.157:22-10.0.0.1:45402.service: Deactivated successfully.
Oct 9 01:11:31.154641 systemd[1]: session-26.scope: Deactivated successfully.
Oct 9 01:11:31.155195 systemd-logind[1437]: Session 26 logged out. Waiting for processes to exit.
Oct 9 01:11:31.156226 systemd-logind[1437]: Removed session 26.