Nov 12 22:29:33.895716 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 12 22:29:33.895736 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Tue Nov 12 21:07:55 -00 2024
Nov 12 22:29:33.895747 kernel: KASLR enabled
Nov 12 22:29:33.895761 kernel: efi: EFI v2.7 by EDK II
Nov 12 22:29:33.895768 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Nov 12 22:29:33.895774 kernel: random: crng init done
Nov 12 22:29:33.895790 kernel: secureboot: Secure boot disabled
Nov 12 22:29:33.895797 kernel: ACPI: Early table checksum verification disabled
Nov 12 22:29:33.895803 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Nov 12 22:29:33.895812 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 12 22:29:33.895818 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:29:33.895844 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:29:33.895850 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:29:33.895857 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:29:33.895865 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:29:33.895873 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:29:33.895880 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:29:33.895887 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:29:33.895894 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:29:33.895901 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 12 22:29:33.895907 kernel: NUMA: Failed to initialise from firmware
Nov 12 22:29:33.895914 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 22:29:33.895921 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Nov 12 22:29:33.895927 kernel: Zone ranges:
Nov 12 22:29:33.895934 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 22:29:33.895942 kernel: DMA32 empty
Nov 12 22:29:33.895949 kernel: Normal empty
Nov 12 22:29:33.895956 kernel: Movable zone start for each node
Nov 12 22:29:33.895963 kernel: Early memory node ranges
Nov 12 22:29:33.895969 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Nov 12 22:29:33.895976 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Nov 12 22:29:33.895983 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Nov 12 22:29:33.895989 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Nov 12 22:29:33.895996 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Nov 12 22:29:33.896003 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Nov 12 22:29:33.896009 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 12 22:29:33.896017 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 22:29:33.896024 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 12 22:29:33.896031 kernel: psci: probing for conduit method from ACPI.
Nov 12 22:29:33.896038 kernel: psci: PSCIv1.1 detected in firmware.
Nov 12 22:29:33.896047 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 12 22:29:33.896055 kernel: psci: Trusted OS migration not required
Nov 12 22:29:33.896062 kernel: psci: SMC Calling Convention v1.1
Nov 12 22:29:33.896070 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 12 22:29:33.896078 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Nov 12 22:29:33.896085 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Nov 12 22:29:33.896092 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 12 22:29:33.896099 kernel: Detected PIPT I-cache on CPU0
Nov 12 22:29:33.896106 kernel: CPU features: detected: GIC system register CPU interface
Nov 12 22:29:33.896114 kernel: CPU features: detected: Hardware dirty bit management
Nov 12 22:29:33.896121 kernel: CPU features: detected: Spectre-v4
Nov 12 22:29:33.896128 kernel: CPU features: detected: Spectre-BHB
Nov 12 22:29:33.896135 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 12 22:29:33.896143 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 12 22:29:33.896150 kernel: CPU features: detected: ARM erratum 1418040
Nov 12 22:29:33.896158 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 12 22:29:33.896165 kernel: alternatives: applying boot alternatives
Nov 12 22:29:33.896173 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=054b3f497d0699ec5dd6f755e221ed9e2d4f35054d20dd4fb5abe997efb88cfb
Nov 12 22:29:33.896180 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 22:29:33.896188 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 22:29:33.896195 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 22:29:33.896202 kernel: Fallback order for Node 0: 0
Nov 12 22:29:33.896209 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Nov 12 22:29:33.896216 kernel: Policy zone: DMA
Nov 12 22:29:33.896225 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 22:29:33.896232 kernel: software IO TLB: area num 4.
Nov 12 22:29:33.896239 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Nov 12 22:29:33.896247 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Nov 12 22:29:33.896254 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 22:29:33.896261 kernel: trace event string verifier disabled
Nov 12 22:29:33.896272 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 22:29:33.896280 kernel: rcu: RCU event tracing is enabled.
Nov 12 22:29:33.896288 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 22:29:33.896295 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 22:29:33.896302 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 22:29:33.896310 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 22:29:33.896318 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 22:29:33.896326 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 12 22:29:33.896333 kernel: GICv3: 256 SPIs implemented
Nov 12 22:29:33.896340 kernel: GICv3: 0 Extended SPIs implemented
Nov 12 22:29:33.896347 kernel: Root IRQ handler: gic_handle_irq
Nov 12 22:29:33.896354 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 12 22:29:33.896361 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 12 22:29:33.896369 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 12 22:29:33.896376 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Nov 12 22:29:33.896383 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Nov 12 22:29:33.896391 kernel: GICv3: using LPI property table @0x00000000400f0000
Nov 12 22:29:33.896399 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Nov 12 22:29:33.896407 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 22:29:33.896414 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 22:29:33.896421 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 12 22:29:33.896428 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 12 22:29:33.896436 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 12 22:29:33.896443 kernel: arm-pv: using stolen time PV
Nov 12 22:29:33.896450 kernel: Console: colour dummy device 80x25
Nov 12 22:29:33.896458 kernel: ACPI: Core revision 20230628
Nov 12 22:29:33.896465 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 12 22:29:33.896473 kernel: pid_max: default: 32768 minimum: 301
Nov 12 22:29:33.896481 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 22:29:33.896489 kernel: landlock: Up and running.
Nov 12 22:29:33.896496 kernel: SELinux: Initializing.
Nov 12 22:29:33.896504 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 22:29:33.896511 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 22:29:33.896519 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 22:29:33.896526 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 22:29:33.896534 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 22:29:33.896541 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 22:29:33.896550 kernel: Platform MSI: ITS@0x8080000 domain created
Nov 12 22:29:33.896557 kernel: PCI/MSI: ITS@0x8080000 domain created
Nov 12 22:29:33.896565 kernel: Remapping and enabling EFI services.
Nov 12 22:29:33.896572 kernel: smp: Bringing up secondary CPUs ...
Nov 12 22:29:33.896579 kernel: Detected PIPT I-cache on CPU1
Nov 12 22:29:33.896587 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 12 22:29:33.896594 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Nov 12 22:29:33.896602 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 22:29:33.896609 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 12 22:29:33.896617 kernel: Detected PIPT I-cache on CPU2
Nov 12 22:29:33.896625 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 12 22:29:33.896640 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Nov 12 22:29:33.896649 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 22:29:33.896657 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 12 22:29:33.896664 kernel: Detected PIPT I-cache on CPU3
Nov 12 22:29:33.896672 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 12 22:29:33.896680 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Nov 12 22:29:33.896688 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 22:29:33.896697 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 12 22:29:33.896705 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 22:29:33.896712 kernel: SMP: Total of 4 processors activated.
Nov 12 22:29:33.896720 kernel: CPU features: detected: 32-bit EL0 Support
Nov 12 22:29:33.896728 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 12 22:29:33.896736 kernel: CPU features: detected: Common not Private translations
Nov 12 22:29:33.896743 kernel: CPU features: detected: CRC32 instructions
Nov 12 22:29:33.896751 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 12 22:29:33.896765 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 12 22:29:33.896772 kernel: CPU features: detected: LSE atomic instructions
Nov 12 22:29:33.896800 kernel: CPU features: detected: Privileged Access Never
Nov 12 22:29:33.896810 kernel: CPU features: detected: RAS Extension Support
Nov 12 22:29:33.896818 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 12 22:29:33.896825 kernel: CPU: All CPU(s) started at EL1
Nov 12 22:29:33.896833 kernel: alternatives: applying system-wide alternatives
Nov 12 22:29:33.896841 kernel: devtmpfs: initialized
Nov 12 22:29:33.896849 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 22:29:33.896859 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 22:29:33.896866 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 22:29:33.896874 kernel: SMBIOS 3.0.0 present.
Nov 12 22:29:33.896882 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Nov 12 22:29:33.896890 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 22:29:33.896897 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 12 22:29:33.896906 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 12 22:29:33.896914 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 12 22:29:33.896922 kernel: audit: initializing netlink subsys (disabled)
Nov 12 22:29:33.896931 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Nov 12 22:29:33.896938 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 22:29:33.896946 kernel: cpuidle: using governor menu
Nov 12 22:29:33.896954 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 12 22:29:33.896962 kernel: ASID allocator initialised with 32768 entries
Nov 12 22:29:33.896970 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 22:29:33.896977 kernel: Serial: AMBA PL011 UART driver
Nov 12 22:29:33.896985 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 12 22:29:33.896993 kernel: Modules: 0 pages in range for non-PLT usage
Nov 12 22:29:33.897002 kernel: Modules: 508960 pages in range for PLT usage
Nov 12 22:29:33.897010 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 22:29:33.897018 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 22:29:33.897026 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 12 22:29:33.897034 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 12 22:29:33.897041 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 22:29:33.897049 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 22:29:33.897057 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 12 22:29:33.897065 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 12 22:29:33.897072 kernel: ACPI: Added _OSI(Module Device)
Nov 12 22:29:33.897081 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 22:29:33.897089 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 22:29:33.897097 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 22:29:33.897104 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 22:29:33.897112 kernel: ACPI: Interpreter enabled
Nov 12 22:29:33.897120 kernel: ACPI: Using GIC for interrupt routing
Nov 12 22:29:33.897128 kernel: ACPI: MCFG table detected, 1 entries
Nov 12 22:29:33.897135 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 12 22:29:33.897143 kernel: printk: console [ttyAMA0] enabled
Nov 12 22:29:33.897152 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 22:29:33.897275 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 22:29:33.897349 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 12 22:29:33.897418 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 12 22:29:33.897483 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 12 22:29:33.897546 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 12 22:29:33.897556 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 12 22:29:33.897566 kernel: PCI host bridge to bus 0000:00
Nov 12 22:29:33.897632 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 12 22:29:33.897691 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 12 22:29:33.897749 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 12 22:29:33.897850 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 22:29:33.897933 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Nov 12 22:29:33.898016 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 22:29:33.898099 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Nov 12 22:29:33.898169 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Nov 12 22:29:33.898236 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 12 22:29:33.898303 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 12 22:29:33.898368 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Nov 12 22:29:33.898433 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Nov 12 22:29:33.898494 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 12 22:29:33.898552 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 12 22:29:33.898610 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 12 22:29:33.898620 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 12 22:29:33.898628 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 12 22:29:33.898636 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 12 22:29:33.898643 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 12 22:29:33.898651 kernel: iommu: Default domain type: Translated
Nov 12 22:29:33.898660 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 12 22:29:33.898668 kernel: efivars: Registered efivars operations
Nov 12 22:29:33.898676 kernel: vgaarb: loaded
Nov 12 22:29:33.898683 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 12 22:29:33.898691 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 22:29:33.898699 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 22:29:33.898706 kernel: pnp: PnP ACPI init
Nov 12 22:29:33.898833 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 12 22:29:33.898851 kernel: pnp: PnP ACPI: found 1 devices
Nov 12 22:29:33.898861 kernel: NET: Registered PF_INET protocol family
Nov 12 22:29:33.898869 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 22:29:33.898876 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 22:29:33.898884 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 22:29:33.898892 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 22:29:33.898900 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 22:29:33.898907 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 22:29:33.898915 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 22:29:33.898925 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 22:29:33.898943 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 22:29:33.898951 kernel: PCI: CLS 0 bytes, default 64
Nov 12 22:29:33.898958 kernel: kvm [1]: HYP mode not available
Nov 12 22:29:33.898966 kernel: Initialise system trusted keyrings
Nov 12 22:29:33.898974 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 22:29:33.898981 kernel: Key type asymmetric registered
Nov 12 22:29:33.898989 kernel: Asymmetric key parser 'x509' registered
Nov 12 22:29:33.898996 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 12 22:29:33.899005 kernel: io scheduler mq-deadline registered
Nov 12 22:29:33.899013 kernel: io scheduler kyber registered
Nov 12 22:29:33.899020 kernel: io scheduler bfq registered
Nov 12 22:29:33.899028 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 12 22:29:33.899035 kernel: ACPI: button: Power Button [PWRB]
Nov 12 22:29:33.899043 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 12 22:29:33.899116 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 12 22:29:33.899127 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 22:29:33.899135 kernel: thunder_xcv, ver 1.0
Nov 12 22:29:33.899142 kernel: thunder_bgx, ver 1.0
Nov 12 22:29:33.899151 kernel: nicpf, ver 1.0
Nov 12 22:29:33.899159 kernel: nicvf, ver 1.0
Nov 12 22:29:33.899232 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 12 22:29:33.899295 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-11-12T22:29:33 UTC (1731450573)
Nov 12 22:29:33.899305 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 12 22:29:33.899326 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Nov 12 22:29:33.899336 kernel: watchdog: Delayed init of the lockup detector failed: -19
Nov 12 22:29:33.899343 kernel: watchdog: Hard watchdog permanently disabled
Nov 12 22:29:33.899353 kernel: NET: Registered PF_INET6 protocol family
Nov 12 22:29:33.899360 kernel: Segment Routing with IPv6
Nov 12 22:29:33.899368 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 22:29:33.899375 kernel: NET: Registered PF_PACKET protocol family
Nov 12 22:29:33.899383 kernel: Key type dns_resolver registered
Nov 12 22:29:33.899390 kernel: registered taskstats version 1
Nov 12 22:29:33.899398 kernel: Loading compiled-in X.509 certificates
Nov 12 22:29:33.899406 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 27dd0d090d7a0971a24582c9198f7e80123ea69f'
Nov 12 22:29:33.899413 kernel: Key type .fscrypt registered
Nov 12 22:29:33.899422 kernel: Key type fscrypt-provisioning registered
Nov 12 22:29:33.899430 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 22:29:33.899437 kernel: ima: Allocated hash algorithm: sha1
Nov 12 22:29:33.899445 kernel: ima: No architecture policies found
Nov 12 22:29:33.899452 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 12 22:29:33.899460 kernel: clk: Disabling unused clocks
Nov 12 22:29:33.899468 kernel: Freeing unused kernel memory: 39680K
Nov 12 22:29:33.899475 kernel: Run /init as init process
Nov 12 22:29:33.899484 kernel: with arguments:
Nov 12 22:29:33.899491 kernel: /init
Nov 12 22:29:33.899498 kernel: with environment:
Nov 12 22:29:33.899506 kernel: HOME=/
Nov 12 22:29:33.899513 kernel: TERM=linux
Nov 12 22:29:33.899521 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 22:29:33.899530 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 22:29:33.899539 systemd[1]: Detected virtualization kvm.
Nov 12 22:29:33.899549 systemd[1]: Detected architecture arm64.
Nov 12 22:29:33.899557 systemd[1]: Running in initrd.
Nov 12 22:29:33.899565 systemd[1]: No hostname configured, using default hostname.
Nov 12 22:29:33.899572 systemd[1]: Hostname set to .
Nov 12 22:29:33.899581 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 22:29:33.899589 systemd[1]: Queued start job for default target initrd.target.
Nov 12 22:29:33.899597 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 22:29:33.899605 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 22:29:33.899615 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 22:29:33.899623 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 22:29:33.899631 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 22:29:33.899639 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 22:29:33.899649 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 22:29:33.899658 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 22:29:33.899666 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 22:29:33.899675 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 22:29:33.899683 systemd[1]: Reached target paths.target - Path Units.
Nov 12 22:29:33.899691 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 22:29:33.899699 systemd[1]: Reached target swap.target - Swaps.
Nov 12 22:29:33.899708 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 22:29:33.899716 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 22:29:33.899724 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 22:29:33.899732 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 22:29:33.899740 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 22:29:33.899749 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 22:29:33.899768 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 22:29:33.899776 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 22:29:33.899797 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 22:29:33.899805 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 22:29:33.899813 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 22:29:33.899821 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 22:29:33.899829 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 22:29:33.899840 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 22:29:33.899848 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 22:29:33.899856 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 22:29:33.899868 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 22:29:33.899876 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 22:29:33.899884 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 22:29:33.899911 systemd-journald[239]: Collecting audit messages is disabled.
Nov 12 22:29:33.899930 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 22:29:33.899939 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 22:29:33.899949 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 22:29:33.899957 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 22:29:33.899966 systemd-journald[239]: Journal started
Nov 12 22:29:33.899984 systemd-journald[239]: Runtime Journal (/run/log/journal/47dd1251ca79441eb858a5f201e3ac0a) is 5.9M, max 47.3M, 41.4M free.
Nov 12 22:29:33.882828 systemd-modules-load[240]: Inserted module 'overlay'
Nov 12 22:29:33.902229 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 22:29:33.902252 kernel: Bridge firewalling registered
Nov 12 22:29:33.902604 systemd-modules-load[240]: Inserted module 'br_netfilter'
Nov 12 22:29:33.902933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 22:29:33.904379 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 22:29:33.918933 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 22:29:33.920182 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 22:29:33.921636 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 22:29:33.922848 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 22:29:33.926512 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 22:29:33.928881 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 22:29:33.930799 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 22:29:33.933666 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 22:29:33.936789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 22:29:33.942566 dracut-cmdline[273]: dracut-dracut-053
Nov 12 22:29:33.944746 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=054b3f497d0699ec5dd6f755e221ed9e2d4f35054d20dd4fb5abe997efb88cfb
Nov 12 22:29:33.961993 systemd-resolved[281]: Positive Trust Anchors:
Nov 12 22:29:33.962067 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 22:29:33.962102 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 22:29:33.966663 systemd-resolved[281]: Defaulting to hostname 'linux'.
Nov 12 22:29:33.968082 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 22:29:33.968905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 22:29:34.005806 kernel: SCSI subsystem initialized
Nov 12 22:29:34.011799 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 22:29:34.019806 kernel: iscsi: registered transport (tcp)
Nov 12 22:29:34.031812 kernel: iscsi: registered transport (qla4xxx)
Nov 12 22:29:34.031829 kernel: QLogic iSCSI HBA Driver
Nov 12 22:29:34.070990 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 22:29:34.081906 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 22:29:34.097867 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 22:29:34.097909 kernel: device-mapper: uevent: version 1.0.3
Nov 12 22:29:34.098799 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 22:29:34.142798 kernel: raid6: neonx8 gen() 15741 MB/s
Nov 12 22:29:34.159807 kernel: raid6: neonx4 gen() 15637 MB/s
Nov 12 22:29:34.176792 kernel: raid6: neonx2 gen() 13158 MB/s
Nov 12 22:29:34.193803 kernel: raid6: neonx1 gen() 10467 MB/s
Nov 12 22:29:34.210794 kernel: raid6: int64x8 gen() 6947 MB/s
Nov 12 22:29:34.227792 kernel: raid6: int64x4 gen() 7341 MB/s
Nov 12 22:29:34.244793 kernel: raid6: int64x2 gen() 6115 MB/s
Nov 12 22:29:34.261802 kernel: raid6: int64x1 gen() 5049 MB/s
Nov 12 22:29:34.261826 kernel: raid6: using algorithm neonx8 gen() 15741 MB/s
Nov 12 22:29:34.278805 kernel: raid6: .... xor() 11897 MB/s, rmw enabled
Nov 12 22:29:34.278819 kernel: raid6: using neon recovery algorithm
Nov 12 22:29:34.284164 kernel: xor: measuring software checksum speed
Nov 12 22:29:34.284192 kernel: 8regs : 19831 MB/sec
Nov 12 22:29:34.284211 kernel: 32regs : 19231 MB/sec
Nov 12 22:29:34.285088 kernel: arm64_neon : 26936 MB/sec
Nov 12 22:29:34.285105 kernel: xor: using function: arm64_neon (26936 MB/sec)
Nov 12 22:29:34.333807 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 22:29:34.343862 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 22:29:34.353931 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 22:29:34.365023 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Nov 12 22:29:34.368080 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 22:29:34.370422 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 22:29:34.384217 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Nov 12 22:29:34.408412 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 22:29:34.419951 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 22:29:34.458051 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 22:29:34.464912 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 22:29:34.478836 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 22:29:34.479999 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 22:29:34.481380 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 22:29:34.483192 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 22:29:34.490142 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 22:29:34.498339 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 22:29:34.511339 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 12 22:29:34.518882 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 22:29:34.518987 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 22:29:34.518999 kernel: GPT:9289727 != 19775487
Nov 12 22:29:34.519008 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 22:29:34.519023 kernel: GPT:9289727 != 19775487
Nov 12 22:29:34.519034 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 22:29:34.519044 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 22:29:34.511981 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 22:29:34.512089 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 22:29:34.513212 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 22:29:34.516005 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 22:29:34.516155 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 22:29:34.517908 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 22:29:34.530087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 22:29:34.532293 kernel: BTRFS: device fsid 337794e4-53df-462b-aefc-e93e6a958f34 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (512)
Nov 12 22:29:34.533828 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (510)
Nov 12 22:29:34.541054 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 22:29:34.542125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 22:29:34.549767 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 22:29:34.556428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 22:29:34.559899 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 22:29:34.560727 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 22:29:34.576004 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 22:29:34.577869 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 22:29:34.583038 disk-uuid[552]: Primary Header is updated.
Nov 12 22:29:34.583038 disk-uuid[552]: Secondary Entries is updated.
Nov 12 22:29:34.583038 disk-uuid[552]: Secondary Header is updated.
Nov 12 22:29:34.586801 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 22:29:34.600512 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 22:29:35.593809 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 22:29:35.594290 disk-uuid[553]: The operation has completed successfully.
Nov 12 22:29:35.612558 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 22:29:35.612653 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 22:29:35.633923 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 22:29:35.636601 sh[572]: Success
Nov 12 22:29:35.650816 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 12 22:29:35.675831 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 22:29:35.690089 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 22:29:35.692807 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 22:29:35.699867 kernel: BTRFS info (device dm-0): first mount of filesystem 337794e4-53df-462b-aefc-e93e6a958f34
Nov 12 22:29:35.699899 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 12 22:29:35.699910 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 22:29:35.701161 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 22:29:35.701796 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 22:29:35.704515 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 22:29:35.705594 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 22:29:35.713988 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 22:29:35.715246 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 22:29:35.721391 kernel: BTRFS info (device vda6): first mount of filesystem e7e17182-4510-4c0b-82ae-ebdf6a7625d9
Nov 12 22:29:35.721432 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 22:29:35.721443 kernel: BTRFS info (device vda6): using free space tree
Nov 12 22:29:35.723808 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 22:29:35.730210 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 22:29:35.731821 kernel: BTRFS info (device vda6): last unmount of filesystem e7e17182-4510-4c0b-82ae-ebdf6a7625d9
Nov 12 22:29:35.736958 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 22:29:35.743938 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 22:29:35.806655 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 22:29:35.818947 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 22:29:35.833286 ignition[662]: Ignition 2.20.0
Nov 12 22:29:35.833296 ignition[662]: Stage: fetch-offline
Nov 12 22:29:35.833327 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Nov 12 22:29:35.833334 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:29:35.833487 ignition[662]: parsed url from cmdline: ""
Nov 12 22:29:35.833490 ignition[662]: no config URL provided
Nov 12 22:29:35.833495 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 22:29:35.833502 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Nov 12 22:29:35.833526 ignition[662]: op(1): [started] loading QEMU firmware config module
Nov 12 22:29:35.833530 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 22:29:35.841609 systemd-networkd[763]: lo: Link UP
Nov 12 22:29:35.841622 systemd-networkd[763]: lo: Gained carrier
Nov 12 22:29:35.842520 systemd-networkd[763]: Enumeration completed
Nov 12 22:29:35.843030 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 22:29:35.843059 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 22:29:35.843062 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 22:29:35.847630 ignition[662]: op(1): [finished] loading QEMU firmware config module
Nov 12 22:29:35.843895 systemd-networkd[763]: eth0: Link UP
Nov 12 22:29:35.843898 systemd-networkd[763]: eth0: Gained carrier
Nov 12 22:29:35.843905 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 22:29:35.844613 systemd[1]: Reached target network.target - Network.
Nov 12 22:29:35.862849 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 22:29:35.888214 ignition[662]: parsing config with SHA512: dc73b04cdcee06f9b40887a99ef8707032069655f164aa7bbf646269292fbad48ea7fb04b1e3d639e20f9c831833a255c188b1971e62a9ca444930ec1ef5c9e3
Nov 12 22:29:35.892625 unknown[662]: fetched base config from "system"
Nov 12 22:29:35.892634 unknown[662]: fetched user config from "qemu"
Nov 12 22:29:35.893070 ignition[662]: fetch-offline: fetch-offline passed
Nov 12 22:29:35.894557 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 22:29:35.893141 ignition[662]: Ignition finished successfully
Nov 12 22:29:35.896140 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 22:29:35.902985 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 22:29:35.913127 ignition[769]: Ignition 2.20.0
Nov 12 22:29:35.913137 ignition[769]: Stage: kargs
Nov 12 22:29:35.913284 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Nov 12 22:29:35.913293 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:29:35.914207 ignition[769]: kargs: kargs passed
Nov 12 22:29:35.914249 ignition[769]: Ignition finished successfully
Nov 12 22:29:35.916410 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 22:29:35.925943 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 22:29:35.934374 ignition[777]: Ignition 2.20.0
Nov 12 22:29:35.934384 ignition[777]: Stage: disks
Nov 12 22:29:35.934529 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Nov 12 22:29:35.934537 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:29:35.935422 ignition[777]: disks: disks passed
Nov 12 22:29:35.935461 ignition[777]: Ignition finished successfully
Nov 12 22:29:35.938355 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 22:29:35.939406 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 22:29:35.940656 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 22:29:35.942261 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 22:29:35.943724 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 22:29:35.945237 systemd[1]: Reached target basic.target - Basic System.
Nov 12 22:29:35.953900 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 22:29:35.964844 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 22:29:35.968239 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 22:29:35.979923 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 22:29:36.023497 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 22:29:36.024628 kernel: EXT4-fs (vda9): mounted filesystem be7e07bb-77fc-4aec-a4f6-d76dc4498784 r/w with ordered data mode. Quota mode: none.
Nov 12 22:29:36.024486 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 22:29:36.037885 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 22:29:36.039275 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 22:29:36.040252 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 22:29:36.040340 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 22:29:36.040390 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 22:29:36.047686 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797)
Nov 12 22:29:36.047705 kernel: BTRFS info (device vda6): first mount of filesystem e7e17182-4510-4c0b-82ae-ebdf6a7625d9
Nov 12 22:29:36.047716 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 22:29:36.047726 kernel: BTRFS info (device vda6): using free space tree
Nov 12 22:29:36.045808 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 22:29:36.050030 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 22:29:36.049636 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 22:29:36.052915 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 22:29:36.096353 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 22:29:36.099267 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Nov 12 22:29:36.102656 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 22:29:36.106164 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 22:29:36.176275 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 22:29:36.185897 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 22:29:36.187243 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 22:29:36.191801 kernel: BTRFS info (device vda6): last unmount of filesystem e7e17182-4510-4c0b-82ae-ebdf6a7625d9
Nov 12 22:29:36.207069 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 22:29:36.208905 ignition[912]: INFO : Ignition 2.20.0
Nov 12 22:29:36.208905 ignition[912]: INFO : Stage: mount
Nov 12 22:29:36.210096 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 22:29:36.210096 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:29:36.210096 ignition[912]: INFO : mount: mount passed
Nov 12 22:29:36.212110 ignition[912]: INFO : Ignition finished successfully
Nov 12 22:29:36.211933 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 22:29:36.224930 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 22:29:36.699476 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 22:29:36.710937 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 22:29:36.715799 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926)
Nov 12 22:29:36.717266 kernel: BTRFS info (device vda6): first mount of filesystem e7e17182-4510-4c0b-82ae-ebdf6a7625d9
Nov 12 22:29:36.717281 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 22:29:36.717292 kernel: BTRFS info (device vda6): using free space tree
Nov 12 22:29:36.719805 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 22:29:36.720551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 22:29:36.735744 ignition[943]: INFO : Ignition 2.20.0
Nov 12 22:29:36.735744 ignition[943]: INFO : Stage: files
Nov 12 22:29:36.736949 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 22:29:36.736949 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:29:36.736949 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 22:29:36.739444 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 22:29:36.739444 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 22:29:36.742301 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 22:29:36.743338 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 22:29:36.743338 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 22:29:36.742795 unknown[943]: wrote ssh authorized keys file for user: core
Nov 12 22:29:36.746261 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 22:29:36.746261 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Nov 12 22:29:36.794730 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 22:29:37.484069 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 22:29:37.485704 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 12 22:29:37.485704 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Nov 12 22:29:37.783938 systemd-networkd[763]: eth0: Gained IPv6LL
Nov 12 22:29:37.832174 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 12 22:29:37.963513 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Nov 12 22:29:37.965013 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Nov 12 22:29:38.188655 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 12 22:29:38.474904 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Nov 12 22:29:38.474904 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 12 22:29:38.477750 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 22:29:38.477750 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 22:29:38.477750 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 12 22:29:38.477750 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 12 22:29:38.477750 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 22:29:38.477750 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 22:29:38.477750 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 12 22:29:38.477750 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 12 22:29:38.498330 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 22:29:38.501694 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 22:29:38.503892 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 12 22:29:38.503892 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 22:29:38.503892 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 22:29:38.503892 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 22:29:38.503892 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 22:29:38.503892 ignition[943]: INFO : files: files passed
Nov 12 22:29:38.503892 ignition[943]: INFO : Ignition finished successfully
Nov 12 22:29:38.504606 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 22:29:38.512913 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 22:29:38.515080 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 22:29:38.516247 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 22:29:38.516326 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 22:29:38.521752 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 12 22:29:38.524772 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 22:29:38.524772 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 22:29:38.527098 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 22:29:38.526569 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 22:29:38.528339 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 22:29:38.538910 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 22:29:38.567697 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 22:29:38.567811 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 22:29:38.569441 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 22:29:38.570737 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 22:29:38.572039 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 22:29:38.572725 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 22:29:38.586811 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 22:29:38.602985 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 22:29:38.610233 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 22:29:38.611131 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 22:29:38.612586 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 22:29:38.613829 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 22:29:38.613932 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 22:29:38.615715 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 22:29:38.617223 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 22:29:38.618388 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 22:29:38.619574 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 22:29:38.621027 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 22:29:38.622448 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 22:29:38.623721 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 22:29:38.625304 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 22:29:38.626650 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 22:29:38.627872 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 22:29:38.629058 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 22:29:38.629156 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 22:29:38.630820 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 22:29:38.632198 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 22:29:38.633560 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 22:29:38.633652 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 22:29:38.635043 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 22:29:38.635140 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 22:29:38.637278 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 22:29:38.637380 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 22:29:38.638691 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 22:29:38.639790 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 22:29:38.639883 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 22:29:38.641274 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 22:29:38.642497 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 22:29:38.643567 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 22:29:38.643642 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 22:29:38.644988 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 22:29:38.645066 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 22:29:38.646553 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 22:29:38.646646 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 22:29:38.647838 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 22:29:38.647930 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 22:29:38.670022 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 22:29:38.670707 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 22:29:38.670849 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 22:29:38.673002 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 22:29:38.673613 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 22:29:38.673726 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 22:29:38.674701 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 22:29:38.674816 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 22:29:38.678808 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 22:29:38.683506 ignition[998]: INFO : Ignition 2.20.0
Nov 12 22:29:38.683506 ignition[998]: INFO : Stage: umount
Nov 12 22:29:38.683506 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 22:29:38.683506 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:29:38.683506 ignition[998]: INFO : umount: umount passed
Nov 12 22:29:38.683506 ignition[998]: INFO : Ignition finished successfully
Nov 12 22:29:38.681061 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 22:29:38.688970 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 22:29:38.689069 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 22:29:38.690448 systemd[1]: Stopped target network.target - Network.
Nov 12 22:29:38.691718 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 22:29:38.691796 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 22:29:38.693158 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 22:29:38.693194 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 22:29:38.694618 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 22:29:38.694655 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 22:29:38.696011 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 22:29:38.696049 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 22:29:38.697612 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 22:29:38.698767 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 22:29:38.701692 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 22:29:38.702879 systemd-networkd[763]: eth0: DHCPv6 lease lost
Nov 12 22:29:38.705117 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 22:29:38.705217 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 22:29:38.708021 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 22:29:38.708400 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 22:29:38.711470 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 22:29:38.711523 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 22:29:38.723895 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 22:29:38.725237 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 22:29:38.725299 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 22:29:38.726879 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 22:29:38.726923 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 22:29:38.728337 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 22:29:38.728376 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 22:29:38.729676 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 22:29:38.729709 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 22:29:38.731199 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 22:29:38.739616 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 22:29:38.740523 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 22:29:38.741562 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 22:29:38.741677 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 22:29:38.743373 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 22:29:38.743447 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 22:29:38.744412 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 22:29:38.744450 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 22:29:38.746030 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 22:29:38.746076 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 22:29:38.748113 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 22:29:38.748157 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 22:29:38.750218 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 22:29:38.750269 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 22:29:38.764934 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 22:29:38.765686 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 22:29:38.765747 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 22:29:38.767342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 22:29:38.767381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 22:29:38.768975 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 22:29:38.769081 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 22:29:38.772096 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 22:29:38.772182 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 22:29:38.773631 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 22:29:38.774426 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 22:29:38.774485 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 22:29:38.776499 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 22:29:38.785156 systemd[1]: Switching root.
Nov 12 22:29:38.816565 systemd-journald[239]: Journal stopped
Nov 12 22:29:39.460087 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Nov 12 22:29:39.460145 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 22:29:39.460157 kernel: SELinux: policy capability open_perms=1
Nov 12 22:29:39.460171 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 22:29:39.460180 kernel: SELinux: policy capability always_check_network=0
Nov 12 22:29:39.460189 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 22:29:39.460200 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 22:29:39.460211 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 22:29:39.460221 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 22:29:39.460230 kernel: audit: type=1403 audit(1731450578.961:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 22:29:39.460241 systemd[1]: Successfully loaded SELinux policy in 31.924ms.
Nov 12 22:29:39.460259 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.956ms.
Nov 12 22:29:39.460270 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 22:29:39.460284 systemd[1]: Detected virtualization kvm.
Nov 12 22:29:39.460294 systemd[1]: Detected architecture arm64.
Nov 12 22:29:39.460304 systemd[1]: Detected first boot.
Nov 12 22:29:39.460316 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 22:29:39.460327 zram_generator::config[1044]: No configuration found.
Nov 12 22:29:39.460338 systemd[1]: Populated /etc with preset unit settings.
Nov 12 22:29:39.460348 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 22:29:39.460358 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 22:29:39.460368 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 22:29:39.460381 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 22:29:39.460392 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 22:29:39.460404 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 22:29:39.460414 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 22:29:39.460424 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 22:29:39.460435 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 22:29:39.460445 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 22:29:39.460456 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 22:29:39.460466 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 22:29:39.460477 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 22:29:39.460487 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 22:29:39.460499 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 22:29:39.460510 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 22:29:39.460520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 22:29:39.460530 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 12 22:29:39.460541 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 22:29:39.460551 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 22:29:39.460562 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 22:29:39.460572 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 22:29:39.460584 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 22:29:39.460594 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 22:29:39.460606 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 22:29:39.460616 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 22:29:39.460630 systemd[1]: Reached target swap.target - Swaps.
Nov 12 22:29:39.460641 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 22:29:39.460651 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 22:29:39.460661 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 22:29:39.460672 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 22:29:39.460684 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 22:29:39.460694 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 22:29:39.460705 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 22:29:39.460715 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 22:29:39.460733 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 22:29:39.460747 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 22:29:39.460758 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 22:29:39.460768 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 22:29:39.460796 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 22:29:39.460808 systemd[1]: Reached target machines.target - Containers.
Nov 12 22:29:39.460819 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 22:29:39.460830 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 22:29:39.460841 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 22:29:39.460851 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 22:29:39.460863 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 22:29:39.460873 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 22:29:39.460883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 22:29:39.460895 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 22:29:39.460906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 22:29:39.460916 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 22:29:39.460927 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 22:29:39.460937 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 22:29:39.460947 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 22:29:39.460957 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 22:29:39.460966 kernel: loop: module loaded
Nov 12 22:29:39.460978 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 22:29:39.460988 kernel: fuse: init (API version 7.39)
Nov 12 22:29:39.460997 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 22:29:39.461007 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 22:29:39.461017 kernel: ACPI: bus type drm_connector registered
Nov 12 22:29:39.461027 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 22:29:39.461037 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 22:29:39.461048 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 22:29:39.461058 systemd[1]: Stopped verity-setup.service.
Nov 12 22:29:39.461070 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 22:29:39.461081 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 22:29:39.461091 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 22:29:39.461101 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 22:29:39.461112 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 22:29:39.461142 systemd-journald[1115]: Collecting audit messages is disabled.
Nov 12 22:29:39.461163 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 22:29:39.461174 systemd-journald[1115]: Journal started
Nov 12 22:29:39.461195 systemd-journald[1115]: Runtime Journal (/run/log/journal/47dd1251ca79441eb858a5f201e3ac0a) is 5.9M, max 47.3M, 41.4M free.
Nov 12 22:29:39.297699 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 22:29:39.311327 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 12 22:29:39.311663 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 22:29:39.462862 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 22:29:39.464263 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 22:29:39.465839 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 22:29:39.467015 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 22:29:39.467214 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 22:29:39.468412 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 22:29:39.468623 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 22:29:39.469756 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 22:29:39.470002 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 22:29:39.471039 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 22:29:39.471242 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 22:29:39.472350 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 22:29:39.472478 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 22:29:39.473502 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 22:29:39.473633 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 22:29:39.474647 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 22:29:39.475699 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 22:29:39.476995 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 22:29:39.487899 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 22:29:39.497880 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 22:29:39.499590 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 22:29:39.500425 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 22:29:39.500456 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 22:29:39.502064 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 22:29:39.503889 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 22:29:39.505575 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 22:29:39.506424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 22:29:39.507559 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 22:29:39.509185 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 22:29:39.510058 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 22:29:39.513925 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 22:29:39.515510 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 22:29:39.517102 systemd-journald[1115]: Time spent on flushing to /var/log/journal/47dd1251ca79441eb858a5f201e3ac0a is 30.409ms for 857 entries.
Nov 12 22:29:39.517102 systemd-journald[1115]: System Journal (/var/log/journal/47dd1251ca79441eb858a5f201e3ac0a) is 8.0M, max 195.6M, 187.6M free.
Nov 12 22:29:39.556230 systemd-journald[1115]: Received client request to flush runtime journal.
Nov 12 22:29:39.556283 kernel: loop0: detected capacity change from 0 to 189592
Nov 12 22:29:39.519018 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 22:29:39.520953 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 22:29:39.523300 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 22:29:39.527299 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 22:29:39.528364 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 22:29:39.529399 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 22:29:39.530614 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 22:29:39.531790 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 22:29:39.536557 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 22:29:39.552948 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 22:29:39.556038 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 22:29:39.557392 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 22:29:39.558811 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 22:29:39.560029 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 22:29:39.567799 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 22:29:39.574066 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 22:29:39.577330 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 22:29:39.577898 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 22:29:39.579817 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 12 22:29:39.593390 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Nov 12 22:29:39.593406 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Nov 12 22:29:39.594800 kernel: loop1: detected capacity change from 0 to 113536
Nov 12 22:29:39.596623 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 22:29:39.621810 kernel: loop2: detected capacity change from 0 to 116808
Nov 12 22:29:39.653810 kernel: loop3: detected capacity change from 0 to 189592
Nov 12 22:29:39.658808 kernel: loop4: detected capacity change from 0 to 113536
Nov 12 22:29:39.662806 kernel: loop5: detected capacity change from 0 to 116808
Nov 12 22:29:39.665894 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 12 22:29:39.666238 (sd-merge)[1180]: Merged extensions into '/usr'.
Nov 12 22:29:39.669947 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 22:29:39.669960 systemd[1]: Reloading...
Nov 12 22:29:39.734808 zram_generator::config[1210]: No configuration found.
Nov 12 22:29:39.780651 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 22:29:39.812628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 22:29:39.847324 systemd[1]: Reloading finished in 177 ms.
Nov 12 22:29:39.883813 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 22:29:39.884986 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 22:29:39.893000 systemd[1]: Starting ensure-sysext.service...
Nov 12 22:29:39.894555 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 22:29:39.906386 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)...
Nov 12 22:29:39.906401 systemd[1]: Reloading...
Nov 12 22:29:39.916699 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 22:29:39.917004 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 22:29:39.917623 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 22:29:39.917984 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Nov 12 22:29:39.918040 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Nov 12 22:29:39.919997 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 22:29:39.920012 systemd-tmpfiles[1241]: Skipping /boot
Nov 12 22:29:39.926686 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 22:29:39.926706 systemd-tmpfiles[1241]: Skipping /boot
Nov 12 22:29:39.949805 zram_generator::config[1268]: No configuration found.
Nov 12 22:29:40.033172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 22:29:40.068417 systemd[1]: Reloading finished in 161 ms.
Nov 12 22:29:40.082645 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 22:29:40.099269 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 22:29:40.106713 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 12 22:29:40.108779 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 22:29:40.110680 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 22:29:40.115071 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 22:29:40.122531 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 22:29:40.124541 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 22:29:40.127319 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 22:29:40.131034 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 22:29:40.133340 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 22:29:40.138356 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 22:29:40.139290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 22:29:40.143121 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 22:29:40.144686 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 22:29:40.147271 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 22:29:40.147390 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 22:29:40.149312 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 22:29:40.149437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 22:29:40.153476 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 22:29:40.153618 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 22:29:40.157557 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 22:29:40.157850 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 22:29:40.160141 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 22:29:40.163901 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 22:29:40.169137 systemd-udevd[1314]: Using default interface naming scheme 'v255'.
Nov 12 22:29:40.170119 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 22:29:40.173921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 22:29:40.178219 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 22:29:40.181046 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 22:29:40.183461 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 22:29:40.184307 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 22:29:40.185872 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 22:29:40.187855 augenrules[1342]: No rules
Nov 12 22:29:40.188154 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 22:29:40.188319 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 22:29:40.189700 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 12 22:29:40.189920 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 12 22:29:40.191082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 22:29:40.192508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 22:29:40.192628 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 22:29:40.194054 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 22:29:40.194170 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 22:29:40.203945 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 22:29:40.216249 systemd[1]: Finished ensure-sysext.service.
Nov 12 22:29:40.224687 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Nov 12 22:29:40.232808 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1350)
Nov 12 22:29:40.236089 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 12 22:29:40.236882 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 22:29:40.242033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 22:29:40.244546 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 22:29:40.248266 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 22:29:40.251079 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 22:29:40.251916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 22:29:40.253778 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 22:29:40.256232 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 12 22:29:40.257058 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 22:29:40.258201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 22:29:40.259823 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 22:29:40.261063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 22:29:40.261190 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 22:29:40.262257 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 22:29:40.262392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 22:29:40.265739 augenrules[1378]: /sbin/augenrules: No change
Nov 12 22:29:40.270698 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 22:29:40.279229 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1367)
Nov 12 22:29:40.278884 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 22:29:40.279052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 22:29:40.279366 augenrules[1409]: No rules
Nov 12 22:29:40.280441 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 12 22:29:40.280635 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 12 22:29:40.282898 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 22:29:40.284824 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1367)
Nov 12 22:29:40.299544 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 22:29:40.304242 systemd-resolved[1307]: Positive Trust Anchors:
Nov 12 22:29:40.304314 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 22:29:40.304344 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 22:29:40.311959 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 22:29:40.316364 systemd-resolved[1307]: Defaulting to hostname 'linux'.
Nov 12 22:29:40.319746 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 22:29:40.320695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 22:29:40.328441 systemd-networkd[1393]: lo: Link UP
Nov 12 22:29:40.328935 systemd-networkd[1393]: lo: Gained carrier
Nov 12 22:29:40.331939 systemd-networkd[1393]: Enumeration completed
Nov 12 22:29:40.332053 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 22:29:40.332941 systemd[1]: Reached target network.target - Network.
Nov 12 22:29:40.334163 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 22:29:40.334170 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 22:29:40.334702 systemd-networkd[1393]: eth0: Link UP
Nov 12 22:29:40.334705 systemd-networkd[1393]: eth0: Gained carrier
Nov 12 22:29:40.334718 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 22:29:40.345039 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 22:29:40.346680 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 12 22:29:40.347978 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 22:29:40.349563 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 22:29:40.352148 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 22:29:40.352585 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection.
Nov 12 22:29:40.777294 systemd-resolved[1307]: Clock change detected. Flushing caches.
Nov 12 22:29:40.777404 systemd-timesyncd[1395]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 12 22:29:40.777457 systemd-timesyncd[1395]: Initial clock synchronization to Tue 2024-11-12 22:29:40.777244 UTC.
Nov 12 22:29:40.802802 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:29:40.808607 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 22:29:40.827736 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 22:29:40.839989 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:29:40.843638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:29:40.870647 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 22:29:40.871773 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:29:40.872570 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 22:29:40.873364 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 22:29:40.874282 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 22:29:40.875338 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 22:29:40.876242 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 22:29:40.877179 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 22:29:40.878058 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 22:29:40.878087 systemd[1]: Reached target paths.target - Path Units. Nov 12 22:29:40.878740 systemd[1]: Reached target timers.target - Timer Units. Nov 12 22:29:40.880074 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 22:29:40.882026 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Nov 12 22:29:40.894556 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 22:29:40.896728 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 22:29:40.898135 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 22:29:40.899044 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 22:29:40.899745 systemd[1]: Reached target basic.target - Basic System. Nov 12 22:29:40.900411 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 22:29:40.900437 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 22:29:40.901305 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 22:29:40.903065 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 22:29:40.905677 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:29:40.906190 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 22:29:40.908682 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 22:29:40.909498 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 22:29:40.910822 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 22:29:40.913410 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 22:29:40.915240 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 22:29:40.922700 jq[1439]: false Nov 12 22:29:40.920134 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 22:29:40.923638 systemd[1]: Starting systemd-logind.service - User Login Management... 
Nov 12 22:29:40.925170 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 22:29:40.925583 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 22:29:40.927725 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 22:29:40.931836 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 22:29:40.934646 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 22:29:40.938261 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 22:29:40.938427 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 22:29:40.940619 jq[1451]: true Nov 12 22:29:40.941335 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 22:29:40.941808 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 22:29:40.943333 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 22:29:40.944677 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 12 22:29:40.952516 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 22:29:40.954076 extend-filesystems[1440]: Found loop3 Nov 12 22:29:40.954742 extend-filesystems[1440]: Found loop4 Nov 12 22:29:40.954742 extend-filesystems[1440]: Found loop5 Nov 12 22:29:40.954742 extend-filesystems[1440]: Found vda Nov 12 22:29:40.954742 extend-filesystems[1440]: Found vda1 Nov 12 22:29:40.954742 extend-filesystems[1440]: Found vda2 Nov 12 22:29:40.954742 extend-filesystems[1440]: Found vda3 Nov 12 22:29:40.954742 extend-filesystems[1440]: Found usr Nov 12 22:29:40.954742 extend-filesystems[1440]: Found vda4 Nov 12 22:29:40.954742 extend-filesystems[1440]: Found vda6 Nov 12 22:29:40.954742 extend-filesystems[1440]: Found vda7 Nov 12 22:29:40.954742 extend-filesystems[1440]: Found vda9 Nov 12 22:29:40.954742 extend-filesystems[1440]: Checking size of /dev/vda9 Nov 12 22:29:40.971464 dbus-daemon[1438]: [system] SELinux support is enabled Nov 12 22:29:40.971694 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 22:29:40.978211 jq[1458]: true Nov 12 22:29:40.981214 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 22:29:40.981274 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 22:29:40.983683 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 22:29:40.987674 update_engine[1449]: I20241112 22:29:40.983703 1449 main.cc:92] Flatcar Update Engine starting Nov 12 22:29:40.983712 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Nov 12 22:29:40.989252 systemd[1]: Started update-engine.service - Update Engine. Nov 12 22:29:40.989827 update_engine[1449]: I20241112 22:29:40.989774 1449 update_check_scheduler.cc:74] Next update check in 10m40s Nov 12 22:29:40.999921 extend-filesystems[1440]: Resized partition /dev/vda9 Nov 12 22:29:41.000697 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 22:29:41.008815 extend-filesystems[1487]: resize2fs 1.47.1 (20-May-2024) Nov 12 22:29:41.012722 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1350) Nov 12 22:29:41.012751 tar[1457]: linux-arm64/helm Nov 12 22:29:41.019877 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (Power Button) Nov 12 22:29:41.020523 systemd-logind[1446]: New seat seat0. Nov 12 22:29:41.022266 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 22:29:41.027576 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 22:29:41.066700 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 22:29:41.076288 extend-filesystems[1487]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 22:29:41.076288 extend-filesystems[1487]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 22:29:41.076288 extend-filesystems[1487]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 22:29:41.079659 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Nov 12 22:29:41.079004 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 22:29:41.079621 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 22:29:41.086622 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Nov 12 22:29:41.088189 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Nov 12 22:29:41.090633 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 22:29:41.091671 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 22:29:41.175982 containerd[1459]: time="2024-11-12T22:29:41.175897157Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 12 22:29:41.211192 containerd[1459]: time="2024-11-12T22:29:41.210974637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:29:41.212434 containerd[1459]: time="2024-11-12T22:29:41.212398157Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:29:41.213499 containerd[1459]: time="2024-11-12T22:29:41.212512237Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 22:29:41.213499 containerd[1459]: time="2024-11-12T22:29:41.212536037Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 22:29:41.213499 containerd[1459]: time="2024-11-12T22:29:41.212701397Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 22:29:41.213499 containerd[1459]: time="2024-11-12T22:29:41.212720797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 22:29:41.213499 containerd[1459]: time="2024-11-12T22:29:41.212774197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:29:41.213499 containerd[1459]: time="2024-11-12T22:29:41.212784997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:29:41.213499 containerd[1459]: time="2024-11-12T22:29:41.212931637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:29:41.213499 containerd[1459]: time="2024-11-12T22:29:41.212946077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 22:29:41.213499 containerd[1459]: time="2024-11-12T22:29:41.212958477Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:29:41.213499 containerd[1459]: time="2024-11-12T22:29:41.212968197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 22:29:41.213499 containerd[1459]: time="2024-11-12T22:29:41.213036997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:29:41.213499 containerd[1459]: time="2024-11-12T22:29:41.213212757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:29:41.213760 containerd[1459]: time="2024-11-12T22:29:41.213303637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:29:41.213760 containerd[1459]: time="2024-11-12T22:29:41.213317237Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 22:29:41.213760 containerd[1459]: time="2024-11-12T22:29:41.213383677Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 22:29:41.213760 containerd[1459]: time="2024-11-12T22:29:41.213422957Z" level=info msg="metadata content store policy set" policy=shared Nov 12 22:29:41.216766 containerd[1459]: time="2024-11-12T22:29:41.216737837Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 22:29:41.216881 containerd[1459]: time="2024-11-12T22:29:41.216865317Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 22:29:41.216962 containerd[1459]: time="2024-11-12T22:29:41.216947957Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 22:29:41.217018 containerd[1459]: time="2024-11-12T22:29:41.217006677Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 22:29:41.217072 containerd[1459]: time="2024-11-12T22:29:41.217061517Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 22:29:41.217260 containerd[1459]: time="2024-11-12T22:29:41.217235997Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 22:29:41.217565 containerd[1459]: time="2024-11-12T22:29:41.217529797Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 12 22:29:41.217740 containerd[1459]: time="2024-11-12T22:29:41.217716957Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 22:29:41.217843 containerd[1459]: time="2024-11-12T22:29:41.217810277Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 22:29:41.217875 containerd[1459]: time="2024-11-12T22:29:41.217854477Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 22:29:41.217875 containerd[1459]: time="2024-11-12T22:29:41.217870997Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 22:29:41.217910 containerd[1459]: time="2024-11-12T22:29:41.217884277Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 22:29:41.217910 containerd[1459]: time="2024-11-12T22:29:41.217896837Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 22:29:41.217959 containerd[1459]: time="2024-11-12T22:29:41.217909917Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 22:29:41.217959 containerd[1459]: time="2024-11-12T22:29:41.217923957Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 22:29:41.217959 containerd[1459]: time="2024-11-12T22:29:41.217936317Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 22:29:41.217959 containerd[1459]: time="2024-11-12T22:29:41.217948437Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Nov 12 22:29:41.218023 containerd[1459]: time="2024-11-12T22:29:41.217959517Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 22:29:41.218023 containerd[1459]: time="2024-11-12T22:29:41.217979717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218023 containerd[1459]: time="2024-11-12T22:29:41.218000237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218023 containerd[1459]: time="2024-11-12T22:29:41.218011997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218093 containerd[1459]: time="2024-11-12T22:29:41.218024437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218093 containerd[1459]: time="2024-11-12T22:29:41.218036757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218093 containerd[1459]: time="2024-11-12T22:29:41.218048757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218093 containerd[1459]: time="2024-11-12T22:29:41.218059477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218093 containerd[1459]: time="2024-11-12T22:29:41.218071437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218093 containerd[1459]: time="2024-11-12T22:29:41.218083597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218194 containerd[1459]: time="2024-11-12T22:29:41.218097877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Nov 12 22:29:41.218194 containerd[1459]: time="2024-11-12T22:29:41.218109317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218194 containerd[1459]: time="2024-11-12T22:29:41.218120157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218194 containerd[1459]: time="2024-11-12T22:29:41.218131357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218194 containerd[1459]: time="2024-11-12T22:29:41.218146357Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 22:29:41.218194 containerd[1459]: time="2024-11-12T22:29:41.218168397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218194 containerd[1459]: time="2024-11-12T22:29:41.218180637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218194 containerd[1459]: time="2024-11-12T22:29:41.218191317Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 22:29:41.218384 containerd[1459]: time="2024-11-12T22:29:41.218369917Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 22:29:41.218411 containerd[1459]: time="2024-11-12T22:29:41.218391517Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 22:29:41.218411 containerd[1459]: time="2024-11-12T22:29:41.218402557Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Nov 12 22:29:41.218458 containerd[1459]: time="2024-11-12T22:29:41.218417877Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 22:29:41.218458 containerd[1459]: time="2024-11-12T22:29:41.218427997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218458 containerd[1459]: time="2024-11-12T22:29:41.218439277Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 22:29:41.218458 containerd[1459]: time="2024-11-12T22:29:41.218448637Z" level=info msg="NRI interface is disabled by configuration." Nov 12 22:29:41.218458 containerd[1459]: time="2024-11-12T22:29:41.218458877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 22:29:41.218849 containerd[1459]: time="2024-11-12T22:29:41.218798717Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 22:29:41.218849 containerd[1459]: time="2024-11-12T22:29:41.218850317Z" level=info msg="Connect containerd service" Nov 12 22:29:41.218979 containerd[1459]: time="2024-11-12T22:29:41.218883757Z" level=info msg="using legacy CRI server" Nov 12 22:29:41.218979 containerd[1459]: time="2024-11-12T22:29:41.218890597Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 22:29:41.219131 containerd[1459]: 
time="2024-11-12T22:29:41.219112757Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 22:29:41.219825 containerd[1459]: time="2024-11-12T22:29:41.219796357Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 22:29:41.220131 containerd[1459]: time="2024-11-12T22:29:41.220073877Z" level=info msg="Start subscribing containerd event" Nov 12 22:29:41.220298 containerd[1459]: time="2024-11-12T22:29:41.220118837Z" level=info msg="Start recovering state" Nov 12 22:29:41.220356 containerd[1459]: time="2024-11-12T22:29:41.220331917Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 22:29:41.220399 containerd[1459]: time="2024-11-12T22:29:41.220378597Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 22:29:41.220512 containerd[1459]: time="2024-11-12T22:29:41.220472357Z" level=info msg="Start event monitor" Nov 12 22:29:41.220636 containerd[1459]: time="2024-11-12T22:29:41.220608717Z" level=info msg="Start snapshots syncer" Nov 12 22:29:41.221140 containerd[1459]: time="2024-11-12T22:29:41.221102437Z" level=info msg="Start cni network conf syncer for default" Nov 12 22:29:41.221140 containerd[1459]: time="2024-11-12T22:29:41.221136797Z" level=info msg="Start streaming server" Nov 12 22:29:41.223151 containerd[1459]: time="2024-11-12T22:29:41.221282717Z" level=info msg="containerd successfully booted in 0.046796s" Nov 12 22:29:41.222056 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 22:29:41.378797 tar[1457]: linux-arm64/LICENSE Nov 12 22:29:41.379613 tar[1457]: linux-arm64/README.md Nov 12 22:29:41.391730 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 12 22:29:42.298642 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 22:29:42.316849 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 22:29:42.331919 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 22:29:42.338074 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 22:29:42.338277 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 22:29:42.340657 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 22:29:42.351823 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 22:29:42.355121 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 22:29:42.357658 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 12 22:29:42.359055 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 22:29:42.687262 systemd-networkd[1393]: eth0: Gained IPv6LL Nov 12 22:29:42.689894 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 22:29:42.691470 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 22:29:42.700893 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 22:29:42.703209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:29:42.705195 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 22:29:42.718544 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 22:29:42.718801 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 22:29:42.719960 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 22:29:42.724778 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 22:29:43.205725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 22:29:43.206980 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 12 22:29:43.209042 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 22:29:43.212678 systemd[1]: Startup finished in 520ms (kernel) + 5.262s (initrd) + 3.862s (userspace) = 9.645s.
Nov 12 22:29:43.634489 kubelet[1550]: E1112 22:29:43.634380 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 22:29:43.636412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 22:29:43.636592 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 22:29:46.775185 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 12 22:29:46.776275 systemd[1]: Started sshd@0-10.0.0.65:22-10.0.0.1:43904.service - OpenSSH per-connection server daemon (10.0.0.1:43904).
Nov 12 22:29:46.842506 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 43904 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc
Nov 12 22:29:46.844044 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:29:46.853852 systemd-logind[1446]: New session 1 of user core.
Nov 12 22:29:46.854843 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 12 22:29:46.861791 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 12 22:29:46.870781 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 12 22:29:46.873856 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 12 22:29:46.879365 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 12 22:29:46.948086 systemd[1567]: Queued start job for default target default.target.
Nov 12 22:29:46.956666 systemd[1567]: Created slice app.slice - User Application Slice.
Nov 12 22:29:46.956708 systemd[1567]: Reached target paths.target - Paths.
Nov 12 22:29:46.956720 systemd[1567]: Reached target timers.target - Timers.
Nov 12 22:29:46.957940 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 12 22:29:46.967190 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 12 22:29:46.967254 systemd[1567]: Reached target sockets.target - Sockets.
Nov 12 22:29:46.967265 systemd[1567]: Reached target basic.target - Basic System.
Nov 12 22:29:46.967299 systemd[1567]: Reached target default.target - Main User Target.
Nov 12 22:29:46.967324 systemd[1567]: Startup finished in 82ms.
Nov 12 22:29:46.967654 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 12 22:29:46.968885 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 12 22:29:47.032100 systemd[1]: Started sshd@1-10.0.0.65:22-10.0.0.1:43912.service - OpenSSH per-connection server daemon (10.0.0.1:43912).
Nov 12 22:29:47.070994 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 43912 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc
Nov 12 22:29:47.072188 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:29:47.076313 systemd-logind[1446]: New session 2 of user core.
Nov 12 22:29:47.086743 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 12 22:29:47.137782 sshd[1580]: Connection closed by 10.0.0.1 port 43912
Nov 12 22:29:47.137722 sshd-session[1578]: pam_unix(sshd:session): session closed for user core
Nov 12 22:29:47.143676 systemd[1]: sshd@1-10.0.0.65:22-10.0.0.1:43912.service: Deactivated successfully.
Nov 12 22:29:47.145081 systemd[1]: session-2.scope: Deactivated successfully.
Nov 12 22:29:47.147508 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit.
Nov 12 22:29:47.159064 systemd[1]: Started sshd@2-10.0.0.65:22-10.0.0.1:43928.service - OpenSSH per-connection server daemon (10.0.0.1:43928).
Nov 12 22:29:47.160372 systemd-logind[1446]: Removed session 2.
Nov 12 22:29:47.194417 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 43928 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc
Nov 12 22:29:47.195495 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:29:47.198902 systemd-logind[1446]: New session 3 of user core.
Nov 12 22:29:47.218699 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 12 22:29:47.265734 sshd[1587]: Connection closed by 10.0.0.1 port 43928
Nov 12 22:29:47.266021 sshd-session[1585]: pam_unix(sshd:session): session closed for user core
Nov 12 22:29:47.280835 systemd[1]: sshd@2-10.0.0.65:22-10.0.0.1:43928.service: Deactivated successfully.
Nov 12 22:29:47.282253 systemd[1]: session-3.scope: Deactivated successfully.
Nov 12 22:29:47.283454 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit.
Nov 12 22:29:47.284850 systemd[1]: Started sshd@3-10.0.0.65:22-10.0.0.1:43936.service - OpenSSH per-connection server daemon (10.0.0.1:43936).
Nov 12 22:29:47.285598 systemd-logind[1446]: Removed session 3.
Nov 12 22:29:47.323370 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 43936 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc
Nov 12 22:29:47.324418 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:29:47.328192 systemd-logind[1446]: New session 4 of user core.
Nov 12 22:29:47.336682 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 12 22:29:47.387092 sshd[1594]: Connection closed by 10.0.0.1 port 43936
Nov 12 22:29:47.387585 sshd-session[1592]: pam_unix(sshd:session): session closed for user core
Nov 12 22:29:47.403148 systemd[1]: sshd@3-10.0.0.65:22-10.0.0.1:43936.service: Deactivated successfully.
Nov 12 22:29:47.404571 systemd[1]: session-4.scope: Deactivated successfully.
Nov 12 22:29:47.406083 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit.
Nov 12 22:29:47.407787 systemd[1]: Started sshd@4-10.0.0.65:22-10.0.0.1:43950.service - OpenSSH per-connection server daemon (10.0.0.1:43950).
Nov 12 22:29:47.408513 systemd-logind[1446]: Removed session 4.
Nov 12 22:29:47.446169 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 43950 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc
Nov 12 22:29:47.447292 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:29:47.451121 systemd-logind[1446]: New session 5 of user core.
Nov 12 22:29:47.461700 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 12 22:29:47.526286 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 12 22:29:47.526608 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 22:29:47.540426 sudo[1602]: pam_unix(sudo:session): session closed for user root
Nov 12 22:29:47.541865 sshd[1601]: Connection closed by 10.0.0.1 port 43950
Nov 12 22:29:47.542308 sshd-session[1599]: pam_unix(sshd:session): session closed for user core
Nov 12 22:29:47.554970 systemd[1]: sshd@4-10.0.0.65:22-10.0.0.1:43950.service: Deactivated successfully.
Nov 12 22:29:47.557805 systemd[1]: session-5.scope: Deactivated successfully.
Nov 12 22:29:47.559074 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit.
Nov 12 22:29:47.560273 systemd[1]: Started sshd@5-10.0.0.65:22-10.0.0.1:43952.service - OpenSSH per-connection server daemon (10.0.0.1:43952).
Nov 12 22:29:47.561126 systemd-logind[1446]: Removed session 5.
Nov 12 22:29:47.599041 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 43952 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc
Nov 12 22:29:47.600216 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:29:47.603631 systemd-logind[1446]: New session 6 of user core.
Nov 12 22:29:47.612684 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 12 22:29:47.662735 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 12 22:29:47.662996 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 22:29:47.666205 sudo[1611]: pam_unix(sudo:session): session closed for user root
Nov 12 22:29:47.670478 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 12 22:29:47.670763 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 22:29:47.692898 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 12 22:29:47.714576 augenrules[1633]: No rules
Nov 12 22:29:47.715719 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 12 22:29:47.715917 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 12 22:29:47.716858 sudo[1610]: pam_unix(sudo:session): session closed for user root
Nov 12 22:29:47.718448 sshd[1609]: Connection closed by 10.0.0.1 port 43952
Nov 12 22:29:47.718379 sshd-session[1607]: pam_unix(sshd:session): session closed for user core
Nov 12 22:29:47.728716 systemd[1]: sshd@5-10.0.0.65:22-10.0.0.1:43952.service: Deactivated successfully.
Nov 12 22:29:47.729925 systemd[1]: session-6.scope: Deactivated successfully.
Nov 12 22:29:47.731149 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit.
Nov 12 22:29:47.743785 systemd[1]: Started sshd@6-10.0.0.65:22-10.0.0.1:43958.service - OpenSSH per-connection server daemon (10.0.0.1:43958).
Nov 12 22:29:47.744559 systemd-logind[1446]: Removed session 6.
Nov 12 22:29:47.778147 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 43958 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc
Nov 12 22:29:47.779215 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:29:47.782820 systemd-logind[1446]: New session 7 of user core.
Nov 12 22:29:47.794697 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 12 22:29:47.844072 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 12 22:29:47.844325 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 22:29:48.150797 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 12 22:29:48.150875 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 12 22:29:48.387718 dockerd[1665]: time="2024-11-12T22:29:48.387661517Z" level=info msg="Starting up"
Nov 12 22:29:48.529967 dockerd[1665]: time="2024-11-12T22:29:48.529918277Z" level=info msg="Loading containers: start."
Nov 12 22:29:48.659582 kernel: Initializing XFRM netlink socket
Nov 12 22:29:48.721833 systemd-networkd[1393]: docker0: Link UP
Nov 12 22:29:48.772751 dockerd[1665]: time="2024-11-12T22:29:48.772686677Z" level=info msg="Loading containers: done."
Nov 12 22:29:48.787266 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1311915447-merged.mount: Deactivated successfully.
Nov 12 22:29:48.789229 dockerd[1665]: time="2024-11-12T22:29:48.788866317Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 12 22:29:48.789229 dockerd[1665]: time="2024-11-12T22:29:48.788954237Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Nov 12 22:29:48.789229 dockerd[1665]: time="2024-11-12T22:29:48.789051517Z" level=info msg="Daemon has completed initialization"
Nov 12 22:29:48.817537 dockerd[1665]: time="2024-11-12T22:29:48.817447797Z" level=info msg="API listen on /run/docker.sock"
Nov 12 22:29:48.817642 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 12 22:29:49.191365 containerd[1459]: time="2024-11-12T22:29:49.191318397Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\""
Nov 12 22:29:49.861838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1430889511.mount: Deactivated successfully.
Nov 12 22:29:51.238894 containerd[1459]: time="2024-11-12T22:29:51.238831517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:51.239410 containerd[1459]: time="2024-11-12T22:29:51.239357637Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.2: active requests=0, bytes read=25616007"
Nov 12 22:29:51.240204 containerd[1459]: time="2024-11-12T22:29:51.240148277Z" level=info msg="ImageCreate event name:\"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:51.244801 containerd[1459]: time="2024-11-12T22:29:51.244751917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:51.247822 containerd[1459]: time="2024-11-12T22:29:51.247697117Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.2\" with image id \"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\", size \"25612805\" in 2.05633768s"
Nov 12 22:29:51.247822 containerd[1459]: time="2024-11-12T22:29:51.247744957Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\" returns image reference \"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270\""
Nov 12 22:29:51.248373 containerd[1459]: time="2024-11-12T22:29:51.248346597Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\""
Nov 12 22:29:52.949087 containerd[1459]: time="2024-11-12T22:29:52.948865277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:52.949980 containerd[1459]: time="2024-11-12T22:29:52.949722597Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.2: active requests=0, bytes read=22469649"
Nov 12 22:29:52.950645 containerd[1459]: time="2024-11-12T22:29:52.950613757Z" level=info msg="ImageCreate event name:\"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:52.953651 containerd[1459]: time="2024-11-12T22:29:52.953591157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:52.954725 containerd[1459]: time="2024-11-12T22:29:52.954686637Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.2\" with image id \"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\", size \"23872272\" in 1.70630444s"
Nov 12 22:29:52.954725 containerd[1459]: time="2024-11-12T22:29:52.954720517Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\" returns image reference \"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba\""
Nov 12 22:29:52.955401 containerd[1459]: time="2024-11-12T22:29:52.955225077Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\""
Nov 12 22:29:53.685638 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 12 22:29:53.697172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 22:29:53.794134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 22:29:53.797647 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 22:29:53.834493 kubelet[1929]: E1112 22:29:53.834411 1929 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 22:29:53.836804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 22:29:53.836935 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 22:29:54.756382 containerd[1459]: time="2024-11-12T22:29:54.756327077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:54.757325 containerd[1459]: time="2024-11-12T22:29:54.757066877Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.2: active requests=0, bytes read=17027038"
Nov 12 22:29:54.758078 containerd[1459]: time="2024-11-12T22:29:54.758020197Z" level=info msg="ImageCreate event name:\"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:54.763248 containerd[1459]: time="2024-11-12T22:29:54.763205477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:54.764759 containerd[1459]: time="2024-11-12T22:29:54.764718917Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.2\" with image id \"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\", size \"18429679\" in 1.80946396s"
Nov 12 22:29:54.764759 containerd[1459]: time="2024-11-12T22:29:54.764754637Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\" returns image reference \"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a\""
Nov 12 22:29:54.765214 containerd[1459]: time="2024-11-12T22:29:54.765184477Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\""
Nov 12 22:29:55.802960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1012794557.mount: Deactivated successfully.
Nov 12 22:29:56.010008 containerd[1459]: time="2024-11-12T22:29:56.009951877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:56.010919 containerd[1459]: time="2024-11-12T22:29:56.010871637Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.2: active requests=0, bytes read=26769666"
Nov 12 22:29:56.011539 containerd[1459]: time="2024-11-12T22:29:56.011505837Z" level=info msg="ImageCreate event name:\"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:56.013564 containerd[1459]: time="2024-11-12T22:29:56.013515917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:56.014663 containerd[1459]: time="2024-11-12T22:29:56.014628277Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.2\" with image id \"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba\", repo tag \"registry.k8s.io/kube-proxy:v1.31.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\", size \"26768683\" in 1.24940896s"
Nov 12 22:29:56.014691 containerd[1459]: time="2024-11-12T22:29:56.014668237Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\" returns image reference \"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba\""
Nov 12 22:29:56.016859 containerd[1459]: time="2024-11-12T22:29:56.016829357Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Nov 12 22:29:56.629604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3461846703.mount: Deactivated successfully.
Nov 12 22:29:57.528873 containerd[1459]: time="2024-11-12T22:29:57.528809837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:57.530063 containerd[1459]: time="2024-11-12T22:29:57.529781477Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Nov 12 22:29:57.530736 containerd[1459]: time="2024-11-12T22:29:57.530692597Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:57.533590 containerd[1459]: time="2024-11-12T22:29:57.533543717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:57.534876 containerd[1459]: time="2024-11-12T22:29:57.534844877Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.51798112s"
Nov 12 22:29:57.534978 containerd[1459]: time="2024-11-12T22:29:57.534962077Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Nov 12 22:29:57.535598 containerd[1459]: time="2024-11-12T22:29:57.535571077Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 12 22:29:57.924321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3821939416.mount: Deactivated successfully.
Nov 12 22:29:57.928894 containerd[1459]: time="2024-11-12T22:29:57.928855117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:57.929504 containerd[1459]: time="2024-11-12T22:29:57.929286917Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Nov 12 22:29:57.930190 containerd[1459]: time="2024-11-12T22:29:57.930151757Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:57.932469 containerd[1459]: time="2024-11-12T22:29:57.932440397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:29:57.933230 containerd[1459]: time="2024-11-12T22:29:57.933200397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 397.51208ms"
Nov 12 22:29:57.933230 containerd[1459]: time="2024-11-12T22:29:57.933229877Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Nov 12 22:29:57.933654 containerd[1459]: time="2024-11-12T22:29:57.933633117Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Nov 12 22:29:58.547855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551296406.mount: Deactivated successfully.
Nov 12 22:30:01.703987 containerd[1459]: time="2024-11-12T22:30:01.703713677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:30:01.704418 containerd[1459]: time="2024-11-12T22:30:01.704214077Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406104"
Nov 12 22:30:01.705216 containerd[1459]: time="2024-11-12T22:30:01.705188917Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:30:01.708475 containerd[1459]: time="2024-11-12T22:30:01.708442117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 22:30:01.709783 containerd[1459]: time="2024-11-12T22:30:01.709742677Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.77607956s"
Nov 12 22:30:01.709831 containerd[1459]: time="2024-11-12T22:30:01.709782157Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Nov 12 22:30:03.935496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 12 22:30:03.945743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 22:30:04.067992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 22:30:04.071357 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 22:30:04.103135 kubelet[2081]: E1112 22:30:04.103052 2081 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 22:30:04.105376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 22:30:04.105533 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 22:30:06.029260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 22:30:06.039756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 22:30:06.060193 systemd[1]: Reloading requested from client PID 2097 ('systemctl') (unit session-7.scope)...
Nov 12 22:30:06.060208 systemd[1]: Reloading...
Nov 12 22:30:06.123677 zram_generator::config[2136]: No configuration found.
Nov 12 22:30:06.262977 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 22:30:06.313974 systemd[1]: Reloading finished in 253 ms.
Nov 12 22:30:06.350288 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 22:30:06.353465 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 22:30:06.353672 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 22:30:06.355115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 22:30:06.441932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 22:30:06.446159 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 22:30:06.483902 kubelet[2183]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 22:30:06.483902 kubelet[2183]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 22:30:06.483902 kubelet[2183]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 22:30:06.484202 kubelet[2183]: I1112 22:30:06.484048 2183 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 22:30:06.952398 kubelet[2183]: I1112 22:30:06.952344 2183 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Nov 12 22:30:06.952398 kubelet[2183]: I1112 22:30:06.952388 2183 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 22:30:06.952691 kubelet[2183]: I1112 22:30:06.952666 2183 server.go:929] "Client rotation is on, will bootstrap in background"
Nov 12 22:30:06.981547 kubelet[2183]: E1112 22:30:06.981505 2183 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Nov 12 22:30:06.982544 kubelet[2183]: I1112 22:30:06.982376 2183 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 22:30:06.987799 kubelet[2183]: E1112 22:30:06.987640 2183 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 12 22:30:06.987799 kubelet[2183]: I1112 22:30:06.987667 2183 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 12 22:30:06.990951 kubelet[2183]: I1112 22:30:06.990928 2183 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 22:30:06.991714 kubelet[2183]: I1112 22:30:06.991689 2183 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 12 22:30:06.991852 kubelet[2183]: I1112 22:30:06.991816 2183 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 22:30:06.992001 kubelet[2183]: I1112 22:30:06.991841 2183 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 12 22:30:06.992133 kubelet[2183]: I1112 22:30:06.992122 2183 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 22:30:06.992133 kubelet[2183]: I1112 22:30:06.992134 2183 container_manager_linux.go:300] "Creating device plugin manager"
Nov 12 22:30:06.992310 kubelet[2183]: I1112 22:30:06.992289 2183 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 22:30:06.993867 kubelet[2183]: I1112 22:30:06.993817 2183 kubelet.go:408] "Attempting to sync node with API server"
Nov 12 22:30:06.993867 kubelet[2183]: I1112 22:30:06.993858 2183 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 22:30:06.994348 kubelet[2183]: I1112 22:30:06.993950 2183 kubelet.go:314] "Adding apiserver pod source"
Nov 12 22:30:06.994348 kubelet[2183]: I1112 22:30:06.993962 2183 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 22:30:07.000397 kubelet[2183]: W1112 22:30:06.998922 2183 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Nov 12 22:30:07.000397 kubelet[2183]: E1112 22:30:06.998987 2183 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Nov 12 22:30:07.000397 kubelet[2183]: W1112 22:30:06.999140 2183 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Nov 12 22:30:07.000397 kubelet[2183]: E1112 22:30:06.999191 2183 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Nov 12 22:30:07.001178 kubelet[2183]: I1112 22:30:07.001070 2183 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Nov 12 22:30:07.003000 kubelet[2183]: I1112 22:30:07.002975 2183 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 22:30:07.004145 kubelet[2183]: W1112 22:30:07.004118 2183 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 12 22:30:07.004787 kubelet[2183]: I1112 22:30:07.004761 2183 server.go:1269] "Started kubelet"
Nov 12 22:30:07.005438 kubelet[2183]: I1112 22:30:07.005397 2183 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 22:30:07.007836 kubelet[2183]: I1112 22:30:07.007035 2183 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 22:30:07.007836 kubelet[2183]: I1112 22:30:07.007355 2183 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 22:30:07.007836 kubelet[2183]: I1112 22:30:07.007414 2183 server.go:460] "Adding debug handlers to kubelet server"
Nov 12 22:30:07.007836 kubelet[2183]: I1112 22:30:07.007715 2183 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 22:30:07.008698 kubelet[2183]: I1112 22:30:07.008673 2183 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 12 22:30:07.010937 kubelet[2183]: I1112 22:30:07.009493 2183 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 12 22:30:07.010937 kubelet[2183]: I1112 22:30:07.009615 2183 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 12 22:30:07.010937 kubelet[2183]: I1112 22:30:07.009683 2183 reconciler.go:26] "Reconciler: start to sync state"
Nov 12 22:30:07.010937 kubelet[2183]: W1112 22:30:07.009985 2183 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Nov 12 22:30:07.010937 kubelet[2183]: E1112 22:30:07.010038 2183 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 22:30:07.010937 kubelet[2183]: E1112 22:30:07.010279 2183 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Nov 12 22:30:07.010937 kubelet[2183]: E1112 22:30:07.010398 2183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="200ms"
Nov 12 22:30:07.010937 kubelet[2183]: I1112 22:30:07.010574 2183 factory.go:221] Registration of the systemd container factory successfully
Nov 12 22:30:07.010937 kubelet[2183]: I1112 22:30:07.010663 2183 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 22:30:07.012724 kubelet[2183]: E1112 22:30:07.012691 2183 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 22:30:07.012724 kubelet[2183]: E1112 22:30:07.011436 2183 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.65:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.65:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1807592748a6e17d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:30:07.004737917 +0000 UTC m=+0.555584241,LastTimestamp:2024-11-12 22:30:07.004737917 +0000 UTC m=+0.555584241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 12 22:30:07.012878 kubelet[2183]: I1112 22:30:07.012860 2183 factory.go:221] Registration of the containerd container factory successfully
Nov 12 22:30:07.023430 kubelet[2183]: I1112 22:30:07.023398 2183 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 22:30:07.023721 kubelet[2183]: I1112 22:30:07.023628 2183 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 22:30:07.023721 kubelet[2183]: I1112 22:30:07.023642 2183 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 22:30:07.023721 kubelet[2183]: I1112 22:30:07.023656 2183 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 22:30:07.025471 kubelet[2183]: I1112 22:30:07.025354 2183 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Nov 12 22:30:07.025471 kubelet[2183]: I1112 22:30:07.025393 2183 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:30:07.025471 kubelet[2183]: I1112 22:30:07.025408 2183 kubelet.go:2321] "Starting kubelet main sync loop" Nov 12 22:30:07.025471 kubelet[2183]: E1112 22:30:07.025455 2183 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:30:07.026053 kubelet[2183]: W1112 22:30:07.025938 2183 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Nov 12 22:30:07.026053 kubelet[2183]: E1112 22:30:07.025994 2183 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Nov 12 22:30:07.113141 kubelet[2183]: E1112 22:30:07.113107 2183 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 22:30:07.123420 kubelet[2183]: I1112 22:30:07.123386 2183 policy_none.go:49] "None policy: Start" Nov 12 22:30:07.124141 kubelet[2183]: I1112 22:30:07.124125 2183 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:30:07.124212 kubelet[2183]: I1112 22:30:07.124151 2183 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:30:07.125678 kubelet[2183]: E1112 22:30:07.125659 2183 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 22:30:07.130988 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 12 22:30:07.145675 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 12 22:30:07.150310 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 12 22:30:07.165201 kubelet[2183]: I1112 22:30:07.165174 2183 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 22:30:07.165423 kubelet[2183]: I1112 22:30:07.165358 2183 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 12 22:30:07.165423 kubelet[2183]: I1112 22:30:07.165383 2183 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 12 22:30:07.166038 kubelet[2183]: I1112 22:30:07.165887 2183 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 22:30:07.166444 kubelet[2183]: E1112 22:30:07.166420 2183 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 12 22:30:07.211703 kubelet[2183]: E1112 22:30:07.211607 2183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="400ms"
Nov 12 22:30:07.266770 kubelet[2183]: I1112 22:30:07.266731 2183 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 22:30:07.267079 kubelet[2183]: E1112 22:30:07.267057 2183 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost"
Nov 12 22:30:07.333106 systemd[1]: Created slice kubepods-burstable-pod870406dd146c1baadf35a334b48befbc.slice - libcontainer container kubepods-burstable-pod870406dd146c1baadf35a334b48befbc.slice.
Nov 12 22:30:07.359543 systemd[1]: Created slice kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice - libcontainer container kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice.
Nov 12 22:30:07.371744 systemd[1]: Created slice kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice - libcontainer container kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice.
Nov 12 22:30:07.412436 kubelet[2183]: I1112 22:30:07.412408 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:30:07.412436 kubelet[2183]: I1112 22:30:07.412446 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:30:07.412663 kubelet[2183]: I1112 22:30:07.412466 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost"
Nov 12 22:30:07.412663 kubelet[2183]: I1112 22:30:07.412483 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/870406dd146c1baadf35a334b48befbc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"870406dd146c1baadf35a334b48befbc\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 22:30:07.412663 kubelet[2183]: I1112 22:30:07.412497 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/870406dd146c1baadf35a334b48befbc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"870406dd146c1baadf35a334b48befbc\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 22:30:07.412663 kubelet[2183]: I1112 22:30:07.412513 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/870406dd146c1baadf35a334b48befbc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"870406dd146c1baadf35a334b48befbc\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 22:30:07.412663 kubelet[2183]: I1112 22:30:07.412530 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:30:07.412811 kubelet[2183]: I1112 22:30:07.412564 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:30:07.412811 kubelet[2183]: I1112 22:30:07.412583 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:30:07.468684 kubelet[2183]: I1112 22:30:07.468583 2183 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 22:30:07.469444 kubelet[2183]: E1112 22:30:07.469400 2183 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost"
Nov 12 22:30:07.612169 kubelet[2183]: E1112 22:30:07.612111 2183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="800ms"
Nov 12 22:30:07.656539 kubelet[2183]: E1112 22:30:07.656500 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:07.657235 containerd[1459]: time="2024-11-12T22:30:07.657194317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:870406dd146c1baadf35a334b48befbc,Namespace:kube-system,Attempt:0,}"
Nov 12 22:30:07.670568 kubelet[2183]: E1112 22:30:07.670451 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:07.671631 containerd[1459]: time="2024-11-12T22:30:07.671352877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,}"
Nov 12 22:30:07.673690 kubelet[2183]: E1112 22:30:07.673669 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:07.674078 containerd[1459]: time="2024-11-12T22:30:07.674041957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,}"
Nov 12 22:30:07.855814 kubelet[2183]: W1112 22:30:07.855678 2183 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Nov 12 22:30:07.855814 kubelet[2183]: E1112 22:30:07.855748 2183 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Nov 12 22:30:07.871030 kubelet[2183]: I1112 22:30:07.870992 2183 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 22:30:07.871314 kubelet[2183]: E1112 22:30:07.871278 2183 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost"
Nov 12 22:30:07.907904 kubelet[2183]: W1112 22:30:07.907836 2183 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Nov 12 22:30:07.907904 kubelet[2183]: E1112 22:30:07.907902 2183 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Nov 12 22:30:07.981901 kubelet[2183]: W1112 22:30:07.981797 2183 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Nov 12 22:30:07.981901 kubelet[2183]: E1112 22:30:07.981868 2183 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Nov 12 22:30:08.142825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3404288056.mount: Deactivated successfully.
Nov 12 22:30:08.147426 containerd[1459]: time="2024-11-12T22:30:08.147361397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 22:30:08.149614 containerd[1459]: time="2024-11-12T22:30:08.149567237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Nov 12 22:30:08.150129 containerd[1459]: time="2024-11-12T22:30:08.150097117Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 22:30:08.151036 containerd[1459]: time="2024-11-12T22:30:08.150986797Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 22:30:08.152091 containerd[1459]: time="2024-11-12T22:30:08.152058157Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 22:30:08.152732 containerd[1459]: time="2024-11-12T22:30:08.152699197Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 22:30:08.153541 containerd[1459]: time="2024-11-12T22:30:08.153384237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 22:30:08.154985 containerd[1459]: time="2024-11-12T22:30:08.154923477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 22:30:08.158090 containerd[1459]: time="2024-11-12T22:30:08.158007157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 500.72836ms"
Nov 12 22:30:08.159644 containerd[1459]: time="2024-11-12T22:30:08.159490877Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 485.38268ms"
Nov 12 22:30:08.161506 containerd[1459]: time="2024-11-12T22:30:08.161440797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 489.98604ms"
Nov 12 22:30:08.303271 containerd[1459]: time="2024-11-12T22:30:08.303054477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 22:30:08.303271 containerd[1459]: time="2024-11-12T22:30:08.303119717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 22:30:08.303271 containerd[1459]: time="2024-11-12T22:30:08.303135837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:30:08.303271 containerd[1459]: time="2024-11-12T22:30:08.303221357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:30:08.303529 containerd[1459]: time="2024-11-12T22:30:08.303477877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 22:30:08.303582 containerd[1459]: time="2024-11-12T22:30:08.303524117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 22:30:08.303582 containerd[1459]: time="2024-11-12T22:30:08.303537997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:30:08.304169 containerd[1459]: time="2024-11-12T22:30:08.304095117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:30:08.310046 containerd[1459]: time="2024-11-12T22:30:08.306999997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 22:30:08.310046 containerd[1459]: time="2024-11-12T22:30:08.309724677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 22:30:08.310046 containerd[1459]: time="2024-11-12T22:30:08.309737957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:30:08.310046 containerd[1459]: time="2024-11-12T22:30:08.309824357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:30:08.322759 systemd[1]: Started cri-containerd-84b1c9bb6d3d279dd3791af9c7b59350e8f288b66063e984d4296e6f604308e6.scope - libcontainer container 84b1c9bb6d3d279dd3791af9c7b59350e8f288b66063e984d4296e6f604308e6.
Nov 12 22:30:08.325779 systemd[1]: Started cri-containerd-769628334446f6b553b53d21bc2b20797a804e95ad4ce9101680016e956757be.scope - libcontainer container 769628334446f6b553b53d21bc2b20797a804e95ad4ce9101680016e956757be.
Nov 12 22:30:08.326860 systemd[1]: Started cri-containerd-cc5087d79508ed31d4c2618679f14f8541ae7b794bdbe2accf35b8154f9791d3.scope - libcontainer container cc5087d79508ed31d4c2618679f14f8541ae7b794bdbe2accf35b8154f9791d3.
Nov 12 22:30:08.352506 containerd[1459]: time="2024-11-12T22:30:08.352455117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,} returns sandbox id \"84b1c9bb6d3d279dd3791af9c7b59350e8f288b66063e984d4296e6f604308e6\""
Nov 12 22:30:08.354617 kubelet[2183]: E1112 22:30:08.353850 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:08.356820 containerd[1459]: time="2024-11-12T22:30:08.356767077Z" level=info msg="CreateContainer within sandbox \"84b1c9bb6d3d279dd3791af9c7b59350e8f288b66063e984d4296e6f604308e6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 12 22:30:08.359051 containerd[1459]: time="2024-11-12T22:30:08.359021997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:870406dd146c1baadf35a334b48befbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc5087d79508ed31d4c2618679f14f8541ae7b794bdbe2accf35b8154f9791d3\""
Nov 12 22:30:08.360027 kubelet[2183]: E1112 22:30:08.360005 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:08.361792 containerd[1459]: time="2024-11-12T22:30:08.361767357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,} returns sandbox id \"769628334446f6b553b53d21bc2b20797a804e95ad4ce9101680016e956757be\""
Nov 12 22:30:08.362186 kubelet[2183]: W1112 22:30:08.362146 2183 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Nov 12 22:30:08.362283 kubelet[2183]: E1112 22:30:08.362201 2183 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Nov 12 22:30:08.362799 kubelet[2183]: E1112 22:30:08.362666 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:08.363718 containerd[1459]: time="2024-11-12T22:30:08.363695957Z" level=info msg="CreateContainer within sandbox \"cc5087d79508ed31d4c2618679f14f8541ae7b794bdbe2accf35b8154f9791d3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 12 22:30:08.364472 containerd[1459]: time="2024-11-12T22:30:08.364450397Z" level=info msg="CreateContainer within sandbox \"769628334446f6b553b53d21bc2b20797a804e95ad4ce9101680016e956757be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 12 22:30:08.371746 containerd[1459]: time="2024-11-12T22:30:08.371706117Z" level=info msg="CreateContainer within sandbox \"84b1c9bb6d3d279dd3791af9c7b59350e8f288b66063e984d4296e6f604308e6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4beda4368abc6bcf83a31a98d413522a5e4a492ecc01c826c26db7de49744065\""
Nov 12 22:30:08.372431 containerd[1459]: time="2024-11-12T22:30:08.372394117Z" level=info msg="StartContainer for \"4beda4368abc6bcf83a31a98d413522a5e4a492ecc01c826c26db7de49744065\""
Nov 12 22:30:08.376888 containerd[1459]: time="2024-11-12T22:30:08.376858397Z" level=info msg="CreateContainer within sandbox \"cc5087d79508ed31d4c2618679f14f8541ae7b794bdbe2accf35b8154f9791d3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4ead6438592fb06a805640a138cac1fc3c2c2df721ae553a41bbbfe64514192d\""
Nov 12 22:30:08.377290 containerd[1459]: time="2024-11-12T22:30:08.377237557Z" level=info msg="StartContainer for \"4ead6438592fb06a805640a138cac1fc3c2c2df721ae553a41bbbfe64514192d\""
Nov 12 22:30:08.379060 containerd[1459]: time="2024-11-12T22:30:08.378989477Z" level=info msg="CreateContainer within sandbox \"769628334446f6b553b53d21bc2b20797a804e95ad4ce9101680016e956757be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aa3c046d474b25dc3fa619039bb5bd9c8cf51a4825b3b20257038f7b18d954f0\""
Nov 12 22:30:08.379404 containerd[1459]: time="2024-11-12T22:30:08.379352357Z" level=info msg="StartContainer for \"aa3c046d474b25dc3fa619039bb5bd9c8cf51a4825b3b20257038f7b18d954f0\""
Nov 12 22:30:08.401713 systemd[1]: Started cri-containerd-4beda4368abc6bcf83a31a98d413522a5e4a492ecc01c826c26db7de49744065.scope - libcontainer container 4beda4368abc6bcf83a31a98d413522a5e4a492ecc01c826c26db7de49744065.
Nov 12 22:30:08.405456 systemd[1]: Started cri-containerd-4ead6438592fb06a805640a138cac1fc3c2c2df721ae553a41bbbfe64514192d.scope - libcontainer container 4ead6438592fb06a805640a138cac1fc3c2c2df721ae553a41bbbfe64514192d.
Nov 12 22:30:08.406380 systemd[1]: Started cri-containerd-aa3c046d474b25dc3fa619039bb5bd9c8cf51a4825b3b20257038f7b18d954f0.scope - libcontainer container aa3c046d474b25dc3fa619039bb5bd9c8cf51a4825b3b20257038f7b18d954f0.
Nov 12 22:30:08.412783 kubelet[2183]: E1112 22:30:08.412722 2183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="1.6s"
Nov 12 22:30:08.438393 containerd[1459]: time="2024-11-12T22:30:08.438348437Z" level=info msg="StartContainer for \"4ead6438592fb06a805640a138cac1fc3c2c2df721ae553a41bbbfe64514192d\" returns successfully"
Nov 12 22:30:08.438671 containerd[1459]: time="2024-11-12T22:30:08.438463437Z" level=info msg="StartContainer for \"4beda4368abc6bcf83a31a98d413522a5e4a492ecc01c826c26db7de49744065\" returns successfully"
Nov 12 22:30:08.457758 containerd[1459]: time="2024-11-12T22:30:08.457729757Z" level=info msg="StartContainer for \"aa3c046d474b25dc3fa619039bb5bd9c8cf51a4825b3b20257038f7b18d954f0\" returns successfully"
Nov 12 22:30:08.674623 kubelet[2183]: I1112 22:30:08.673743 2183 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 22:30:09.032152 kubelet[2183]: E1112 22:30:09.032055 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:09.036580 kubelet[2183]: E1112 22:30:09.034896 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:09.038035 kubelet[2183]: E1112 22:30:09.038008 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:10.037280 kubelet[2183]: E1112 22:30:10.037252 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:10.131435 kubelet[2183]: E1112 22:30:10.127808 2183 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 12 22:30:10.234656 kubelet[2183]: I1112 22:30:10.234614 2183 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Nov 12 22:30:10.270624 kubelet[2183]: E1112 22:30:10.270518 2183 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1807592748a6e17d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:30:07.004737917 +0000 UTC m=+0.555584241,LastTimestamp:2024-11-12 22:30:07.004737917 +0000 UTC m=+0.555584241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 12 22:30:10.325713 kubelet[2183]: E1112 22:30:10.325466 2183 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1807592748f79a7d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:30:07.010028157 +0000 UTC m=+0.560874441,LastTimestamp:2024-11-12 22:30:07.010028157 +0000 UTC m=+0.560874441,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 12 22:30:10.379303 kubelet[2183]: E1112 22:30:10.379209 2183 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1807592749bdc865 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:30:07.023016037 +0000 UTC m=+0.573862361,LastTimestamp:2024-11-12 22:30:07.023016037 +0000 UTC m=+0.573862361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 12 22:30:11.000161 kubelet[2183]: I1112 22:30:11.000115 2183 apiserver.go:52] "Watching apiserver"
Nov 12 22:30:11.010428 kubelet[2183]: I1112 22:30:11.010396 2183 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 12 22:30:12.411958 systemd[1]: Reloading requested from client PID 2467 ('systemctl') (unit session-7.scope)...
Nov 12 22:30:12.411975 systemd[1]: Reloading...
Nov 12 22:30:12.481593 zram_generator::config[2509]: No configuration found.
Nov 12 22:30:12.561036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 22:30:12.623982 systemd[1]: Reloading finished in 211 ms.
Nov 12 22:30:12.653472 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 22:30:12.670819 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 22:30:12.672606 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 22:30:12.684920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 22:30:12.772194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 22:30:12.777591 (kubelet)[2548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 22:30:12.810941 kubelet[2548]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 22:30:12.810941 kubelet[2548]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 22:30:12.810941 kubelet[2548]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 22:30:12.811264 kubelet[2548]: I1112 22:30:12.810945 2548 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 22:30:12.816789 kubelet[2548]: I1112 22:30:12.816755 2548 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Nov 12 22:30:12.817591 kubelet[2548]: I1112 22:30:12.816894 2548 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 22:30:12.817591 kubelet[2548]: I1112 22:30:12.817092 2548 server.go:929] "Client rotation is on, will bootstrap in background"
Nov 12 22:30:12.818650 kubelet[2548]: I1112 22:30:12.818598 2548 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 12 22:30:12.821017 kubelet[2548]: I1112 22:30:12.820705 2548 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 22:30:12.825110 kubelet[2548]: E1112 22:30:12.825074 2548 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 12 22:30:12.825110 kubelet[2548]: I1112 22:30:12.825104 2548 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 12 22:30:12.827845 kubelet[2548]: I1112 22:30:12.827826 2548 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 22:30:12.827968 kubelet[2548]: I1112 22:30:12.827956 2548 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 12 22:30:12.828086 kubelet[2548]: I1112 22:30:12.828064 2548 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 22:30:12.828278 kubelet[2548]: I1112 22:30:12.828088 2548 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 12 22:30:12.828354 kubelet[2548]: I1112 22:30:12.828289 2548 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 22:30:12.828354 kubelet[2548]: I1112 22:30:12.828299 2548 container_manager_linux.go:300] "Creating device plugin manager"
Nov 12 22:30:12.828354 kubelet[2548]: I1112 22:30:12.828327 2548 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 22:30:12.828463 kubelet[2548]: I1112 22:30:12.828449 2548 kubelet.go:408] "Attempting to sync node with API server"
Nov 12 22:30:12.828486 kubelet[2548]: I1112 22:30:12.828466 2548 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 22:30:12.828486 kubelet[2548]: I1112 22:30:12.828486 2548 kubelet.go:314] "Adding apiserver pod source"
Nov 12 22:30:12.828525 kubelet[2548]: I1112 22:30:12.828496 2548 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 22:30:12.829854 kubelet[2548]: I1112 22:30:12.829828 2548 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Nov 12 22:30:12.831452 kubelet[2548]: I1112 22:30:12.831383 2548 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 22:30:12.831825 kubelet[2548]: I1112 22:30:12.831795 2548 server.go:1269] "Started kubelet"
Nov 12 22:30:12.834530 kubelet[2548]: I1112 22:30:12.834480 2548 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 22:30:12.834751 kubelet[2548]: I1112 22:30:12.834730 2548 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 22:30:12.834810 kubelet[2548]: I1112 22:30:12.834790 2548 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 22:30:12.836596 kubelet[2548]: I1112 22:30:12.836544 2548 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 22:30:12.840340 kubelet[2548]: I1112 22:30:12.840306 2548 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 12 22:30:12.843820 kubelet[2548]: I1112 22:30:12.843538 2548 server.go:460] "Adding debug handlers to kubelet server"
Nov 12 22:30:12.849616 kubelet[2548]: E1112 22:30:12.846826 2548 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 22:30:12.849616 kubelet[2548]: I1112 22:30:12.847291 2548 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 12 22:30:12.849616 kubelet[2548]: I1112 22:30:12.847413 2548 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 12 22:30:12.849616 kubelet[2548]: I1112 22:30:12.847532 2548 reconciler.go:26] "Reconciler: start to sync state"
Nov 12 22:30:12.849616 kubelet[2548]: E1112 22:30:12.848181 2548 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 22:30:12.849616 kubelet[2548]: I1112 22:30:12.848392 2548 factory.go:221] Registration of the systemd container factory successfully
Nov 12 22:30:12.849616 kubelet[2548]: I1112 22:30:12.848473 2548 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 22:30:12.850690 kubelet[2548]: I1112 22:30:12.850663 2548 factory.go:221] Registration of the containerd container factory successfully
Nov 12 22:30:12.854378 kubelet[2548]: I1112 22:30:12.854335 2548 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 22:30:12.855207 kubelet[2548]: I1112 22:30:12.855184 2548 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 22:30:12.855244 kubelet[2548]: I1112 22:30:12.855210 2548 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 22:30:12.855244 kubelet[2548]: I1112 22:30:12.855226 2548 kubelet.go:2321] "Starting kubelet main sync loop"
Nov 12 22:30:12.856228 kubelet[2548]: E1112 22:30:12.855270 2548 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 22:30:12.882756 kubelet[2548]: I1112 22:30:12.882726 2548 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 22:30:12.882756 kubelet[2548]: I1112 22:30:12.882745 2548 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 22:30:12.882756 kubelet[2548]: I1112 22:30:12.882763 2548 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 22:30:12.882901 kubelet[2548]: I1112 22:30:12.882893 2548 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 12 22:30:12.882927 kubelet[2548]: I1112 22:30:12.882904 2548 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 12 22:30:12.882927 kubelet[2548]: I1112 22:30:12.882920 2548 policy_none.go:49] "None policy: Start"
Nov 12 22:30:12.883512 kubelet[2548]: I1112 22:30:12.883494 2548 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 22:30:12.883600 kubelet[2548]: I1112 22:30:12.883519 2548 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 22:30:12.883686 kubelet[2548]: I1112 22:30:12.883668 2548 state_mem.go:75] "Updated machine memory state"
Nov 12 22:30:12.887308 kubelet[2548]: I1112 22:30:12.887279 2548 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 22:30:12.887665 kubelet[2548]: I1112 22:30:12.887437 2548 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 12 22:30:12.887665 kubelet[2548]: I1112 22:30:12.887456 2548 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 12 22:30:12.887665 kubelet[2548]: I1112 22:30:12.887650 2548 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 22:30:12.991607 kubelet[2548]: I1112 22:30:12.991572 2548 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 22:30:12.997293 kubelet[2548]: I1112 22:30:12.997269 2548 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Nov 12 22:30:12.997397 kubelet[2548]: I1112 22:30:12.997351 2548 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Nov 12 22:30:13.048984 kubelet[2548]: I1112 22:30:13.048949 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/870406dd146c1baadf35a334b48befbc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"870406dd146c1baadf35a334b48befbc\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 22:30:13.049073 kubelet[2548]: I1112 22:30:13.048986 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/870406dd146c1baadf35a334b48befbc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"870406dd146c1baadf35a334b48befbc\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 22:30:13.049073 kubelet[2548]: I1112 22:30:13.049051 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:30:13.049118 kubelet[2548]: I1112 22:30:13.049069 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:30:13.049118 kubelet[2548]: I1112 22:30:13.049109 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:30:13.049168 kubelet[2548]: I1112 22:30:13.049124 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost"
Nov 12 22:30:13.049168 kubelet[2548]: I1112 22:30:13.049138 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/870406dd146c1baadf35a334b48befbc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"870406dd146c1baadf35a334b48befbc\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 22:30:13.049206 kubelet[2548]: I1112 22:30:13.049178 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:30:13.049206 kubelet[2548]: I1112 22:30:13.049195 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 22:30:13.264143 kubelet[2548]: E1112 22:30:13.264030 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:13.264543 kubelet[2548]: E1112 22:30:13.264416 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:13.264543 kubelet[2548]: E1112 22:30:13.264495 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:13.417879 sudo[2585]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Nov 12 22:30:13.418140 sudo[2585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Nov 12 22:30:13.829591 kubelet[2548]: I1112 22:30:13.829322 2548 apiserver.go:52] "Watching apiserver"
Nov 12 22:30:13.835081 sudo[2585]: pam_unix(sudo:session): session closed for user root
Nov 12 22:30:13.848057 kubelet[2548]: I1112 22:30:13.847985 2548 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 12 22:30:13.870923 kubelet[2548]: E1112 22:30:13.870873 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:13.875103 kubelet[2548]: E1112 22:30:13.871254 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:13.880221 kubelet[2548]: E1112 22:30:13.880185 2548 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 12 22:30:13.880338 kubelet[2548]: E1112 22:30:13.880316 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:13.891546 kubelet[2548]: I1112 22:30:13.891463 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.891450674 podStartE2EDuration="1.891450674s" podCreationTimestamp="2024-11-12 22:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:30:13.891303113 +0000 UTC m=+1.110728317" watchObservedRunningTime="2024-11-12 22:30:13.891450674 +0000 UTC m=+1.110875918"
Nov 12 22:30:13.912885 kubelet[2548]: I1112 22:30:13.912786 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.912770955 podStartE2EDuration="1.912770955s" podCreationTimestamp="2024-11-12 22:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:30:13.899722496 +0000 UTC m=+1.119147740" watchObservedRunningTime="2024-11-12 22:30:13.912770955 +0000 UTC m=+1.132196199"
Nov 12 22:30:13.933944 kubelet[2548]: I1112 22:30:13.933873 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.933857035 podStartE2EDuration="1.933857035s" podCreationTimestamp="2024-11-12 22:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:30:13.9133356 +0000 UTC m=+1.132760844" watchObservedRunningTime="2024-11-12 22:30:13.933857035 +0000 UTC m=+1.153282319"
Nov 12 22:30:14.871167 kubelet[2548]: E1112 22:30:14.871074 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:14.871167 kubelet[2548]: E1112 22:30:14.871160 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:15.418795 sudo[1645]: pam_unix(sudo:session): session closed for user root
Nov 12 22:30:15.419843 sshd[1644]: Connection closed by 10.0.0.1 port 43958
Nov 12 22:30:15.420723 sshd-session[1641]: pam_unix(sshd:session): session closed for user core
Nov 12 22:30:15.423923 systemd[1]: sshd@6-10.0.0.65:22-10.0.0.1:43958.service: Deactivated successfully.
Nov 12 22:30:15.425594 systemd[1]: session-7.scope: Deactivated successfully.
Nov 12 22:30:15.425761 systemd[1]: session-7.scope: Consumed 6.621s CPU time, 150.6M memory peak, 0B memory swap peak.
Nov 12 22:30:15.426206 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit.
Nov 12 22:30:15.426974 systemd-logind[1446]: Removed session 7.
Nov 12 22:30:15.873080 kubelet[2548]: E1112 22:30:15.872815 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:17.341579 kubelet[2548]: E1112 22:30:17.341509 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:18.211997 kubelet[2548]: I1112 22:30:18.211951 2548 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 12 22:30:18.212266 containerd[1459]: time="2024-11-12T22:30:18.212233629Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 12 22:30:18.212576 kubelet[2548]: I1112 22:30:18.212416 2548 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 12 22:30:19.082950 systemd[1]: Created slice kubepods-besteffort-pod1e95a344_9122_444b_b417_82ca9b32c9cc.slice - libcontainer container kubepods-besteffort-pod1e95a344_9122_444b_b417_82ca9b32c9cc.slice.
Nov 12 22:30:19.089329 kubelet[2548]: I1112 22:30:19.089244 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-lib-modules\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.089329 kubelet[2548]: I1112 22:30:19.089290 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-hostproc\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.090023 kubelet[2548]: I1112 22:30:19.089338 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-cilium-cgroup\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.090023 kubelet[2548]: I1112 22:30:19.089400 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e95a344-9122-444b-b417-82ca9b32c9cc-lib-modules\") pod \"kube-proxy-sdsnc\" (UID: \"1e95a344-9122-444b-b417-82ca9b32c9cc\") " pod="kube-system/kube-proxy-sdsnc"
Nov 12 22:30:19.090023 kubelet[2548]: I1112 22:30:19.089447 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e95a344-9122-444b-b417-82ca9b32c9cc-xtables-lock\") pod \"kube-proxy-sdsnc\" (UID: \"1e95a344-9122-444b-b417-82ca9b32c9cc\") " pod="kube-system/kube-proxy-sdsnc"
Nov 12 22:30:19.090023 kubelet[2548]: I1112 22:30:19.089476 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86fc4af8-f7fc-4739-a290-c0210a79f843-clustermesh-secrets\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.090023 kubelet[2548]: I1112 22:30:19.089499 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-host-proc-sys-net\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.090023 kubelet[2548]: I1112 22:30:19.089537 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-xtables-lock\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.090168 kubelet[2548]: I1112 22:30:19.089574 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86fc4af8-f7fc-4739-a290-c0210a79f843-cilium-config-path\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.090168 kubelet[2548]: I1112 22:30:19.089599 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-host-proc-sys-kernel\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.090168 kubelet[2548]: I1112 22:30:19.089635 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-cilium-run\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.090168 kubelet[2548]: I1112 22:30:19.089658 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86fc4af8-f7fc-4739-a290-c0210a79f843-hubble-tls\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.090168 kubelet[2548]: I1112 22:30:19.089677 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzjpx\" (UniqueName: \"kubernetes.io/projected/1e95a344-9122-444b-b417-82ca9b32c9cc-kube-api-access-qzjpx\") pod \"kube-proxy-sdsnc\" (UID: \"1e95a344-9122-444b-b417-82ca9b32c9cc\") " pod="kube-system/kube-proxy-sdsnc"
Nov 12 22:30:19.090168 kubelet[2548]: I1112 22:30:19.089696 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-cni-path\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.090294 kubelet[2548]: I1112 22:30:19.089719 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-etc-cni-netd\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.090294 kubelet[2548]: I1112 22:30:19.089735 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e95a344-9122-444b-b417-82ca9b32c9cc-kube-proxy\") pod \"kube-proxy-sdsnc\" (UID: \"1e95a344-9122-444b-b417-82ca9b32c9cc\") " pod="kube-system/kube-proxy-sdsnc"
Nov 12 22:30:19.090294 kubelet[2548]: I1112 22:30:19.089762 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-bpf-maps\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.090294 kubelet[2548]: I1112 22:30:19.089788 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b48j\" (UniqueName: \"kubernetes.io/projected/86fc4af8-f7fc-4739-a290-c0210a79f843-kube-api-access-6b48j\") pod \"cilium-lmtd8\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " pod="kube-system/cilium-lmtd8"
Nov 12 22:30:19.098858 systemd[1]: Created slice kubepods-burstable-pod86fc4af8_f7fc_4739_a290_c0210a79f843.slice - libcontainer container kubepods-burstable-pod86fc4af8_f7fc_4739_a290_c0210a79f843.slice.
Nov 12 22:30:19.246486 systemd[1]: Created slice kubepods-besteffort-podf4ee1684_8137_49a2_bfdd_a5b45b5744e5.slice - libcontainer container kubepods-besteffort-podf4ee1684_8137_49a2_bfdd_a5b45b5744e5.slice.
Nov 12 22:30:19.291688 kubelet[2548]: I1112 22:30:19.291633 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmwkw\" (UniqueName: \"kubernetes.io/projected/f4ee1684-8137-49a2-bfdd-a5b45b5744e5-kube-api-access-rmwkw\") pod \"cilium-operator-5d85765b45-t2gjd\" (UID: \"f4ee1684-8137-49a2-bfdd-a5b45b5744e5\") " pod="kube-system/cilium-operator-5d85765b45-t2gjd"
Nov 12 22:30:19.291688 kubelet[2548]: I1112 22:30:19.291683 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4ee1684-8137-49a2-bfdd-a5b45b5744e5-cilium-config-path\") pod \"cilium-operator-5d85765b45-t2gjd\" (UID: \"f4ee1684-8137-49a2-bfdd-a5b45b5744e5\") " pod="kube-system/cilium-operator-5d85765b45-t2gjd"
Nov 12 22:30:19.394524 kubelet[2548]: E1112 22:30:19.394346 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:19.395748 containerd[1459]: time="2024-11-12T22:30:19.395002950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdsnc,Uid:1e95a344-9122-444b-b417-82ca9b32c9cc,Namespace:kube-system,Attempt:0,}"
Nov 12 22:30:19.401741 kubelet[2548]: E1112 22:30:19.401716 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:30:19.402466 containerd[1459]: time="2024-11-12T22:30:19.402204668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmtd8,Uid:86fc4af8-f7fc-4739-a290-c0210a79f843,Namespace:kube-system,Attempt:0,}"
Nov 12 22:30:19.418617 containerd[1459]: time="2024-11-12T22:30:19.418516592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 22:30:19.418617 containerd[1459]: time="2024-11-12T22:30:19.418583392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 22:30:19.418617 containerd[1459]: time="2024-11-12T22:30:19.418599272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:30:19.418780 containerd[1459]: time="2024-11-12T22:30:19.418669872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:30:19.425700 containerd[1459]: time="2024-11-12T22:30:19.425599788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 22:30:19.425700 containerd[1459]: time="2024-11-12T22:30:19.425650908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 22:30:19.425700 containerd[1459]: time="2024-11-12T22:30:19.425673788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:30:19.426238 containerd[1459]: time="2024-11-12T22:30:19.426146111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:30:19.441705 systemd[1]: Started cri-containerd-38541dcb9d9c8f3e9723bba6e513e3f6ef5bae4a05216bb3af7f727e87f2ac52.scope - libcontainer container 38541dcb9d9c8f3e9723bba6e513e3f6ef5bae4a05216bb3af7f727e87f2ac52.
Nov 12 22:30:19.444214 systemd[1]: Started cri-containerd-ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955.scope - libcontainer container ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955.
Nov 12 22:30:19.468410 containerd[1459]: time="2024-11-12T22:30:19.468366648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdsnc,Uid:1e95a344-9122-444b-b417-82ca9b32c9cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"38541dcb9d9c8f3e9723bba6e513e3f6ef5bae4a05216bb3af7f727e87f2ac52\"" Nov 12 22:30:19.469065 kubelet[2548]: E1112 22:30:19.469044 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:19.470798 containerd[1459]: time="2024-11-12T22:30:19.470770181Z" level=info msg="CreateContainer within sandbox \"38541dcb9d9c8f3e9723bba6e513e3f6ef5bae4a05216bb3af7f727e87f2ac52\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 22:30:19.473657 containerd[1459]: time="2024-11-12T22:30:19.473587075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmtd8,Uid:86fc4af8-f7fc-4739-a290-c0210a79f843,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\"" Nov 12 22:30:19.474320 kubelet[2548]: E1112 22:30:19.474282 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:19.476005 containerd[1459]: time="2024-11-12T22:30:19.475811927Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 12 22:30:19.489157 containerd[1459]: time="2024-11-12T22:30:19.489107555Z" level=info msg="CreateContainer within sandbox \"38541dcb9d9c8f3e9723bba6e513e3f6ef5bae4a05216bb3af7f727e87f2ac52\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"58ada77f338f80b0e72128fd2420755c15626e503d410fecc33b674282175b63\"" Nov 12 22:30:19.489668 containerd[1459]: time="2024-11-12T22:30:19.489641318Z" 
level=info msg="StartContainer for \"58ada77f338f80b0e72128fd2420755c15626e503d410fecc33b674282175b63\"" Nov 12 22:30:19.516722 systemd[1]: Started cri-containerd-58ada77f338f80b0e72128fd2420755c15626e503d410fecc33b674282175b63.scope - libcontainer container 58ada77f338f80b0e72128fd2420755c15626e503d410fecc33b674282175b63. Nov 12 22:30:19.539644 containerd[1459]: time="2024-11-12T22:30:19.539591135Z" level=info msg="StartContainer for \"58ada77f338f80b0e72128fd2420755c15626e503d410fecc33b674282175b63\" returns successfully" Nov 12 22:30:19.551876 kubelet[2548]: E1112 22:30:19.550771 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:19.551970 containerd[1459]: time="2024-11-12T22:30:19.551826718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-t2gjd,Uid:f4ee1684-8137-49a2-bfdd-a5b45b5744e5,Namespace:kube-system,Attempt:0,}" Nov 12 22:30:19.570801 containerd[1459]: time="2024-11-12T22:30:19.570693295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:30:19.570801 containerd[1459]: time="2024-11-12T22:30:19.570751815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:30:19.570801 containerd[1459]: time="2024-11-12T22:30:19.570763056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:30:19.571016 containerd[1459]: time="2024-11-12T22:30:19.570836296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:30:19.593750 systemd[1]: Started cri-containerd-7bc2e0eac3483318597a4abbacd0d00aae18ed7103c6c1eefe0d28dfd1d98847.scope - libcontainer container 7bc2e0eac3483318597a4abbacd0d00aae18ed7103c6c1eefe0d28dfd1d98847. Nov 12 22:30:19.627807 containerd[1459]: time="2024-11-12T22:30:19.627671109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-t2gjd,Uid:f4ee1684-8137-49a2-bfdd-a5b45b5744e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bc2e0eac3483318597a4abbacd0d00aae18ed7103c6c1eefe0d28dfd1d98847\"" Nov 12 22:30:19.628655 kubelet[2548]: E1112 22:30:19.628634 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:19.881597 kubelet[2548]: E1112 22:30:19.881251 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:19.892627 kubelet[2548]: I1112 22:30:19.892468 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sdsnc" podStartSLOduration=0.892453952 podStartE2EDuration="892.453952ms" podCreationTimestamp="2024-11-12 22:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:30:19.892182631 +0000 UTC m=+7.111607875" watchObservedRunningTime="2024-11-12 22:30:19.892453952 +0000 UTC m=+7.111879196" Nov 12 22:30:24.787004 kubelet[2548]: E1112 22:30:24.786971 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:25.481647 kubelet[2548]: E1112 22:30:25.481527 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:26.574754 update_engine[1449]: I20241112 22:30:26.574677 1449 update_attempter.cc:509] Updating boot flags... Nov 12 22:30:26.602723 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2927) Nov 12 22:30:26.633583 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2926) Nov 12 22:30:27.350197 kubelet[2548]: E1112 22:30:27.349951 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:33.350487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1038989333.mount: Deactivated successfully. Nov 12 22:30:34.687138 containerd[1459]: time="2024-11-12T22:30:34.687074122Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:30:34.688134 containerd[1459]: time="2024-11-12T22:30:34.688090124Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651490" Nov 12 22:30:34.689035 containerd[1459]: time="2024-11-12T22:30:34.688990366Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:30:34.690682 containerd[1459]: time="2024-11-12T22:30:34.690572169Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 15.214534161s" Nov 12 22:30:34.690682 containerd[1459]: time="2024-11-12T22:30:34.690605529Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 12 22:30:34.693362 containerd[1459]: time="2024-11-12T22:30:34.693012934Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 12 22:30:34.700306 containerd[1459]: time="2024-11-12T22:30:34.700268828Z" level=info msg="CreateContainer within sandbox \"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 22:30:34.729529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3910193571.mount: Deactivated successfully. Nov 12 22:30:34.731056 containerd[1459]: time="2024-11-12T22:30:34.730964808Z" level=info msg="CreateContainer within sandbox \"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad\"" Nov 12 22:30:34.732218 containerd[1459]: time="2024-11-12T22:30:34.731432569Z" level=info msg="StartContainer for \"388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad\"" Nov 12 22:30:34.756727 systemd[1]: Started cri-containerd-388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad.scope - libcontainer container 388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad. 
Nov 12 22:30:34.780119 containerd[1459]: time="2024-11-12T22:30:34.779981704Z" level=info msg="StartContainer for \"388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad\" returns successfully" Nov 12 22:30:34.818293 systemd[1]: cri-containerd-388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad.scope: Deactivated successfully. Nov 12 22:30:34.915926 kubelet[2548]: E1112 22:30:34.915855 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:34.921341 containerd[1459]: time="2024-11-12T22:30:34.908962316Z" level=info msg="shim disconnected" id=388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad namespace=k8s.io Nov 12 22:30:34.921341 containerd[1459]: time="2024-11-12T22:30:34.921324740Z" level=warning msg="cleaning up after shim disconnected" id=388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad namespace=k8s.io Nov 12 22:30:34.921341 containerd[1459]: time="2024-11-12T22:30:34.921347020Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:30:35.726505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad-rootfs.mount: Deactivated successfully. Nov 12 22:30:35.912612 kubelet[2548]: E1112 22:30:35.912525 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:35.916398 containerd[1459]: time="2024-11-12T22:30:35.916355734Z" level=info msg="CreateContainer within sandbox \"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 22:30:35.926455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460268783.mount: Deactivated successfully. 
Nov 12 22:30:35.931790 containerd[1459]: time="2024-11-12T22:30:35.930853481Z" level=info msg="CreateContainer within sandbox \"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf\"" Nov 12 22:30:35.933671 containerd[1459]: time="2024-11-12T22:30:35.933629446Z" level=info msg="StartContainer for \"fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf\"" Nov 12 22:30:35.956733 systemd[1]: Started cri-containerd-fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf.scope - libcontainer container fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf. Nov 12 22:30:35.982912 containerd[1459]: time="2024-11-12T22:30:35.982817056Z" level=info msg="StartContainer for \"fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf\" returns successfully" Nov 12 22:30:35.994760 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 22:30:35.994965 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:30:35.995032 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:30:36.000836 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:30:36.001000 systemd[1]: cri-containerd-fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf.scope: Deactivated successfully. 
Nov 12 22:30:36.029382 containerd[1459]: time="2024-11-12T22:30:36.029150338Z" level=info msg="shim disconnected" id=fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf namespace=k8s.io Nov 12 22:30:36.029382 containerd[1459]: time="2024-11-12T22:30:36.029196618Z" level=warning msg="cleaning up after shim disconnected" id=fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf namespace=k8s.io Nov 12 22:30:36.029382 containerd[1459]: time="2024-11-12T22:30:36.029204058Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:30:36.029620 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:30:36.726799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf-rootfs.mount: Deactivated successfully. Nov 12 22:30:36.916501 kubelet[2548]: E1112 22:30:36.916388 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:36.920032 containerd[1459]: time="2024-11-12T22:30:36.919582869Z" level=info msg="CreateContainer within sandbox \"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 22:30:36.964245 containerd[1459]: time="2024-11-12T22:30:36.962211142Z" level=info msg="CreateContainer within sandbox \"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60\"" Nov 12 22:30:36.964534 containerd[1459]: time="2024-11-12T22:30:36.964479386Z" level=info msg="StartContainer for \"2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60\"" Nov 12 22:30:37.002697 systemd[1]: Started cri-containerd-2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60.scope - 
libcontainer container 2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60. Nov 12 22:30:37.035450 containerd[1459]: time="2024-11-12T22:30:37.035413664Z" level=info msg="StartContainer for \"2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60\" returns successfully" Nov 12 22:30:37.064539 systemd[1]: cri-containerd-2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60.scope: Deactivated successfully. Nov 12 22:30:37.103652 containerd[1459]: time="2024-11-12T22:30:37.103588214Z" level=info msg="shim disconnected" id=2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60 namespace=k8s.io Nov 12 22:30:37.103652 containerd[1459]: time="2024-11-12T22:30:37.103643414Z" level=warning msg="cleaning up after shim disconnected" id=2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60 namespace=k8s.io Nov 12 22:30:37.103652 containerd[1459]: time="2024-11-12T22:30:37.103651814Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:30:37.123322 containerd[1459]: time="2024-11-12T22:30:37.123275565Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:30:37.124952 containerd[1459]: time="2024-11-12T22:30:37.124903088Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138286" Nov 12 22:30:37.125756 containerd[1459]: time="2024-11-12T22:30:37.125693049Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:30:37.127092 containerd[1459]: time="2024-11-12T22:30:37.127062772Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.434017438s" Nov 12 22:30:37.127628 containerd[1459]: time="2024-11-12T22:30:37.127175732Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 12 22:30:37.129058 containerd[1459]: time="2024-11-12T22:30:37.129027655Z" level=info msg="CreateContainer within sandbox \"7bc2e0eac3483318597a4abbacd0d00aae18ed7103c6c1eefe0d28dfd1d98847\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 12 22:30:37.139156 containerd[1459]: time="2024-11-12T22:30:37.139062671Z" level=info msg="CreateContainer within sandbox \"7bc2e0eac3483318597a4abbacd0d00aae18ed7103c6c1eefe0d28dfd1d98847\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\"" Nov 12 22:30:37.140242 containerd[1459]: time="2024-11-12T22:30:37.140029352Z" level=info msg="StartContainer for \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\"" Nov 12 22:30:37.166763 systemd[1]: Started cri-containerd-1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118.scope - libcontainer container 1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118. 
Nov 12 22:30:37.208649 containerd[1459]: time="2024-11-12T22:30:37.207032020Z" level=info msg="StartContainer for \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\" returns successfully" Nov 12 22:30:37.727970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60-rootfs.mount: Deactivated successfully. Nov 12 22:30:37.919688 kubelet[2548]: E1112 22:30:37.919657 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:37.927279 kubelet[2548]: E1112 22:30:37.927207 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:37.929463 containerd[1459]: time="2024-11-12T22:30:37.929428665Z" level=info msg="CreateContainer within sandbox \"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 22:30:37.932247 kubelet[2548]: I1112 22:30:37.932122 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-t2gjd" podStartSLOduration=1.434427138 podStartE2EDuration="18.932089949s" podCreationTimestamp="2024-11-12 22:30:19 +0000 UTC" firstStartedPulling="2024-11-12 22:30:19.630236122 +0000 UTC m=+6.849661366" lastFinishedPulling="2024-11-12 22:30:37.127898933 +0000 UTC m=+24.347324177" observedRunningTime="2024-11-12 22:30:37.931430708 +0000 UTC m=+25.150855952" watchObservedRunningTime="2024-11-12 22:30:37.932089949 +0000 UTC m=+25.151515193" Nov 12 22:30:37.947590 containerd[1459]: time="2024-11-12T22:30:37.947533094Z" level=info msg="CreateContainer within sandbox \"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\" for 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f\"" Nov 12 22:30:37.948155 containerd[1459]: time="2024-11-12T22:30:37.948123375Z" level=info msg="StartContainer for \"00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f\"" Nov 12 22:30:37.976701 systemd[1]: Started cri-containerd-00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f.scope - libcontainer container 00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f. Nov 12 22:30:37.994932 systemd[1]: cri-containerd-00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f.scope: Deactivated successfully. Nov 12 22:30:37.997282 containerd[1459]: time="2024-11-12T22:30:37.997230854Z" level=info msg="StartContainer for \"00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f\" returns successfully" Nov 12 22:30:38.018439 containerd[1459]: time="2024-11-12T22:30:38.018376086Z" level=info msg="shim disconnected" id=00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f namespace=k8s.io Nov 12 22:30:38.018439 containerd[1459]: time="2024-11-12T22:30:38.018428886Z" level=warning msg="cleaning up after shim disconnected" id=00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f namespace=k8s.io Nov 12 22:30:38.018439 containerd[1459]: time="2024-11-12T22:30:38.018439366Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:30:38.565810 systemd[1]: Started sshd@7-10.0.0.65:22-10.0.0.1:43090.service - OpenSSH per-connection server daemon (10.0.0.1:43090). Nov 12 22:30:38.605060 sshd[3259]: Accepted publickey for core from 10.0.0.1 port 43090 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:30:38.606412 sshd-session[3259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:30:38.609832 systemd-logind[1446]: New session 8 of user core. 
Nov 12 22:30:38.621704 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 22:30:38.727085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f-rootfs.mount: Deactivated successfully. Nov 12 22:30:38.752402 sshd[3261]: Connection closed by 10.0.0.1 port 43090 Nov 12 22:30:38.752731 sshd-session[3259]: pam_unix(sshd:session): session closed for user core Nov 12 22:30:38.755089 systemd[1]: sshd@7-10.0.0.65:22-10.0.0.1:43090.service: Deactivated successfully. Nov 12 22:30:38.756599 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 22:30:38.758796 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Nov 12 22:30:38.759721 systemd-logind[1446]: Removed session 8. Nov 12 22:30:38.931319 kubelet[2548]: E1112 22:30:38.930960 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:38.931662 kubelet[2548]: E1112 22:30:38.931597 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:38.935093 containerd[1459]: time="2024-11-12T22:30:38.935051391Z" level=info msg="CreateContainer within sandbox \"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 22:30:38.946344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1078253176.mount: Deactivated successfully. 
Nov 12 22:30:38.952678 containerd[1459]: time="2024-11-12T22:30:38.952642098Z" level=info msg="CreateContainer within sandbox \"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\"" Nov 12 22:30:38.953210 containerd[1459]: time="2024-11-12T22:30:38.953079218Z" level=info msg="StartContainer for \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\"" Nov 12 22:30:38.980782 systemd[1]: Started cri-containerd-0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4.scope - libcontainer container 0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4. Nov 12 22:30:39.009480 containerd[1459]: time="2024-11-12T22:30:39.009420343Z" level=info msg="StartContainer for \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\" returns successfully" Nov 12 22:30:39.100503 kubelet[2548]: I1112 22:30:39.100465 2548 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Nov 12 22:30:39.129880 systemd[1]: Created slice kubepods-burstable-pod925c5f76_665b_4e47_aac7_f7c714c934ac.slice - libcontainer container kubepods-burstable-pod925c5f76_665b_4e47_aac7_f7c714c934ac.slice. Nov 12 22:30:39.135898 systemd[1]: Created slice kubepods-burstable-pod8c737532_7b5b_4d86_ad28_349c1ba28e38.slice - libcontainer container kubepods-burstable-pod8c737532_7b5b_4d86_ad28_349c1ba28e38.slice. 
Nov 12 22:30:39.163647 kubelet[2548]: I1112 22:30:39.163600 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/925c5f76-665b-4e47-aac7-f7c714c934ac-config-volume\") pod \"coredns-6f6b679f8f-5rcwl\" (UID: \"925c5f76-665b-4e47-aac7-f7c714c934ac\") " pod="kube-system/coredns-6f6b679f8f-5rcwl" Nov 12 22:30:39.163647 kubelet[2548]: I1112 22:30:39.163649 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vl8c\" (UniqueName: \"kubernetes.io/projected/925c5f76-665b-4e47-aac7-f7c714c934ac-kube-api-access-5vl8c\") pod \"coredns-6f6b679f8f-5rcwl\" (UID: \"925c5f76-665b-4e47-aac7-f7c714c934ac\") " pod="kube-system/coredns-6f6b679f8f-5rcwl" Nov 12 22:30:39.163802 kubelet[2548]: I1112 22:30:39.163669 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpvzj\" (UniqueName: \"kubernetes.io/projected/8c737532-7b5b-4d86-ad28-349c1ba28e38-kube-api-access-fpvzj\") pod \"coredns-6f6b679f8f-rvplj\" (UID: \"8c737532-7b5b-4d86-ad28-349c1ba28e38\") " pod="kube-system/coredns-6f6b679f8f-rvplj" Nov 12 22:30:39.163802 kubelet[2548]: I1112 22:30:39.163691 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c737532-7b5b-4d86-ad28-349c1ba28e38-config-volume\") pod \"coredns-6f6b679f8f-rvplj\" (UID: \"8c737532-7b5b-4d86-ad28-349c1ba28e38\") " pod="kube-system/coredns-6f6b679f8f-rvplj" Nov 12 22:30:39.434020 kubelet[2548]: E1112 22:30:39.433978 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:39.435570 containerd[1459]: time="2024-11-12T22:30:39.435192546Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-5rcwl,Uid:925c5f76-665b-4e47-aac7-f7c714c934ac,Namespace:kube-system,Attempt:0,}" Nov 12 22:30:39.438430 kubelet[2548]: E1112 22:30:39.438392 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:39.438937 containerd[1459]: time="2024-11-12T22:30:39.438910911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rvplj,Uid:8c737532-7b5b-4d86-ad28-349c1ba28e38,Namespace:kube-system,Attempt:0,}" Nov 12 22:30:39.935599 kubelet[2548]: E1112 22:30:39.935567 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:39.950096 kubelet[2548]: I1112 22:30:39.950047 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lmtd8" podStartSLOduration=5.732354985 podStartE2EDuration="20.950031035s" podCreationTimestamp="2024-11-12 22:30:19 +0000 UTC" firstStartedPulling="2024-11-12 22:30:19.475202323 +0000 UTC m=+6.694627527" lastFinishedPulling="2024-11-12 22:30:34.692878333 +0000 UTC m=+21.912303577" observedRunningTime="2024-11-12 22:30:39.949195754 +0000 UTC m=+27.168620998" watchObservedRunningTime="2024-11-12 22:30:39.950031035 +0000 UTC m=+27.169456279" Nov 12 22:30:40.936768 kubelet[2548]: E1112 22:30:40.936653 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:41.183454 systemd-networkd[1393]: cilium_host: Link UP Nov 12 22:30:41.183602 systemd-networkd[1393]: cilium_net: Link UP Nov 12 22:30:41.184294 systemd-networkd[1393]: cilium_net: Gained carrier Nov 12 22:30:41.184642 systemd-networkd[1393]: cilium_host: Gained carrier Nov 12 22:30:41.184956 
systemd-networkd[1393]: cilium_net: Gained IPv6LL Nov 12 22:30:41.185291 systemd-networkd[1393]: cilium_host: Gained IPv6LL Nov 12 22:30:41.266700 systemd-networkd[1393]: cilium_vxlan: Link UP Nov 12 22:30:41.266707 systemd-networkd[1393]: cilium_vxlan: Gained carrier Nov 12 22:30:41.560590 kernel: NET: Registered PF_ALG protocol family Nov 12 22:30:41.943760 kubelet[2548]: E1112 22:30:41.943724 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:42.104371 systemd-networkd[1393]: lxc_health: Link UP Nov 12 22:30:42.110478 systemd-networkd[1393]: lxc_health: Gained carrier Nov 12 22:30:42.553182 systemd-networkd[1393]: lxc150b6f1dff29: Link UP Nov 12 22:30:42.559578 kernel: eth0: renamed from tmp3bd9f Nov 12 22:30:42.573046 systemd-networkd[1393]: lxc37ffe39f70c4: Link UP Nov 12 22:30:42.579575 kernel: eth0: renamed from tmp883f2 Nov 12 22:30:42.589821 systemd-networkd[1393]: lxc150b6f1dff29: Gained carrier Nov 12 22:30:42.591196 systemd-networkd[1393]: lxc37ffe39f70c4: Gained carrier Nov 12 22:30:42.718795 systemd-networkd[1393]: cilium_vxlan: Gained IPv6LL Nov 12 22:30:43.413076 kubelet[2548]: E1112 22:30:43.413029 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:43.764660 systemd[1]: Started sshd@8-10.0.0.65:22-10.0.0.1:39724.service - OpenSSH per-connection server daemon (10.0.0.1:39724). Nov 12 22:30:43.816315 sshd[3812]: Accepted publickey for core from 10.0.0.1 port 39724 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:30:43.818243 sshd-session[3812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:30:43.823092 systemd-logind[1446]: New session 9 of user core. 
Nov 12 22:30:43.829724 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 22:30:43.950344 sshd[3814]: Connection closed by 10.0.0.1 port 39724 Nov 12 22:30:43.949360 sshd-session[3812]: pam_unix(sshd:session): session closed for user core Nov 12 22:30:43.952492 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Nov 12 22:30:43.952664 systemd[1]: sshd@8-10.0.0.65:22-10.0.0.1:39724.service: Deactivated successfully. Nov 12 22:30:43.956287 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 22:30:43.958302 systemd-logind[1446]: Removed session 9. Nov 12 22:30:44.062880 systemd-networkd[1393]: lxc_health: Gained IPv6LL Nov 12 22:30:44.190702 systemd-networkd[1393]: lxc37ffe39f70c4: Gained IPv6LL Nov 12 22:30:44.510725 systemd-networkd[1393]: lxc150b6f1dff29: Gained IPv6LL Nov 12 22:30:46.096943 containerd[1459]: time="2024-11-12T22:30:46.096832575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:30:46.096943 containerd[1459]: time="2024-11-12T22:30:46.096899015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:30:46.097447 containerd[1459]: time="2024-11-12T22:30:46.096912255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:30:46.097447 containerd[1459]: time="2024-11-12T22:30:46.097000615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:30:46.098273 containerd[1459]: time="2024-11-12T22:30:46.098173376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:30:46.098273 containerd[1459]: time="2024-11-12T22:30:46.098224416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:30:46.098273 containerd[1459]: time="2024-11-12T22:30:46.098236216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:30:46.098581 containerd[1459]: time="2024-11-12T22:30:46.098492416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:30:46.110132 systemd[1]: run-containerd-runc-k8s.io-3bd9f00aa6de6b88371e2ff4b78c8ab226a55f2df948c91fd2bf2590aa956667-runc.SvBQ2F.mount: Deactivated successfully. Nov 12 22:30:46.118733 systemd[1]: Started cri-containerd-3bd9f00aa6de6b88371e2ff4b78c8ab226a55f2df948c91fd2bf2590aa956667.scope - libcontainer container 3bd9f00aa6de6b88371e2ff4b78c8ab226a55f2df948c91fd2bf2590aa956667. Nov 12 22:30:46.121692 systemd[1]: Started cri-containerd-883f298584fcc8465bf6ff65a08003c8b1badb44b735b4da84ca326adc04f6d2.scope - libcontainer container 883f298584fcc8465bf6ff65a08003c8b1badb44b735b4da84ca326adc04f6d2. 
Nov 12 22:30:46.131606 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:30:46.141477 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:30:46.153483 containerd[1459]: time="2024-11-12T22:30:46.153437786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rvplj,Uid:8c737532-7b5b-4d86-ad28-349c1ba28e38,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bd9f00aa6de6b88371e2ff4b78c8ab226a55f2df948c91fd2bf2590aa956667\"" Nov 12 22:30:46.154881 kubelet[2548]: E1112 22:30:46.154846 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:46.157987 containerd[1459]: time="2024-11-12T22:30:46.157931350Z" level=info msg="CreateContainer within sandbox \"3bd9f00aa6de6b88371e2ff4b78c8ab226a55f2df948c91fd2bf2590aa956667\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:30:46.162313 containerd[1459]: time="2024-11-12T22:30:46.162212074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5rcwl,Uid:925c5f76-665b-4e47-aac7-f7c714c934ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"883f298584fcc8465bf6ff65a08003c8b1badb44b735b4da84ca326adc04f6d2\"" Nov 12 22:30:46.163372 kubelet[2548]: E1112 22:30:46.163296 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:46.165386 containerd[1459]: time="2024-11-12T22:30:46.165223556Z" level=info msg="CreateContainer within sandbox \"883f298584fcc8465bf6ff65a08003c8b1badb44b735b4da84ca326adc04f6d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:30:46.179889 containerd[1459]: 
time="2024-11-12T22:30:46.179839090Z" level=info msg="CreateContainer within sandbox \"3bd9f00aa6de6b88371e2ff4b78c8ab226a55f2df948c91fd2bf2590aa956667\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f240165221065306f37ba80f402d0999ae9cae61d8a1caa3bc3706ac8b769fc\"" Nov 12 22:30:46.181574 containerd[1459]: time="2024-11-12T22:30:46.181075931Z" level=info msg="StartContainer for \"5f240165221065306f37ba80f402d0999ae9cae61d8a1caa3bc3706ac8b769fc\"" Nov 12 22:30:46.195958 containerd[1459]: time="2024-11-12T22:30:46.195908304Z" level=info msg="CreateContainer within sandbox \"883f298584fcc8465bf6ff65a08003c8b1badb44b735b4da84ca326adc04f6d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1689569af31df46f9c204d306e33f0d729e08316eb4ff104d60b49802f7ad3b1\"" Nov 12 22:30:46.198092 containerd[1459]: time="2024-11-12T22:30:46.197290105Z" level=info msg="StartContainer for \"1689569af31df46f9c204d306e33f0d729e08316eb4ff104d60b49802f7ad3b1\"" Nov 12 22:30:46.202048 systemd[1]: Started cri-containerd-5f240165221065306f37ba80f402d0999ae9cae61d8a1caa3bc3706ac8b769fc.scope - libcontainer container 5f240165221065306f37ba80f402d0999ae9cae61d8a1caa3bc3706ac8b769fc. Nov 12 22:30:46.226755 systemd[1]: Started cri-containerd-1689569af31df46f9c204d306e33f0d729e08316eb4ff104d60b49802f7ad3b1.scope - libcontainer container 1689569af31df46f9c204d306e33f0d729e08316eb4ff104d60b49802f7ad3b1. 
Nov 12 22:30:46.232589 containerd[1459]: time="2024-11-12T22:30:46.231932857Z" level=info msg="StartContainer for \"5f240165221065306f37ba80f402d0999ae9cae61d8a1caa3bc3706ac8b769fc\" returns successfully" Nov 12 22:30:46.261779 containerd[1459]: time="2024-11-12T22:30:46.260841443Z" level=info msg="StartContainer for \"1689569af31df46f9c204d306e33f0d729e08316eb4ff104d60b49802f7ad3b1\" returns successfully" Nov 12 22:30:46.947977 kubelet[2548]: E1112 22:30:46.947938 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:46.947977 kubelet[2548]: E1112 22:30:46.950014 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:46.968783 kubelet[2548]: I1112 22:30:46.968719 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rvplj" podStartSLOduration=27.968700521 podStartE2EDuration="27.968700521s" podCreationTimestamp="2024-11-12 22:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:30:46.968564801 +0000 UTC m=+34.187990085" watchObservedRunningTime="2024-11-12 22:30:46.968700521 +0000 UTC m=+34.188125765" Nov 12 22:30:46.982806 kubelet[2548]: I1112 22:30:46.982718 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5rcwl" podStartSLOduration=27.982700973 podStartE2EDuration="27.982700973s" podCreationTimestamp="2024-11-12 22:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:30:46.982264213 +0000 UTC m=+34.201689457" watchObservedRunningTime="2024-11-12 22:30:46.982700973 +0000 UTC 
m=+34.202126217" Nov 12 22:30:47.951098 kubelet[2548]: E1112 22:30:47.951057 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:47.951098 kubelet[2548]: E1112 22:30:47.951100 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:48.699887 kubelet[2548]: I1112 22:30:48.699829 2548 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 22:30:48.700362 kubelet[2548]: E1112 22:30:48.700310 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:48.953411 kubelet[2548]: E1112 22:30:48.953058 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:48.953411 kubelet[2548]: E1112 22:30:48.953263 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:48.953411 kubelet[2548]: E1112 22:30:48.953375 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:30:48.964394 systemd[1]: Started sshd@9-10.0.0.65:22-10.0.0.1:39740.service - OpenSSH per-connection server daemon (10.0.0.1:39740). 
Nov 12 22:30:49.012747 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 39740 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:30:49.014441 sshd-session[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:30:49.019214 systemd-logind[1446]: New session 10 of user core. Nov 12 22:30:49.025182 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 22:30:49.143936 sshd[4003]: Connection closed by 10.0.0.1 port 39740 Nov 12 22:30:49.144793 sshd-session[4001]: pam_unix(sshd:session): session closed for user core Nov 12 22:30:49.147275 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 22:30:49.148745 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Nov 12 22:30:49.148951 systemd[1]: sshd@9-10.0.0.65:22-10.0.0.1:39740.service: Deactivated successfully. Nov 12 22:30:49.151262 systemd-logind[1446]: Removed session 10. Nov 12 22:30:54.159827 systemd[1]: Started sshd@10-10.0.0.65:22-10.0.0.1:58832.service - OpenSSH per-connection server daemon (10.0.0.1:58832). Nov 12 22:30:54.222001 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 58832 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:30:54.223524 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:30:54.230039 systemd-logind[1446]: New session 11 of user core. Nov 12 22:30:54.241498 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 22:30:54.371386 sshd[4021]: Connection closed by 10.0.0.1 port 58832 Nov 12 22:30:54.372603 sshd-session[4019]: pam_unix(sshd:session): session closed for user core Nov 12 22:30:54.380308 systemd[1]: sshd@10-10.0.0.65:22-10.0.0.1:58832.service: Deactivated successfully. Nov 12 22:30:54.383281 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 22:30:54.384655 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. 
Nov 12 22:30:54.389965 systemd[1]: Started sshd@11-10.0.0.65:22-10.0.0.1:58848.service - OpenSSH per-connection server daemon (10.0.0.1:58848). Nov 12 22:30:54.393601 systemd-logind[1446]: Removed session 11. Nov 12 22:30:54.427672 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 58848 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:30:54.428949 sshd-session[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:30:54.434008 systemd-logind[1446]: New session 12 of user core. Nov 12 22:30:54.440776 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 22:30:54.595916 sshd[4036]: Connection closed by 10.0.0.1 port 58848 Nov 12 22:30:54.596667 sshd-session[4034]: pam_unix(sshd:session): session closed for user core Nov 12 22:30:54.609539 systemd[1]: sshd@11-10.0.0.65:22-10.0.0.1:58848.service: Deactivated successfully. Nov 12 22:30:54.612652 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 22:30:54.614470 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Nov 12 22:30:54.627985 systemd[1]: Started sshd@12-10.0.0.65:22-10.0.0.1:58854.service - OpenSSH per-connection server daemon (10.0.0.1:58854). Nov 12 22:30:54.629372 systemd-logind[1446]: Removed session 12. Nov 12 22:30:54.670874 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 58854 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:30:54.672356 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:30:54.676957 systemd-logind[1446]: New session 13 of user core. Nov 12 22:30:54.688546 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 22:30:54.807610 sshd[4048]: Connection closed by 10.0.0.1 port 58854 Nov 12 22:30:54.807404 sshd-session[4046]: pam_unix(sshd:session): session closed for user core Nov 12 22:30:54.809806 systemd[1]: session-13.scope: Deactivated successfully. 
Nov 12 22:30:54.811143 systemd[1]: sshd@12-10.0.0.65:22-10.0.0.1:58854.service: Deactivated successfully. Nov 12 22:30:54.814117 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Nov 12 22:30:54.814941 systemd-logind[1446]: Removed session 13. Nov 12 22:30:59.818150 systemd[1]: Started sshd@13-10.0.0.65:22-10.0.0.1:58870.service - OpenSSH per-connection server daemon (10.0.0.1:58870). Nov 12 22:30:59.857316 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 58870 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:30:59.858442 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:30:59.861906 systemd-logind[1446]: New session 14 of user core. Nov 12 22:30:59.872701 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 22:30:59.987656 sshd[4062]: Connection closed by 10.0.0.1 port 58870 Nov 12 22:30:59.987997 sshd-session[4060]: pam_unix(sshd:session): session closed for user core Nov 12 22:30:59.991523 systemd[1]: sshd@13-10.0.0.65:22-10.0.0.1:58870.service: Deactivated successfully. Nov 12 22:30:59.994131 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 22:30:59.994829 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Nov 12 22:30:59.995689 systemd-logind[1446]: Removed session 14. Nov 12 22:31:04.999094 systemd[1]: Started sshd@14-10.0.0.65:22-10.0.0.1:46066.service - OpenSSH per-connection server daemon (10.0.0.1:46066). Nov 12 22:31:05.038314 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 46066 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:31:05.039536 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:31:05.043089 systemd-logind[1446]: New session 15 of user core. Nov 12 22:31:05.058714 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 12 22:31:05.166446 sshd[4077]: Connection closed by 10.0.0.1 port 46066 Nov 12 22:31:05.166799 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Nov 12 22:31:05.180020 systemd[1]: sshd@14-10.0.0.65:22-10.0.0.1:46066.service: Deactivated successfully. Nov 12 22:31:05.181540 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 22:31:05.182876 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Nov 12 22:31:05.184093 systemd[1]: Started sshd@15-10.0.0.65:22-10.0.0.1:46068.service - OpenSSH per-connection server daemon (10.0.0.1:46068). Nov 12 22:31:05.185699 systemd-logind[1446]: Removed session 15. Nov 12 22:31:05.223433 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 46068 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:31:05.224489 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:31:05.228089 systemd-logind[1446]: New session 16 of user core. Nov 12 22:31:05.242708 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 22:31:05.434094 sshd[4092]: Connection closed by 10.0.0.1 port 46068 Nov 12 22:31:05.434612 sshd-session[4090]: pam_unix(sshd:session): session closed for user core Nov 12 22:31:05.450932 systemd[1]: sshd@15-10.0.0.65:22-10.0.0.1:46068.service: Deactivated successfully. Nov 12 22:31:05.452382 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 22:31:05.453619 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Nov 12 22:31:05.464881 systemd[1]: Started sshd@16-10.0.0.65:22-10.0.0.1:46078.service - OpenSSH per-connection server daemon (10.0.0.1:46078). Nov 12 22:31:05.466032 systemd-logind[1446]: Removed session 16. 
Nov 12 22:31:05.505905 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 46078 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:31:05.507000 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:31:05.510482 systemd-logind[1446]: New session 17 of user core. Nov 12 22:31:05.519682 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 22:31:06.737273 sshd[4105]: Connection closed by 10.0.0.1 port 46078 Nov 12 22:31:06.737937 sshd-session[4103]: pam_unix(sshd:session): session closed for user core Nov 12 22:31:06.746141 systemd[1]: sshd@16-10.0.0.65:22-10.0.0.1:46078.service: Deactivated successfully. Nov 12 22:31:06.752102 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 22:31:06.756984 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Nov 12 22:31:06.768880 systemd[1]: Started sshd@17-10.0.0.65:22-10.0.0.1:46084.service - OpenSSH per-connection server daemon (10.0.0.1:46084). Nov 12 22:31:06.770690 systemd-logind[1446]: Removed session 17. Nov 12 22:31:06.807492 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 46084 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:31:06.808815 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:31:06.813010 systemd-logind[1446]: New session 18 of user core. Nov 12 22:31:06.824698 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 22:31:07.043133 sshd[4127]: Connection closed by 10.0.0.1 port 46084 Nov 12 22:31:07.044304 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Nov 12 22:31:07.050029 systemd[1]: sshd@17-10.0.0.65:22-10.0.0.1:46084.service: Deactivated successfully. Nov 12 22:31:07.052145 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 22:31:07.053882 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. 
Nov 12 22:31:07.059852 systemd[1]: Started sshd@18-10.0.0.65:22-10.0.0.1:46096.service - OpenSSH per-connection server daemon (10.0.0.1:46096). Nov 12 22:31:07.060938 systemd-logind[1446]: Removed session 18. Nov 12 22:31:07.095727 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 46096 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:31:07.096917 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:31:07.101081 systemd-logind[1446]: New session 19 of user core. Nov 12 22:31:07.109767 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 22:31:07.220542 sshd[4140]: Connection closed by 10.0.0.1 port 46096 Nov 12 22:31:07.221089 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Nov 12 22:31:07.223887 systemd[1]: sshd@18-10.0.0.65:22-10.0.0.1:46096.service: Deactivated successfully. Nov 12 22:31:07.225699 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 22:31:07.227139 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. Nov 12 22:31:07.227964 systemd-logind[1446]: Removed session 19. Nov 12 22:31:12.231169 systemd[1]: Started sshd@19-10.0.0.65:22-10.0.0.1:46102.service - OpenSSH per-connection server daemon (10.0.0.1:46102). Nov 12 22:31:12.269863 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 46102 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:31:12.270987 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:31:12.274332 systemd-logind[1446]: New session 20 of user core. Nov 12 22:31:12.282759 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 12 22:31:12.390140 sshd[4158]: Connection closed by 10.0.0.1 port 46102 Nov 12 22:31:12.390463 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Nov 12 22:31:12.394361 systemd[1]: sshd@19-10.0.0.65:22-10.0.0.1:46102.service: Deactivated successfully. Nov 12 22:31:12.396105 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 22:31:12.396687 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. Nov 12 22:31:12.397391 systemd-logind[1446]: Removed session 20. Nov 12 22:31:17.401948 systemd[1]: Started sshd@20-10.0.0.65:22-10.0.0.1:46198.service - OpenSSH per-connection server daemon (10.0.0.1:46198). Nov 12 22:31:17.440997 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 46198 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:31:17.442081 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:31:17.445239 systemd-logind[1446]: New session 21 of user core. Nov 12 22:31:17.455752 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 22:31:17.559510 sshd[4176]: Connection closed by 10.0.0.1 port 46198 Nov 12 22:31:17.559830 sshd-session[4174]: pam_unix(sshd:session): session closed for user core Nov 12 22:31:17.562808 systemd[1]: sshd@20-10.0.0.65:22-10.0.0.1:46198.service: Deactivated successfully. Nov 12 22:31:17.565075 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 22:31:17.565657 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit. Nov 12 22:31:17.566378 systemd-logind[1446]: Removed session 21. Nov 12 22:31:22.570142 systemd[1]: Started sshd@21-10.0.0.65:22-10.0.0.1:34596.service - OpenSSH per-connection server daemon (10.0.0.1:34596). 
Nov 12 22:31:22.610215 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 34596 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:31:22.611491 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:31:22.615227 systemd-logind[1446]: New session 22 of user core. Nov 12 22:31:22.632770 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 22:31:22.740225 sshd[4192]: Connection closed by 10.0.0.1 port 34596 Nov 12 22:31:22.740751 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Nov 12 22:31:22.753136 systemd[1]: sshd@21-10.0.0.65:22-10.0.0.1:34596.service: Deactivated successfully. Nov 12 22:31:22.754682 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 22:31:22.756823 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit. Nov 12 22:31:22.758293 systemd[1]: Started sshd@22-10.0.0.65:22-10.0.0.1:34602.service - OpenSSH per-connection server daemon (10.0.0.1:34602). Nov 12 22:31:22.759279 systemd-logind[1446]: Removed session 22. Nov 12 22:31:22.797884 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 34602 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:31:22.799048 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:31:22.803114 systemd-logind[1446]: New session 23 of user core. Nov 12 22:31:22.812709 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 12 22:31:24.824263 containerd[1459]: time="2024-11-12T22:31:24.824191676Z" level=info msg="StopContainer for \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\" with timeout 30 (s)" Nov 12 22:31:24.824889 containerd[1459]: time="2024-11-12T22:31:24.824635360Z" level=info msg="Stop container \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\" with signal terminated" Nov 12 22:31:24.838181 systemd[1]: cri-containerd-1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118.scope: Deactivated successfully. Nov 12 22:31:24.866122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118-rootfs.mount: Deactivated successfully. Nov 12 22:31:24.871480 containerd[1459]: time="2024-11-12T22:31:24.870927293Z" level=info msg="shim disconnected" id=1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118 namespace=k8s.io Nov 12 22:31:24.871654 containerd[1459]: time="2024-11-12T22:31:24.871631179Z" level=warning msg="cleaning up after shim disconnected" id=1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118 namespace=k8s.io Nov 12 22:31:24.871711 containerd[1459]: time="2024-11-12T22:31:24.871698380Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:31:24.873532 containerd[1459]: time="2024-11-12T22:31:24.873499756Z" level=info msg="StopContainer for \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\" with timeout 2 (s)" Nov 12 22:31:24.873733 containerd[1459]: time="2024-11-12T22:31:24.873707398Z" level=info msg="Stop container \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\" with signal terminated" Nov 12 22:31:24.879717 systemd-networkd[1393]: lxc_health: Link DOWN Nov 12 22:31:24.879726 systemd-networkd[1393]: lxc_health: Lost carrier Nov 12 22:31:24.894041 containerd[1459]: time="2024-11-12T22:31:24.893992259Z" level=error msg="failed to reload cni configuration after receiving fs 
change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 22:31:24.901647 systemd[1]: cri-containerd-0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4.scope: Deactivated successfully. Nov 12 22:31:24.902190 systemd[1]: cri-containerd-0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4.scope: Consumed 6.378s CPU time. Nov 12 22:31:24.924442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4-rootfs.mount: Deactivated successfully. Nov 12 22:31:24.931063 containerd[1459]: time="2024-11-12T22:31:24.930994789Z" level=info msg="StopContainer for \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\" returns successfully" Nov 12 22:31:24.932618 containerd[1459]: time="2024-11-12T22:31:24.932415281Z" level=info msg="shim disconnected" id=0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4 namespace=k8s.io Nov 12 22:31:24.932618 containerd[1459]: time="2024-11-12T22:31:24.932457202Z" level=warning msg="cleaning up after shim disconnected" id=0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4 namespace=k8s.io Nov 12 22:31:24.932618 containerd[1459]: time="2024-11-12T22:31:24.932465002Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:31:24.933664 containerd[1459]: time="2024-11-12T22:31:24.933527411Z" level=info msg="StopPodSandbox for \"7bc2e0eac3483318597a4abbacd0d00aae18ed7103c6c1eefe0d28dfd1d98847\"" Nov 12 22:31:24.933664 containerd[1459]: time="2024-11-12T22:31:24.933598532Z" level=info msg="Container to stop \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:31:24.935276 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bc2e0eac3483318597a4abbacd0d00aae18ed7103c6c1eefe0d28dfd1d98847-shm.mount: Deactivated successfully. Nov 12 22:31:24.941601 systemd[1]: cri-containerd-7bc2e0eac3483318597a4abbacd0d00aae18ed7103c6c1eefe0d28dfd1d98847.scope: Deactivated successfully. Nov 12 22:31:24.955726 containerd[1459]: time="2024-11-12T22:31:24.955638168Z" level=info msg="StopContainer for \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\" returns successfully" Nov 12 22:31:24.956274 containerd[1459]: time="2024-11-12T22:31:24.956238334Z" level=info msg="StopPodSandbox for \"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\"" Nov 12 22:31:24.956323 containerd[1459]: time="2024-11-12T22:31:24.956287934Z" level=info msg="Container to stop \"388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:31:24.956323 containerd[1459]: time="2024-11-12T22:31:24.956300534Z" level=info msg="Container to stop \"fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:31:24.956323 containerd[1459]: time="2024-11-12T22:31:24.956309454Z" level=info msg="Container to stop \"2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:31:24.956323 containerd[1459]: time="2024-11-12T22:31:24.956318255Z" level=info msg="Container to stop \"00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:31:24.956444 containerd[1459]: time="2024-11-12T22:31:24.956326895Z" level=info msg="Container to stop \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:31:24.958260 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955-shm.mount: Deactivated successfully. Nov 12 22:31:24.963675 systemd[1]: cri-containerd-ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955.scope: Deactivated successfully. Nov 12 22:31:24.967271 containerd[1459]: time="2024-11-12T22:31:24.967196792Z" level=info msg="shim disconnected" id=7bc2e0eac3483318597a4abbacd0d00aae18ed7103c6c1eefe0d28dfd1d98847 namespace=k8s.io Nov 12 22:31:24.967271 containerd[1459]: time="2024-11-12T22:31:24.967264432Z" level=warning msg="cleaning up after shim disconnected" id=7bc2e0eac3483318597a4abbacd0d00aae18ed7103c6c1eefe0d28dfd1d98847 namespace=k8s.io Nov 12 22:31:24.967271 containerd[1459]: time="2024-11-12T22:31:24.967272672Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:31:24.986936 containerd[1459]: time="2024-11-12T22:31:24.986816607Z" level=info msg="shim disconnected" id=ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955 namespace=k8s.io Nov 12 22:31:24.986936 containerd[1459]: time="2024-11-12T22:31:24.986876727Z" level=warning msg="cleaning up after shim disconnected" id=ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955 namespace=k8s.io Nov 12 22:31:24.986936 containerd[1459]: time="2024-11-12T22:31:24.986893807Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:31:24.994486 containerd[1459]: time="2024-11-12T22:31:24.994407314Z" level=info msg="TearDown network for sandbox \"7bc2e0eac3483318597a4abbacd0d00aae18ed7103c6c1eefe0d28dfd1d98847\" successfully" Nov 12 22:31:24.994486 containerd[1459]: time="2024-11-12T22:31:24.994457155Z" level=info msg="StopPodSandbox for \"7bc2e0eac3483318597a4abbacd0d00aae18ed7103c6c1eefe0d28dfd1d98847\" returns successfully" Nov 12 22:31:25.000091 containerd[1459]: time="2024-11-12T22:31:24.999356278Z" level=info msg="TearDown network for sandbox 
\"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\" successfully" Nov 12 22:31:25.000091 containerd[1459]: time="2024-11-12T22:31:24.999388119Z" level=info msg="StopPodSandbox for \"ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955\" returns successfully" Nov 12 22:31:25.017020 kubelet[2548]: I1112 22:31:25.016986 2548 scope.go:117] "RemoveContainer" containerID="1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118" Nov 12 22:31:25.019003 containerd[1459]: time="2024-11-12T22:31:25.017815279Z" level=info msg="RemoveContainer for \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\"" Nov 12 22:31:25.023589 containerd[1459]: time="2024-11-12T22:31:25.022448999Z" level=info msg="RemoveContainer for \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\" returns successfully" Nov 12 22:31:25.023589 containerd[1459]: time="2024-11-12T22:31:25.023089605Z" level=error msg="ContainerStatus for \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\": not found" Nov 12 22:31:25.023734 kubelet[2548]: I1112 22:31:25.022879 2548 scope.go:117] "RemoveContainer" containerID="1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118" Nov 12 22:31:25.023734 kubelet[2548]: E1112 22:31:25.023251 2548 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\": not found" containerID="1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118" Nov 12 22:31:25.023734 kubelet[2548]: I1112 22:31:25.023275 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118"} 
err="failed to get container status \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\": rpc error: code = NotFound desc = an error occurred when try to find container \"1000ee9ded8081d6fce7e429d320d9807e5c89c81656916de76458a9f73b0118\": not found" Nov 12 22:31:25.026878 kubelet[2548]: I1112 22:31:25.026860 2548 scope.go:117] "RemoveContainer" containerID="0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4" Nov 12 22:31:25.028985 containerd[1459]: time="2024-11-12T22:31:25.028949615Z" level=info msg="RemoveContainer for \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\"" Nov 12 22:31:25.031464 containerd[1459]: time="2024-11-12T22:31:25.031434317Z" level=info msg="RemoveContainer for \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\" returns successfully" Nov 12 22:31:25.031716 kubelet[2548]: I1112 22:31:25.031634 2548 scope.go:117] "RemoveContainer" containerID="00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f" Nov 12 22:31:25.036659 containerd[1459]: time="2024-11-12T22:31:25.036613562Z" level=info msg="RemoveContainer for \"00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f\"" Nov 12 22:31:25.042936 containerd[1459]: time="2024-11-12T22:31:25.042880856Z" level=info msg="RemoveContainer for \"00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f\" returns successfully" Nov 12 22:31:25.043237 kubelet[2548]: I1112 22:31:25.043143 2548 scope.go:117] "RemoveContainer" containerID="2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60" Nov 12 22:31:25.044160 containerd[1459]: time="2024-11-12T22:31:25.044129587Z" level=info msg="RemoveContainer for \"2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60\"" Nov 12 22:31:25.046379 containerd[1459]: time="2024-11-12T22:31:25.046341006Z" level=info msg="RemoveContainer for \"2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60\" returns successfully" Nov 12 22:31:25.046556 
kubelet[2548]: I1112 22:31:25.046518 2548 scope.go:117] "RemoveContainer" containerID="fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf" Nov 12 22:31:25.047409 containerd[1459]: time="2024-11-12T22:31:25.047332695Z" level=info msg="RemoveContainer for \"fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf\"" Nov 12 22:31:25.052844 containerd[1459]: time="2024-11-12T22:31:25.052802582Z" level=info msg="RemoveContainer for \"fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf\" returns successfully" Nov 12 22:31:25.053111 kubelet[2548]: I1112 22:31:25.053000 2548 scope.go:117] "RemoveContainer" containerID="388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad" Nov 12 22:31:25.053911 containerd[1459]: time="2024-11-12T22:31:25.053887872Z" level=info msg="RemoveContainer for \"388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad\"" Nov 12 22:31:25.055845 containerd[1459]: time="2024-11-12T22:31:25.055813928Z" level=info msg="RemoveContainer for \"388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad\" returns successfully" Nov 12 22:31:25.055985 kubelet[2548]: I1112 22:31:25.055965 2548 scope.go:117] "RemoveContainer" containerID="0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4" Nov 12 22:31:25.056241 containerd[1459]: time="2024-11-12T22:31:25.056174852Z" level=error msg="ContainerStatus for \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\": not found" Nov 12 22:31:25.056359 kubelet[2548]: E1112 22:31:25.056338 2548 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\": not found" 
containerID="0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4" Nov 12 22:31:25.056488 kubelet[2548]: I1112 22:31:25.056424 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4"} err="failed to get container status \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\": rpc error: code = NotFound desc = an error occurred when try to find container \"0818b7f409fe6e1905632ccfb65a4a183697a2f18db989b7481da73fa42f6ae4\": not found" Nov 12 22:31:25.056488 kubelet[2548]: I1112 22:31:25.056449 2548 scope.go:117] "RemoveContainer" containerID="00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f" Nov 12 22:31:25.056719 containerd[1459]: time="2024-11-12T22:31:25.056686416Z" level=error msg="ContainerStatus for \"00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f\": not found" Nov 12 22:31:25.056864 kubelet[2548]: E1112 22:31:25.056826 2548 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f\": not found" containerID="00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f" Nov 12 22:31:25.057014 kubelet[2548]: I1112 22:31:25.056927 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f"} err="failed to get container status \"00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"00176b0a60da12972a4e55f047898500989be8b23645ea3007e2cefe04b46c1f\": not found" Nov 12 
22:31:25.057014 kubelet[2548]: I1112 22:31:25.056952 2548 scope.go:117] "RemoveContainer" containerID="2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60" Nov 12 22:31:25.057153 containerd[1459]: time="2024-11-12T22:31:25.057117180Z" level=error msg="ContainerStatus for \"2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60\": not found" Nov 12 22:31:25.057275 kubelet[2548]: E1112 22:31:25.057255 2548 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60\": not found" containerID="2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60" Nov 12 22:31:25.057318 kubelet[2548]: I1112 22:31:25.057281 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60"} err="failed to get container status \"2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60\": rpc error: code = NotFound desc = an error occurred when try to find container \"2fb01dc0f30b4d87fb6c39ff2222aaa33b56f3d4b1f2ca67f9662ad632606f60\": not found" Nov 12 22:31:25.057318 kubelet[2548]: I1112 22:31:25.057298 2548 scope.go:117] "RemoveContainer" containerID="fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf" Nov 12 22:31:25.057473 containerd[1459]: time="2024-11-12T22:31:25.057447623Z" level=error msg="ContainerStatus for \"fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf\": not found" Nov 12 22:31:25.057558 kubelet[2548]: E1112 22:31:25.057530 2548 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf\": not found" containerID="fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf" Nov 12 22:31:25.057558 kubelet[2548]: I1112 22:31:25.057562 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf"} err="failed to get container status \"fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe91578781469fb3e8d16fedbcc4d3c7da7c7c8865592749845fb02da688dadf\": not found" Nov 12 22:31:25.057664 kubelet[2548]: I1112 22:31:25.057576 2548 scope.go:117] "RemoveContainer" containerID="388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad" Nov 12 22:31:25.057780 containerd[1459]: time="2024-11-12T22:31:25.057736585Z" level=error msg="ContainerStatus for \"388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad\": not found" Nov 12 22:31:25.057897 kubelet[2548]: E1112 22:31:25.057880 2548 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad\": not found" containerID="388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad" Nov 12 22:31:25.058002 kubelet[2548]: I1112 22:31:25.057940 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad"} err="failed to get container status 
\"388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"388c1cb42d990712771e2c16542842f6bf46efcf10ec497576a194be8bda44ad\": not found" Nov 12 22:31:25.131609 kubelet[2548]: I1112 22:31:25.131304 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86fc4af8-f7fc-4739-a290-c0210a79f843-cilium-config-path\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " Nov 12 22:31:25.131609 kubelet[2548]: I1112 22:31:25.131349 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-cilium-run\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " Nov 12 22:31:25.131609 kubelet[2548]: I1112 22:31:25.131395 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b48j\" (UniqueName: \"kubernetes.io/projected/86fc4af8-f7fc-4739-a290-c0210a79f843-kube-api-access-6b48j\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " Nov 12 22:31:25.131609 kubelet[2548]: I1112 22:31:25.131414 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86fc4af8-f7fc-4739-a290-c0210a79f843-clustermesh-secrets\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " Nov 12 22:31:25.131609 kubelet[2548]: I1112 22:31:25.131430 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-etc-cni-netd\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") 
" Nov 12 22:31:25.131609 kubelet[2548]: I1112 22:31:25.131444 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-cni-path\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " Nov 12 22:31:25.133083 kubelet[2548]: I1112 22:31:25.131460 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4ee1684-8137-49a2-bfdd-a5b45b5744e5-cilium-config-path\") pod \"f4ee1684-8137-49a2-bfdd-a5b45b5744e5\" (UID: \"f4ee1684-8137-49a2-bfdd-a5b45b5744e5\") " Nov 12 22:31:25.133083 kubelet[2548]: I1112 22:31:25.131475 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-cilium-cgroup\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " Nov 12 22:31:25.133083 kubelet[2548]: I1112 22:31:25.131489 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-hostproc\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " Nov 12 22:31:25.133083 kubelet[2548]: I1112 22:31:25.131505 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86fc4af8-f7fc-4739-a290-c0210a79f843-hubble-tls\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " Nov 12 22:31:25.133083 kubelet[2548]: I1112 22:31:25.131521 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmwkw\" (UniqueName: 
\"kubernetes.io/projected/f4ee1684-8137-49a2-bfdd-a5b45b5744e5-kube-api-access-rmwkw\") pod \"f4ee1684-8137-49a2-bfdd-a5b45b5744e5\" (UID: \"f4ee1684-8137-49a2-bfdd-a5b45b5744e5\") " Nov 12 22:31:25.133083 kubelet[2548]: I1112 22:31:25.131535 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-lib-modules\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " Nov 12 22:31:25.133216 kubelet[2548]: I1112 22:31:25.131607 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-bpf-maps\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " Nov 12 22:31:25.133216 kubelet[2548]: I1112 22:31:25.131634 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-host-proc-sys-kernel\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " Nov 12 22:31:25.133216 kubelet[2548]: I1112 22:31:25.131669 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-host-proc-sys-net\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " Nov 12 22:31:25.133216 kubelet[2548]: I1112 22:31:25.131690 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-xtables-lock\") pod \"86fc4af8-f7fc-4739-a290-c0210a79f843\" (UID: \"86fc4af8-f7fc-4739-a290-c0210a79f843\") " Nov 12 22:31:25.137823 kubelet[2548]: I1112 22:31:25.137787 2548 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4ee1684-8137-49a2-bfdd-a5b45b5744e5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4ee1684-8137-49a2-bfdd-a5b45b5744e5" (UID: "f4ee1684-8137-49a2-bfdd-a5b45b5744e5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:31:25.137875 kubelet[2548]: I1112 22:31:25.137859 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-hostproc" (OuterVolumeSpecName: "hostproc") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:31:25.137904 kubelet[2548]: I1112 22:31:25.137881 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:31:25.138863 kubelet[2548]: I1112 22:31:25.138834 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86fc4af8-f7fc-4739-a290-c0210a79f843-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:31:25.138931 kubelet[2548]: I1112 22:31:25.138869 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:31:25.138931 kubelet[2548]: I1112 22:31:25.138898 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:31:25.138931 kubelet[2548]: I1112 22:31:25.138914 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:31:25.140971 kubelet[2548]: I1112 22:31:25.140747 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-cni-path" (OuterVolumeSpecName: "cni-path") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:31:25.140971 kubelet[2548]: I1112 22:31:25.140804 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:31:25.140971 kubelet[2548]: I1112 22:31:25.140844 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86fc4af8-f7fc-4739-a290-c0210a79f843-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:31:25.140971 kubelet[2548]: I1112 22:31:25.140884 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:31:25.140971 kubelet[2548]: I1112 22:31:25.140904 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:31:25.141613 kubelet[2548]: I1112 22:31:25.140920 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:31:25.141710 kubelet[2548]: I1112 22:31:25.141666 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86fc4af8-f7fc-4739-a290-c0210a79f843-kube-api-access-6b48j" (OuterVolumeSpecName: "kube-api-access-6b48j") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "kube-api-access-6b48j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:31:25.142783 kubelet[2548]: I1112 22:31:25.142749 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ee1684-8137-49a2-bfdd-a5b45b5744e5-kube-api-access-rmwkw" (OuterVolumeSpecName: "kube-api-access-rmwkw") pod "f4ee1684-8137-49a2-bfdd-a5b45b5744e5" (UID: "f4ee1684-8137-49a2-bfdd-a5b45b5744e5"). InnerVolumeSpecName "kube-api-access-rmwkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:31:25.143529 kubelet[2548]: I1112 22:31:25.143502 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86fc4af8-f7fc-4739-a290-c0210a79f843-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "86fc4af8-f7fc-4739-a290-c0210a79f843" (UID: "86fc4af8-f7fc-4739-a290-c0210a79f843"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 22:31:25.232872 kubelet[2548]: I1112 22:31:25.232826 2548 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.232872 kubelet[2548]: I1112 22:31:25.232862 2548 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.232872 kubelet[2548]: I1112 22:31:25.232874 2548 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.232872 kubelet[2548]: I1112 22:31:25.232883 2548 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.233085 kubelet[2548]: I1112 22:31:25.232891 2548 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86fc4af8-f7fc-4739-a290-c0210a79f843-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.233085 kubelet[2548]: I1112 22:31:25.232898 2548 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.233085 kubelet[2548]: I1112 22:31:25.232906 2548 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6b48j\" (UniqueName: \"kubernetes.io/projected/86fc4af8-f7fc-4739-a290-c0210a79f843-kube-api-access-6b48j\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.233085 
kubelet[2548]: I1112 22:31:25.232915 2548 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.233085 kubelet[2548]: I1112 22:31:25.232922 2548 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.233085 kubelet[2548]: I1112 22:31:25.232929 2548 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4ee1684-8137-49a2-bfdd-a5b45b5744e5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.233085 kubelet[2548]: I1112 22:31:25.232936 2548 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86fc4af8-f7fc-4739-a290-c0210a79f843-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.233085 kubelet[2548]: I1112 22:31:25.232944 2548 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.233332 kubelet[2548]: I1112 22:31:25.232952 2548 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.233332 kubelet[2548]: I1112 22:31:25.232959 2548 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86fc4af8-f7fc-4739-a290-c0210a79f843-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.233332 kubelet[2548]: I1112 22:31:25.232967 2548 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rmwkw\" 
(UniqueName: \"kubernetes.io/projected/f4ee1684-8137-49a2-bfdd-a5b45b5744e5-kube-api-access-rmwkw\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.233332 kubelet[2548]: I1112 22:31:25.232974 2548 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86fc4af8-f7fc-4739-a290-c0210a79f843-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 12 22:31:25.321025 systemd[1]: Removed slice kubepods-besteffort-podf4ee1684_8137_49a2_bfdd_a5b45b5744e5.slice - libcontainer container kubepods-besteffort-podf4ee1684_8137_49a2_bfdd_a5b45b5744e5.slice. Nov 12 22:31:25.848601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bc2e0eac3483318597a4abbacd0d00aae18ed7103c6c1eefe0d28dfd1d98847-rootfs.mount: Deactivated successfully. Nov 12 22:31:25.848699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec3b7fe5f3c04c603dbd0a21c14dae40e08f384e23328e1d9e6545817d0a6955-rootfs.mount: Deactivated successfully. Nov 12 22:31:25.848751 systemd[1]: var-lib-kubelet-pods-f4ee1684\x2d8137\x2d49a2\x2dbfdd\x2da5b45b5744e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drmwkw.mount: Deactivated successfully. Nov 12 22:31:25.848808 systemd[1]: var-lib-kubelet-pods-86fc4af8\x2df7fc\x2d4739\x2da290\x2dc0210a79f843-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6b48j.mount: Deactivated successfully. Nov 12 22:31:25.848862 systemd[1]: var-lib-kubelet-pods-86fc4af8\x2df7fc\x2d4739\x2da290\x2dc0210a79f843-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 12 22:31:25.848912 systemd[1]: var-lib-kubelet-pods-86fc4af8\x2df7fc\x2d4739\x2da290\x2dc0210a79f843-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 12 22:31:26.035591 systemd[1]: Removed slice kubepods-burstable-pod86fc4af8_f7fc_4739_a290_c0210a79f843.slice - libcontainer container kubepods-burstable-pod86fc4af8_f7fc_4739_a290_c0210a79f843.slice. 
Nov 12 22:31:26.035701 systemd[1]: kubepods-burstable-pod86fc4af8_f7fc_4739_a290_c0210a79f843.slice: Consumed 6.518s CPU time. Nov 12 22:31:26.784546 sshd[4206]: Connection closed by 10.0.0.1 port 34602 Nov 12 22:31:26.785047 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Nov 12 22:31:26.799387 systemd[1]: sshd@22-10.0.0.65:22-10.0.0.1:34602.service: Deactivated successfully. Nov 12 22:31:26.801401 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 22:31:26.801622 systemd[1]: session-23.scope: Consumed 1.350s CPU time. Nov 12 22:31:26.802807 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit. Nov 12 22:31:26.810892 systemd[1]: Started sshd@23-10.0.0.65:22-10.0.0.1:34614.service - OpenSSH per-connection server daemon (10.0.0.1:34614). Nov 12 22:31:26.811784 systemd-logind[1446]: Removed session 23. Nov 12 22:31:26.851493 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 34614 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:31:26.852906 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:31:26.857262 systemd-logind[1446]: New session 24 of user core. Nov 12 22:31:26.859195 kubelet[2548]: I1112 22:31:26.859155 2548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86fc4af8-f7fc-4739-a290-c0210a79f843" path="/var/lib/kubelet/pods/86fc4af8-f7fc-4739-a290-c0210a79f843/volumes" Nov 12 22:31:26.869101 kubelet[2548]: I1112 22:31:26.859745 2548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4ee1684-8137-49a2-bfdd-a5b45b5744e5" path="/var/lib/kubelet/pods/f4ee1684-8137-49a2-bfdd-a5b45b5744e5/volumes" Nov 12 22:31:26.868768 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 12 22:31:27.907224 kubelet[2548]: E1112 22:31:27.907180 2548 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 22:31:27.963893 sshd[4371]: Connection closed by 10.0.0.1 port 34614 Nov 12 22:31:27.964351 sshd-session[4369]: pam_unix(sshd:session): session closed for user core Nov 12 22:31:27.976163 systemd[1]: sshd@23-10.0.0.65:22-10.0.0.1:34614.service: Deactivated successfully. Nov 12 22:31:27.980628 kubelet[2548]: E1112 22:31:27.977645 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86fc4af8-f7fc-4739-a290-c0210a79f843" containerName="apply-sysctl-overwrites" Nov 12 22:31:27.980628 kubelet[2548]: E1112 22:31:27.977676 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4ee1684-8137-49a2-bfdd-a5b45b5744e5" containerName="cilium-operator" Nov 12 22:31:27.980628 kubelet[2548]: E1112 22:31:27.977684 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86fc4af8-f7fc-4739-a290-c0210a79f843" containerName="mount-bpf-fs" Nov 12 22:31:27.980628 kubelet[2548]: E1112 22:31:27.977699 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86fc4af8-f7fc-4739-a290-c0210a79f843" containerName="clean-cilium-state" Nov 12 22:31:27.980628 kubelet[2548]: E1112 22:31:27.977706 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86fc4af8-f7fc-4739-a290-c0210a79f843" containerName="cilium-agent" Nov 12 22:31:27.980628 kubelet[2548]: E1112 22:31:27.977720 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86fc4af8-f7fc-4739-a290-c0210a79f843" containerName="mount-cgroup" Nov 12 22:31:27.980628 kubelet[2548]: I1112 22:31:27.977744 2548 memory_manager.go:354] "RemoveStaleState removing state" podUID="86fc4af8-f7fc-4739-a290-c0210a79f843" containerName="cilium-agent" Nov 12 22:31:27.980628 kubelet[2548]: I1112 22:31:27.977750 
2548 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ee1684-8137-49a2-bfdd-a5b45b5744e5" containerName="cilium-operator" Nov 12 22:31:27.977675 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 22:31:27.977823 systemd[1]: session-24.scope: Consumed 1.010s CPU time. Nov 12 22:31:27.981058 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit. Nov 12 22:31:27.991354 systemd[1]: Started sshd@24-10.0.0.65:22-10.0.0.1:34620.service - OpenSSH per-connection server daemon (10.0.0.1:34620). Nov 12 22:31:27.993881 systemd-logind[1446]: Removed session 24. Nov 12 22:31:28.001471 systemd[1]: Created slice kubepods-burstable-podaadf8896_1b29_4e7f_8308_fa004e796c4f.slice - libcontainer container kubepods-burstable-podaadf8896_1b29_4e7f_8308_fa004e796c4f.slice. Nov 12 22:31:28.037230 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 34620 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:31:28.038359 sshd-session[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:31:28.041880 systemd-logind[1446]: New session 25 of user core. Nov 12 22:31:28.051694 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 12 22:31:28.052507 kubelet[2548]: I1112 22:31:28.052177 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aadf8896-1b29-4e7f-8308-fa004e796c4f-bpf-maps\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052507 kubelet[2548]: I1112 22:31:28.052210 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aadf8896-1b29-4e7f-8308-fa004e796c4f-cni-path\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052507 kubelet[2548]: I1112 22:31:28.052237 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aadf8896-1b29-4e7f-8308-fa004e796c4f-xtables-lock\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052507 kubelet[2548]: I1112 22:31:28.052258 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7blsr\" (UniqueName: \"kubernetes.io/projected/aadf8896-1b29-4e7f-8308-fa004e796c4f-kube-api-access-7blsr\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052507 kubelet[2548]: I1112 22:31:28.052286 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aadf8896-1b29-4e7f-8308-fa004e796c4f-cilium-run\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052507 kubelet[2548]: I1112 22:31:28.052331 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aadf8896-1b29-4e7f-8308-fa004e796c4f-hostproc\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052780 kubelet[2548]: I1112 22:31:28.052395 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aadf8896-1b29-4e7f-8308-fa004e796c4f-hubble-tls\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052780 kubelet[2548]: I1112 22:31:28.052430 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aadf8896-1b29-4e7f-8308-fa004e796c4f-cilium-cgroup\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052780 kubelet[2548]: I1112 22:31:28.052450 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aadf8896-1b29-4e7f-8308-fa004e796c4f-lib-modules\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052780 kubelet[2548]: I1112 22:31:28.052468 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aadf8896-1b29-4e7f-8308-fa004e796c4f-cilium-config-path\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052780 kubelet[2548]: I1112 22:31:28.052491 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aadf8896-1b29-4e7f-8308-fa004e796c4f-etc-cni-netd\") pod 
\"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052780 kubelet[2548]: I1112 22:31:28.052508 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aadf8896-1b29-4e7f-8308-fa004e796c4f-clustermesh-secrets\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052904 kubelet[2548]: I1112 22:31:28.052526 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aadf8896-1b29-4e7f-8308-fa004e796c4f-cilium-ipsec-secrets\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052904 kubelet[2548]: I1112 22:31:28.052565 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aadf8896-1b29-4e7f-8308-fa004e796c4f-host-proc-sys-kernel\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.052904 kubelet[2548]: I1112 22:31:28.052598 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aadf8896-1b29-4e7f-8308-fa004e796c4f-host-proc-sys-net\") pod \"cilium-9r75r\" (UID: \"aadf8896-1b29-4e7f-8308-fa004e796c4f\") " pod="kube-system/cilium-9r75r" Nov 12 22:31:28.101065 sshd[4384]: Connection closed by 10.0.0.1 port 34620 Nov 12 22:31:28.102723 sshd-session[4382]: pam_unix(sshd:session): session closed for user core Nov 12 22:31:28.112128 systemd[1]: sshd@24-10.0.0.65:22-10.0.0.1:34620.service: Deactivated successfully. Nov 12 22:31:28.114005 systemd[1]: session-25.scope: Deactivated successfully. 
Nov 12 22:31:28.115388 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit. Nov 12 22:31:28.124277 systemd[1]: Started sshd@25-10.0.0.65:22-10.0.0.1:34630.service - OpenSSH per-connection server daemon (10.0.0.1:34630). Nov 12 22:31:28.125440 systemd-logind[1446]: Removed session 25. Nov 12 22:31:28.162411 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 34630 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:31:28.161601 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:31:28.169033 systemd-logind[1446]: New session 26 of user core. Nov 12 22:31:28.178684 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 12 22:31:28.307042 kubelet[2548]: E1112 22:31:28.307008 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:28.307672 containerd[1459]: time="2024-11-12T22:31:28.307639438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9r75r,Uid:aadf8896-1b29-4e7f-8308-fa004e796c4f,Namespace:kube-system,Attempt:0,}" Nov 12 22:31:28.333516 containerd[1459]: time="2024-11-12T22:31:28.333346523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:31:28.333516 containerd[1459]: time="2024-11-12T22:31:28.333426124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:31:28.333516 containerd[1459]: time="2024-11-12T22:31:28.333438644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:31:28.333764 containerd[1459]: time="2024-11-12T22:31:28.333588405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:31:28.350727 systemd[1]: Started cri-containerd-28a1ef90bef04acad14bf4c9a63b73237f9e4c98e83a8774eda075115aa8106d.scope - libcontainer container 28a1ef90bef04acad14bf4c9a63b73237f9e4c98e83a8774eda075115aa8106d. Nov 12 22:31:28.370394 containerd[1459]: time="2024-11-12T22:31:28.370285259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9r75r,Uid:aadf8896-1b29-4e7f-8308-fa004e796c4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"28a1ef90bef04acad14bf4c9a63b73237f9e4c98e83a8774eda075115aa8106d\"" Nov 12 22:31:28.370979 kubelet[2548]: E1112 22:31:28.370945 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:28.374541 containerd[1459]: time="2024-11-12T22:31:28.374446692Z" level=info msg="CreateContainer within sandbox \"28a1ef90bef04acad14bf4c9a63b73237f9e4c98e83a8774eda075115aa8106d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 22:31:28.386630 containerd[1459]: time="2024-11-12T22:31:28.386581509Z" level=info msg="CreateContainer within sandbox \"28a1ef90bef04acad14bf4c9a63b73237f9e4c98e83a8774eda075115aa8106d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c1c3750bfed3482ec40fe6daf9a25403d0f17fc559b4077290ee63a23017a549\"" Nov 12 22:31:28.387119 containerd[1459]: time="2024-11-12T22:31:28.387088833Z" level=info msg="StartContainer for \"c1c3750bfed3482ec40fe6daf9a25403d0f17fc559b4077290ee63a23017a549\"" Nov 12 22:31:28.432731 systemd[1]: Started cri-containerd-c1c3750bfed3482ec40fe6daf9a25403d0f17fc559b4077290ee63a23017a549.scope - libcontainer container c1c3750bfed3482ec40fe6daf9a25403d0f17fc559b4077290ee63a23017a549. 
Nov 12 22:31:28.456583 containerd[1459]: time="2024-11-12T22:31:28.456526228Z" level=info msg="StartContainer for \"c1c3750bfed3482ec40fe6daf9a25403d0f17fc559b4077290ee63a23017a549\" returns successfully" Nov 12 22:31:28.481633 systemd[1]: cri-containerd-c1c3750bfed3482ec40fe6daf9a25403d0f17fc559b4077290ee63a23017a549.scope: Deactivated successfully. Nov 12 22:31:28.510377 containerd[1459]: time="2024-11-12T22:31:28.510189417Z" level=info msg="shim disconnected" id=c1c3750bfed3482ec40fe6daf9a25403d0f17fc559b4077290ee63a23017a549 namespace=k8s.io Nov 12 22:31:28.510377 containerd[1459]: time="2024-11-12T22:31:28.510268938Z" level=warning msg="cleaning up after shim disconnected" id=c1c3750bfed3482ec40fe6daf9a25403d0f17fc559b4077290ee63a23017a549 namespace=k8s.io Nov 12 22:31:28.510377 containerd[1459]: time="2024-11-12T22:31:28.510277578Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:31:29.038450 kubelet[2548]: E1112 22:31:29.038410 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:29.040402 containerd[1459]: time="2024-11-12T22:31:29.040356647Z" level=info msg="CreateContainer within sandbox \"28a1ef90bef04acad14bf4c9a63b73237f9e4c98e83a8774eda075115aa8106d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 22:31:29.050021 containerd[1459]: time="2024-11-12T22:31:29.049954922Z" level=info msg="CreateContainer within sandbox \"28a1ef90bef04acad14bf4c9a63b73237f9e4c98e83a8774eda075115aa8106d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d39ff706959e16c965d5a1d36dee4a10b0a15c124f76a51c5acabf1b73a96529\"" Nov 12 22:31:29.050889 containerd[1459]: time="2024-11-12T22:31:29.050483206Z" level=info msg="StartContainer for \"d39ff706959e16c965d5a1d36dee4a10b0a15c124f76a51c5acabf1b73a96529\"" Nov 12 22:31:29.073732 systemd[1]: Started 
cri-containerd-d39ff706959e16c965d5a1d36dee4a10b0a15c124f76a51c5acabf1b73a96529.scope - libcontainer container d39ff706959e16c965d5a1d36dee4a10b0a15c124f76a51c5acabf1b73a96529. Nov 12 22:31:29.091955 containerd[1459]: time="2024-11-12T22:31:29.091915408Z" level=info msg="StartContainer for \"d39ff706959e16c965d5a1d36dee4a10b0a15c124f76a51c5acabf1b73a96529\" returns successfully" Nov 12 22:31:29.100345 systemd[1]: cri-containerd-d39ff706959e16c965d5a1d36dee4a10b0a15c124f76a51c5acabf1b73a96529.scope: Deactivated successfully. Nov 12 22:31:29.126180 containerd[1459]: time="2024-11-12T22:31:29.126032754Z" level=info msg="shim disconnected" id=d39ff706959e16c965d5a1d36dee4a10b0a15c124f76a51c5acabf1b73a96529 namespace=k8s.io Nov 12 22:31:29.126180 containerd[1459]: time="2024-11-12T22:31:29.126084194Z" level=warning msg="cleaning up after shim disconnected" id=d39ff706959e16c965d5a1d36dee4a10b0a15c124f76a51c5acabf1b73a96529 namespace=k8s.io Nov 12 22:31:29.126180 containerd[1459]: time="2024-11-12T22:31:29.126091874Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:31:29.856046 kubelet[2548]: E1112 22:31:29.855950 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:30.043536 kubelet[2548]: E1112 22:31:30.043396 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:30.047173 containerd[1459]: time="2024-11-12T22:31:30.047103152Z" level=info msg="CreateContainer within sandbox \"28a1ef90bef04acad14bf4c9a63b73237f9e4c98e83a8774eda075115aa8106d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 22:31:30.070679 containerd[1459]: time="2024-11-12T22:31:30.070632170Z" level=info msg="CreateContainer within sandbox 
\"28a1ef90bef04acad14bf4c9a63b73237f9e4c98e83a8774eda075115aa8106d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"91a39a450d7a6de45b121c6af0dd444997f56ac5fd745136e8a1d504cf7dc6d7\"" Nov 12 22:31:30.071114 containerd[1459]: time="2024-11-12T22:31:30.071087373Z" level=info msg="StartContainer for \"91a39a450d7a6de45b121c6af0dd444997f56ac5fd745136e8a1d504cf7dc6d7\"" Nov 12 22:31:30.101730 systemd[1]: Started cri-containerd-91a39a450d7a6de45b121c6af0dd444997f56ac5fd745136e8a1d504cf7dc6d7.scope - libcontainer container 91a39a450d7a6de45b121c6af0dd444997f56ac5fd745136e8a1d504cf7dc6d7. Nov 12 22:31:30.128434 containerd[1459]: time="2024-11-12T22:31:30.128109405Z" level=info msg="StartContainer for \"91a39a450d7a6de45b121c6af0dd444997f56ac5fd745136e8a1d504cf7dc6d7\" returns successfully" Nov 12 22:31:30.129954 systemd[1]: cri-containerd-91a39a450d7a6de45b121c6af0dd444997f56ac5fd745136e8a1d504cf7dc6d7.scope: Deactivated successfully. Nov 12 22:31:30.150584 containerd[1459]: time="2024-11-12T22:31:30.150520895Z" level=info msg="shim disconnected" id=91a39a450d7a6de45b121c6af0dd444997f56ac5fd745136e8a1d504cf7dc6d7 namespace=k8s.io Nov 12 22:31:30.150584 containerd[1459]: time="2024-11-12T22:31:30.150581016Z" level=warning msg="cleaning up after shim disconnected" id=91a39a450d7a6de45b121c6af0dd444997f56ac5fd745136e8a1d504cf7dc6d7 namespace=k8s.io Nov 12 22:31:30.150756 containerd[1459]: time="2024-11-12T22:31:30.150589776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:31:30.158059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91a39a450d7a6de45b121c6af0dd444997f56ac5fd745136e8a1d504cf7dc6d7-rootfs.mount: Deactivated successfully. 
Nov 12 22:31:30.856785 kubelet[2548]: E1112 22:31:30.856742 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:31.046486 kubelet[2548]: E1112 22:31:31.046451 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:31.049840 containerd[1459]: time="2024-11-12T22:31:31.049802977Z" level=info msg="CreateContainer within sandbox \"28a1ef90bef04acad14bf4c9a63b73237f9e4c98e83a8774eda075115aa8106d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 22:31:31.062146 containerd[1459]: time="2024-11-12T22:31:31.062085068Z" level=info msg="CreateContainer within sandbox \"28a1ef90bef04acad14bf4c9a63b73237f9e4c98e83a8774eda075115aa8106d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2a0f15ed9eac902bc9ff6d435deaf6246c161fcb4059268f3c91524656604196\"" Nov 12 22:31:31.062246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3127942226.mount: Deactivated successfully. Nov 12 22:31:31.064447 containerd[1459]: time="2024-11-12T22:31:31.063810241Z" level=info msg="StartContainer for \"2a0f15ed9eac902bc9ff6d435deaf6246c161fcb4059268f3c91524656604196\"" Nov 12 22:31:31.089698 systemd[1]: Started cri-containerd-2a0f15ed9eac902bc9ff6d435deaf6246c161fcb4059268f3c91524656604196.scope - libcontainer container 2a0f15ed9eac902bc9ff6d435deaf6246c161fcb4059268f3c91524656604196. Nov 12 22:31:31.107842 systemd[1]: cri-containerd-2a0f15ed9eac902bc9ff6d435deaf6246c161fcb4059268f3c91524656604196.scope: Deactivated successfully. 
Nov 12 22:31:31.112636 containerd[1459]: time="2024-11-12T22:31:31.111519553Z" level=info msg="StartContainer for \"2a0f15ed9eac902bc9ff6d435deaf6246c161fcb4059268f3c91524656604196\" returns successfully" Nov 12 22:31:31.114518 containerd[1459]: time="2024-11-12T22:31:31.108495250Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaadf8896_1b29_4e7f_8308_fa004e796c4f.slice/cri-containerd-2a0f15ed9eac902bc9ff6d435deaf6246c161fcb4059268f3c91524656604196.scope/memory.events\": no such file or directory" Nov 12 22:31:31.132038 containerd[1459]: time="2024-11-12T22:31:31.131988504Z" level=info msg="shim disconnected" id=2a0f15ed9eac902bc9ff6d435deaf6246c161fcb4059268f3c91524656604196 namespace=k8s.io Nov 12 22:31:31.132038 containerd[1459]: time="2024-11-12T22:31:31.132035104Z" level=warning msg="cleaning up after shim disconnected" id=2a0f15ed9eac902bc9ff6d435deaf6246c161fcb4059268f3c91524656604196 namespace=k8s.io Nov 12 22:31:31.132038 containerd[1459]: time="2024-11-12T22:31:31.132042984Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:31:31.158139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a0f15ed9eac902bc9ff6d435deaf6246c161fcb4059268f3c91524656604196-rootfs.mount: Deactivated successfully. 
Nov 12 22:31:32.057039 kubelet[2548]: E1112 22:31:32.057006 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:32.058976 containerd[1459]: time="2024-11-12T22:31:32.058944809Z" level=info msg="CreateContainer within sandbox \"28a1ef90bef04acad14bf4c9a63b73237f9e4c98e83a8774eda075115aa8106d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 22:31:32.070792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779878465.mount: Deactivated successfully. Nov 12 22:31:32.072318 containerd[1459]: time="2024-11-12T22:31:32.072266464Z" level=info msg="CreateContainer within sandbox \"28a1ef90bef04acad14bf4c9a63b73237f9e4c98e83a8774eda075115aa8106d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"90121a6c478b8fbffeab3c95f7b333b53eeb12a0c9907c64dba5b39dce4a873a\"" Nov 12 22:31:32.073733 containerd[1459]: time="2024-11-12T22:31:32.073689634Z" level=info msg="StartContainer for \"90121a6c478b8fbffeab3c95f7b333b53eeb12a0c9907c64dba5b39dce4a873a\"" Nov 12 22:31:32.099749 systemd[1]: Started cri-containerd-90121a6c478b8fbffeab3c95f7b333b53eeb12a0c9907c64dba5b39dce4a873a.scope - libcontainer container 90121a6c478b8fbffeab3c95f7b333b53eeb12a0c9907c64dba5b39dce4a873a. 
Nov 12 22:31:32.121989 containerd[1459]: time="2024-11-12T22:31:32.121954421Z" level=info msg="StartContainer for \"90121a6c478b8fbffeab3c95f7b333b53eeb12a0c9907c64dba5b39dce4a873a\" returns successfully" Nov 12 22:31:32.381574 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Nov 12 22:31:33.065582 kubelet[2548]: E1112 22:31:33.065509 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:33.080252 kubelet[2548]: I1112 22:31:33.080175 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9r75r" podStartSLOduration=6.080159407 podStartE2EDuration="6.080159407s" podCreationTimestamp="2024-11-12 22:31:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:31:33.0791846 +0000 UTC m=+80.298609884" watchObservedRunningTime="2024-11-12 22:31:33.080159407 +0000 UTC m=+80.299584651" Nov 12 22:31:34.308614 kubelet[2548]: E1112 22:31:34.308514 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:35.098759 systemd-networkd[1393]: lxc_health: Link UP Nov 12 22:31:35.110665 systemd-networkd[1393]: lxc_health: Gained carrier Nov 12 22:31:35.855953 kubelet[2548]: E1112 22:31:35.855916 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:36.309565 kubelet[2548]: E1112 22:31:36.309518 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:36.990680 systemd-networkd[1393]: 
lxc_health: Gained IPv6LL Nov 12 22:31:37.072867 kubelet[2548]: E1112 22:31:37.072813 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:38.074390 kubelet[2548]: E1112 22:31:38.074343 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:31:40.967878 sshd[4396]: Connection closed by 10.0.0.1 port 34630 Nov 12 22:31:40.968848 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Nov 12 22:31:40.972404 systemd[1]: sshd@25-10.0.0.65:22-10.0.0.1:34630.service: Deactivated successfully. Nov 12 22:31:40.975235 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 22:31:40.975852 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit. Nov 12 22:31:40.976778 systemd-logind[1446]: Removed session 26.