Nov 12 22:35:57.918417 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 12 22:35:57.918438 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Tue Nov 12 21:07:55 -00 2024 Nov 12 22:35:57.918448 kernel: KASLR enabled Nov 12 22:35:57.918454 kernel: efi: EFI v2.7 by EDK II Nov 12 22:35:57.918459 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 Nov 12 22:35:57.918465 kernel: random: crng init done Nov 12 22:35:57.918472 kernel: secureboot: Secure boot disabled Nov 12 22:35:57.918478 kernel: ACPI: Early table checksum verification disabled Nov 12 22:35:57.918484 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Nov 12 22:35:57.918491 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 12 22:35:57.918497 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:35:57.918503 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:35:57.918509 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:35:57.918515 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:35:57.918523 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:35:57.918530 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:35:57.918537 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:35:57.918543 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:35:57.918549 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:35:57.918555 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 12 22:35:57.918561 kernel: NUMA: Failed to initialise from firmware Nov 12 22:35:57.918568 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 12 22:35:57.918574 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Nov 12 22:35:57.918580 kernel: Zone ranges: Nov 12 22:35:57.918586 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 12 22:35:57.918594 kernel: DMA32 empty Nov 12 22:35:57.918600 kernel: Normal empty Nov 12 22:35:57.918606 kernel: Movable zone start for each node Nov 12 22:35:57.918612 kernel: Early memory node ranges Nov 12 22:35:57.918618 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Nov 12 22:35:57.918624 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Nov 12 22:35:57.918631 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Nov 12 22:35:57.918637 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Nov 12 22:35:57.918643 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Nov 12 22:35:57.918649 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Nov 12 22:35:57.918655 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Nov 12 22:35:57.918661 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 12 22:35:57.918669 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 12 22:35:57.918675 kernel: psci: probing for conduit method from ACPI. Nov 12 22:35:57.918681 kernel: psci: PSCIv1.1 detected in firmware. 
Nov 12 22:35:57.918690 kernel: psci: Using standard PSCI v0.2 function IDs Nov 12 22:35:57.918696 kernel: psci: Trusted OS migration not required Nov 12 22:35:57.918703 kernel: psci: SMC Calling Convention v1.1 Nov 12 22:35:57.918711 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 12 22:35:57.918718 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Nov 12 22:35:57.918724 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Nov 12 22:35:57.918731 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 12 22:35:57.918738 kernel: Detected PIPT I-cache on CPU0 Nov 12 22:35:57.918744 kernel: CPU features: detected: GIC system register CPU interface Nov 12 22:35:57.918751 kernel: CPU features: detected: Hardware dirty bit management Nov 12 22:35:57.918757 kernel: CPU features: detected: Spectre-v4 Nov 12 22:35:57.918764 kernel: CPU features: detected: Spectre-BHB Nov 12 22:35:57.918770 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 12 22:35:57.918778 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 12 22:35:57.918785 kernel: CPU features: detected: ARM erratum 1418040 Nov 12 22:35:57.918791 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 12 22:35:57.918798 kernel: alternatives: applying boot alternatives Nov 12 22:35:57.918805 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=054b3f497d0699ec5dd6f755e221ed9e2d4f35054d20dd4fb5abe997efb88cfb Nov 12 22:35:57.918812 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 22:35:57.918819 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 22:35:57.918826 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 22:35:57.918833 kernel: Fallback order for Node 0: 0 Nov 12 22:35:57.918839 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Nov 12 22:35:57.918846 kernel: Policy zone: DMA Nov 12 22:35:57.918854 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 22:35:57.918860 kernel: software IO TLB: area num 4. Nov 12 22:35:57.918867 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Nov 12 22:35:57.918874 kernel: Memory: 2386320K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185968K reserved, 0K cma-reserved) Nov 12 22:35:57.918881 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 12 22:35:57.918887 kernel: trace event string verifier disabled Nov 12 22:35:57.918894 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 22:35:57.918901 kernel: rcu: RCU event tracing is enabled. Nov 12 22:35:57.918908 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 12 22:35:57.918914 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 22:35:57.918921 kernel: Tracing variant of Tasks RCU enabled. Nov 12 22:35:57.918928 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 12 22:35:57.918936 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 12 22:35:57.918942 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 12 22:35:57.918949 kernel: GICv3: 256 SPIs implemented Nov 12 22:35:57.918955 kernel: GICv3: 0 Extended SPIs implemented Nov 12 22:35:57.918974 kernel: Root IRQ handler: gic_handle_irq Nov 12 22:35:57.918981 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 12 22:35:57.918988 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 12 22:35:57.918994 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 12 22:35:57.919001 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Nov 12 22:35:57.919008 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Nov 12 22:35:57.919015 kernel: GICv3: using LPI property table @0x00000000400f0000 Nov 12 22:35:57.919023 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Nov 12 22:35:57.919029 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 12 22:35:57.919036 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 22:35:57.919042 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 12 22:35:57.919049 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 12 22:35:57.919056 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 12 22:35:57.919063 kernel: arm-pv: using stolen time PV Nov 12 22:35:57.919069 kernel: Console: colour dummy device 80x25 Nov 12 22:35:57.919076 kernel: ACPI: Core revision 20230628 Nov 12 22:35:57.919083 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 12 22:35:57.919090 kernel: pid_max: default: 32768 minimum: 301 Nov 12 22:35:57.919098 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 22:35:57.919105 kernel: landlock: Up and running. Nov 12 22:35:57.919111 kernel: SELinux: Initializing. Nov 12 22:35:57.919118 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 22:35:57.919125 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 22:35:57.919132 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 22:35:57.919139 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 22:35:57.919146 kernel: rcu: Hierarchical SRCU implementation. Nov 12 22:35:57.919153 kernel: rcu: Max phase no-delay instances is 400. Nov 12 22:35:57.919161 kernel: Platform MSI: ITS@0x8080000 domain created Nov 12 22:35:57.919167 kernel: PCI/MSI: ITS@0x8080000 domain created Nov 12 22:35:57.919174 kernel: Remapping and enabling EFI services. Nov 12 22:35:57.919181 kernel: smp: Bringing up secondary CPUs ... 
Nov 12 22:35:57.919188 kernel: Detected PIPT I-cache on CPU1 Nov 12 22:35:57.919195 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 12 22:35:57.919202 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Nov 12 22:35:57.919208 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 22:35:57.919215 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 12 22:35:57.919223 kernel: Detected PIPT I-cache on CPU2 Nov 12 22:35:57.919230 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 12 22:35:57.919241 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Nov 12 22:35:57.919250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 22:35:57.919257 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 12 22:35:57.919264 kernel: Detected PIPT I-cache on CPU3 Nov 12 22:35:57.919271 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 12 22:35:57.919278 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Nov 12 22:35:57.919286 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 22:35:57.919294 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 12 22:35:57.919302 kernel: smp: Brought up 1 node, 4 CPUs Nov 12 22:35:57.919309 kernel: SMP: Total of 4 processors activated. Nov 12 22:35:57.919316 kernel: CPU features: detected: 32-bit EL0 Support Nov 12 22:35:57.919324 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 12 22:35:57.919336 kernel: CPU features: detected: Common not Private translations Nov 12 22:35:57.919344 kernel: CPU features: detected: CRC32 instructions Nov 12 22:35:57.919352 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 12 22:35:57.919361 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 12 22:35:57.919368 kernel: CPU features: detected: LSE atomic instructions Nov 12 22:35:57.919375 kernel: CPU features: detected: Privileged Access Never Nov 12 22:35:57.919382 kernel: CPU features: detected: RAS Extension Support Nov 12 22:35:57.919390 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 12 22:35:57.919397 kernel: CPU: All CPU(s) started at EL1 Nov 12 22:35:57.919404 kernel: alternatives: applying system-wide alternatives Nov 12 22:35:57.919411 kernel: devtmpfs: initialized Nov 12 22:35:57.919418 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 22:35:57.919427 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 12 22:35:57.919434 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 22:35:57.919441 kernel: SMBIOS 3.0.0 present. 
Nov 12 22:35:57.919448 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Nov 12 22:35:57.919455 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 22:35:57.919463 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 12 22:35:57.919470 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 12 22:35:57.919477 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 12 22:35:57.919485 kernel: audit: initializing netlink subsys (disabled) Nov 12 22:35:57.919493 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Nov 12 22:35:57.919500 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 22:35:57.919507 kernel: cpuidle: using governor menu Nov 12 22:35:57.919515 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 12 22:35:57.919522 kernel: ASID allocator initialised with 32768 entries Nov 12 22:35:57.919529 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 22:35:57.919536 kernel: Serial: AMBA PL011 UART driver Nov 12 22:35:57.919543 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 12 22:35:57.919551 kernel: Modules: 0 pages in range for non-PLT usage Nov 12 22:35:57.919559 kernel: Modules: 508960 pages in range for PLT usage Nov 12 22:35:57.919566 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 22:35:57.919573 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 22:35:57.919580 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 12 22:35:57.919588 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 12 22:35:57.919595 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 22:35:57.919602 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 22:35:57.919609 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 12 22:35:57.919616 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 12 22:35:57.919625 kernel: ACPI: Added _OSI(Module Device) Nov 12 22:35:57.919632 kernel: ACPI: Added _OSI(Processor Device) Nov 12 22:35:57.919639 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 22:35:57.919646 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 22:35:57.919653 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 22:35:57.919660 kernel: ACPI: Interpreter enabled Nov 12 22:35:57.919667 kernel: ACPI: Using GIC for interrupt routing Nov 12 22:35:57.919675 kernel: ACPI: MCFG table detected, 1 entries Nov 12 22:35:57.919682 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 12 22:35:57.919689 kernel: printk: console [ttyAMA0] enabled Nov 12 22:35:57.919697 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 12 22:35:57.919823 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 22:35:57.919893 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 12 22:35:57.919957 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 12 22:35:57.920065 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 12 22:35:57.920127 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 12 22:35:57.920137 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 12 22:35:57.920148 
kernel: PCI host bridge to bus 0000:00 Nov 12 22:35:57.920217 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 12 22:35:57.920275 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 12 22:35:57.920340 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 12 22:35:57.920406 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 12 22:35:57.920485 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Nov 12 22:35:57.920565 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Nov 12 22:35:57.920635 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Nov 12 22:35:57.920699 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Nov 12 22:35:57.920762 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Nov 12 22:35:57.920826 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Nov 12 22:35:57.920890 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Nov 12 22:35:57.920954 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Nov 12 22:35:57.921030 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 12 22:35:57.921088 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 12 22:35:57.921145 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 12 22:35:57.921155 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 12 22:35:57.921162 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 12 22:35:57.921169 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 12 22:35:57.921177 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 12 22:35:57.921184 kernel: iommu: Default domain type: Translated Nov 12 22:35:57.921193 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 12 22:35:57.921200 kernel: efivars: Registered efivars operations Nov 12 22:35:57.921208 kernel: vgaarb: loaded Nov 12 22:35:57.921215 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 12 22:35:57.921222 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 22:35:57.921230 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 22:35:57.921237 kernel: pnp: PnP ACPI init Nov 12 22:35:57.921311 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 12 22:35:57.921323 kernel: pnp: PnP ACPI: found 1 devices Nov 12 22:35:57.921330 kernel: NET: Registered PF_INET protocol family Nov 12 22:35:57.921345 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 22:35:57.921352 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 12 22:35:57.921360 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 22:35:57.921367 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 12 22:35:57.921375 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 12 22:35:57.921382 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 12 22:35:57.921389 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 22:35:57.921399 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 22:35:57.921406 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 22:35:57.921414 kernel: PCI: CLS 0 bytes, default 64 Nov 12 22:35:57.921421 kernel: kvm [1]: HYP mode not available 
Nov 12 22:35:57.921429 kernel: Initialise system trusted keyrings Nov 12 22:35:57.921436 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 12 22:35:57.921443 kernel: Key type asymmetric registered Nov 12 22:35:57.921451 kernel: Asymmetric key parser 'x509' registered Nov 12 22:35:57.921458 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 12 22:35:57.921466 kernel: io scheduler mq-deadline registered Nov 12 22:35:57.921473 kernel: io scheduler kyber registered Nov 12 22:35:57.921480 kernel: io scheduler bfq registered Nov 12 22:35:57.921488 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 12 22:35:57.921495 kernel: ACPI: button: Power Button [PWRB] Nov 12 22:35:57.921502 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 12 22:35:57.921574 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 12 22:35:57.921584 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 22:35:57.921592 kernel: thunder_xcv, ver 1.0 Nov 12 22:35:57.921600 kernel: thunder_bgx, ver 1.0 Nov 12 22:35:57.921608 kernel: nicpf, ver 1.0 Nov 12 22:35:57.921615 kernel: nicvf, ver 1.0 Nov 12 22:35:57.921687 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 12 22:35:57.921749 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-11-12T22:35:57 UTC (1731450957) Nov 12 22:35:57.921760 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 12 22:35:57.921767 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Nov 12 22:35:57.921774 kernel: watchdog: Delayed init of the lockup detector failed: -19 Nov 12 22:35:57.921783 kernel: watchdog: Hard watchdog permanently disabled Nov 12 22:35:57.921791 kernel: NET: Registered PF_INET6 protocol family Nov 12 22:35:57.921798 kernel: Segment Routing with IPv6 Nov 12 22:35:57.921805 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 22:35:57.921813 kernel: NET: Registered PF_PACKET protocol family Nov 12 22:35:57.921820 kernel: Key type dns_resolver registered Nov 12 22:35:57.921827 kernel: registered taskstats version 1 Nov 12 22:35:57.921834 kernel: Loading compiled-in X.509 certificates Nov 12 22:35:57.921841 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 27dd0d090d7a0971a24582c9198f7e80123ea69f' Nov 12 22:35:57.921849 kernel: Key type .fscrypt registered Nov 12 22:35:57.921857 kernel: Key type fscrypt-provisioning registered Nov 12 22:35:57.921864 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 12 22:35:57.921871 kernel: ima: Allocated hash algorithm: sha1 Nov 12 22:35:57.921878 kernel: ima: No architecture policies found Nov 12 22:35:57.921886 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 12 22:35:57.921893 kernel: clk: Disabling unused clocks Nov 12 22:35:57.921900 kernel: Freeing unused kernel memory: 39680K Nov 12 22:35:57.921907 kernel: Run /init as init process Nov 12 22:35:57.921915 kernel: with arguments: Nov 12 22:35:57.921922 kernel: /init Nov 12 22:35:57.921929 kernel: with environment: Nov 12 22:35:57.921937 kernel: HOME=/ Nov 12 22:35:57.921944 kernel: TERM=linux Nov 12 22:35:57.921951 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 22:35:57.921960 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 22:35:57.921987 systemd[1]: Detected virtualization kvm. Nov 12 22:35:57.921997 systemd[1]: Detected architecture arm64. Nov 12 22:35:57.922004 systemd[1]: Running in initrd. Nov 12 22:35:57.922012 systemd[1]: No hostname configured, using default hostname. Nov 12 22:35:57.922019 systemd[1]: Hostname set to . Nov 12 22:35:57.922027 systemd[1]: Initializing machine ID from VM UUID. Nov 12 22:35:57.922035 systemd[1]: Queued start job for default target initrd.target. Nov 12 22:35:57.922055 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:35:57.922063 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:35:57.922074 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 22:35:57.922082 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 22:35:57.922090 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 22:35:57.922098 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 22:35:57.922107 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 22:35:57.922115 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 22:35:57.922123 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:35:57.922133 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:35:57.922141 systemd[1]: Reached target paths.target - Path Units. Nov 12 22:35:57.922149 systemd[1]: Reached target slices.target - Slice Units. Nov 12 22:35:57.922157 systemd[1]: Reached target swap.target - Swaps. Nov 12 22:35:57.922165 systemd[1]: Reached target timers.target - Timer Units. Nov 12 22:35:57.922173 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 22:35:57.922180 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 22:35:57.922188 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 22:35:57.922197 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 22:35:57.922205 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 12 22:35:57.922213 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 22:35:57.922221 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:35:57.922229 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 22:35:57.922237 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 22:35:57.922244 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 22:35:57.922252 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 22:35:57.922259 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 22:35:57.922268 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 22:35:57.922276 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 22:35:57.922284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:35:57.922291 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 22:35:57.922299 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:35:57.922307 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 22:35:57.922317 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 22:35:57.922324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:35:57.922355 systemd-journald[239]: Collecting audit messages is disabled. Nov 12 22:35:57.922375 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:35:57.922384 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:35:57.922393 systemd-journald[239]: Journal started Nov 12 22:35:57.922410 systemd-journald[239]: Runtime Journal (/run/log/journal/9eabaf23f8a8434290a6e08a3ccc2f26) is 5.9M, max 47.3M, 41.4M free. Nov 12 22:35:57.913569 systemd-modules-load[240]: Inserted module 'overlay' Nov 12 22:35:57.927993 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 22:35:57.928019 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 22:35:57.931495 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 22:35:57.932203 kernel: Bridge firewalling registered Nov 12 22:35:57.931989 systemd-modules-load[240]: Inserted module 'br_netfilter' Nov 12 22:35:57.934241 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 22:35:57.937011 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:35:57.943088 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:35:57.944551 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 22:35:57.948655 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:35:57.950577 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 22:35:57.951984 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:35:57.958900 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:35:57.961294 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 12 22:35:57.965969 dracut-cmdline[273]: dracut-dracut-053 Nov 12 22:35:57.969140 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=054b3f497d0699ec5dd6f755e221ed9e2d4f35054d20dd4fb5abe997efb88cfb Nov 12 22:35:57.987150 systemd-resolved[280]: Positive Trust Anchors: Nov 12 22:35:57.987221 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 22:35:57.987256 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 22:35:57.991876 systemd-resolved[280]: Defaulting to hostname 'linux'. Nov 12 22:35:57.992921 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 22:35:57.995699 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:35:58.035990 kernel: SCSI subsystem initialized Nov 12 22:35:58.039981 kernel: Loading iSCSI transport class v2.0-870. Nov 12 22:35:58.049004 kernel: iscsi: registered transport (tcp) Nov 12 22:35:58.060190 kernel: iscsi: registered transport (qla4xxx) Nov 12 22:35:58.060208 kernel: QLogic iSCSI HBA Driver Nov 12 22:35:58.100529 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 22:35:58.114177 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 22:35:58.130001 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 12 22:35:58.130108 kernel: device-mapper: uevent: version 1.0.3 Nov 12 22:35:58.130131 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 22:35:58.174992 kernel: raid6: neonx8 gen() 15776 MB/s Nov 12 22:35:58.191988 kernel: raid6: neonx4 gen() 15653 MB/s Nov 12 22:35:58.208995 kernel: raid6: neonx2 gen() 13236 MB/s Nov 12 22:35:58.225995 kernel: raid6: neonx1 gen() 10505 MB/s Nov 12 22:35:58.242987 kernel: raid6: int64x8 gen() 6952 MB/s Nov 12 22:35:58.259988 kernel: raid6: int64x4 gen() 7331 MB/s Nov 12 22:35:58.276988 kernel: raid6: int64x2 gen() 6124 MB/s Nov 12 22:35:58.294108 kernel: raid6: int64x1 gen() 5053 MB/s Nov 12 22:35:58.294123 kernel: raid6: using algorithm neonx8 gen() 15776 MB/s Nov 12 22:35:58.312002 kernel: raid6: .... xor() 11972 MB/s, rmw enabled Nov 12 22:35:58.312033 kernel: raid6: using neon recovery algorithm Nov 12 22:35:58.317399 kernel: xor: measuring software checksum speed Nov 12 22:35:58.317421 kernel: 8regs : 19750 MB/sec Nov 12 22:35:58.318076 kernel: 32regs : 19646 MB/sec Nov 12 22:35:58.319269 kernel: arm64_neon : 26883 MB/sec Nov 12 22:35:58.319293 kernel: xor: using function: arm64_neon (26883 MB/sec) Nov 12 22:35:58.369988 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 22:35:58.381001 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Nov 12 22:35:58.396131 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:35:58.407641 systemd-udevd[460]: Using default interface naming scheme 'v255'. Nov 12 22:35:58.410860 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:35:58.422163 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 22:35:58.434612 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Nov 12 22:35:58.459407 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 22:35:58.471105 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 22:35:58.509697 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:35:58.519104 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 22:35:58.533212 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 22:35:58.534544 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 22:35:58.536352 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:35:58.538538 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 22:35:58.551001 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 12 22:35:58.565813 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 12 22:35:58.566026 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 12 22:35:58.566038 kernel: GPT:9289727 != 19775487 Nov 12 22:35:58.566053 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 12 22:35:58.566064 kernel: GPT:9289727 != 19775487 Nov 12 22:35:58.566073 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 12 22:35:58.566082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:35:58.551092 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 22:35:58.560579 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 22:35:58.565433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 22:35:58.565547 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:35:58.567881 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:35:58.569082 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:35:58.569209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:35:58.571338 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:35:58.584201 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:35:58.589196 kernel: BTRFS: device fsid 337794e4-53df-462b-aefc-e93e6a958f34 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (513) Nov 12 22:35:58.589218 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (510) Nov 12 22:35:58.597596 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:35:58.602939 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 12 22:35:58.607438 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Nov 12 22:35:58.613743 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 12 22:35:58.614936 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 12 22:35:58.620410 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 22:35:58.630122 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 22:35:58.631860 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:35:58.636464 disk-uuid[551]: Primary Header is updated. Nov 12 22:35:58.636464 disk-uuid[551]: Secondary Entries is updated. Nov 12 22:35:58.636464 disk-uuid[551]: Secondary Header is updated. Nov 12 22:35:58.639582 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:35:58.651556 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:35:59.654072 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:35:59.654279 disk-uuid[552]: The operation has completed successfully. Nov 12 22:35:59.672650 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 22:35:59.672740 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 22:35:59.697148 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 22:35:59.700337 sh[573]: Success Nov 12 22:35:59.729234 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Nov 12 22:35:59.770774 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 22:35:59.782310 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 22:35:59.784581 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 22:35:59.795130 kernel: BTRFS info (device dm-0): first mount of filesystem 337794e4-53df-462b-aefc-e93e6a958f34 Nov 12 22:35:59.795173 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 12 22:35:59.795184 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 22:35:59.796614 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 22:35:59.796650 kernel: BTRFS info (device dm-0): using free space tree Nov 12 22:35:59.800543 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 22:35:59.802450 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 22:35:59.814119 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 22:35:59.816717 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 22:35:59.826295 kernel: BTRFS info (device vda6): first mount of filesystem e7e17182-4510-4c0b-82ae-ebdf6a7625d9 Nov 12 22:35:59.826343 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 12 22:35:59.826362 kernel: BTRFS info (device vda6): using free space tree Nov 12 22:35:59.830091 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 22:35:59.837014 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 12 22:35:59.838839 kernel: BTRFS info (device vda6): last unmount of filesystem e7e17182-4510-4c0b-82ae-ebdf6a7625d9 Nov 12 22:35:59.844407 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Nov 12 22:35:59.851153 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 22:35:59.913616 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 22:35:59.928148 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 22:35:59.945223 ignition[668]: Ignition 2.20.0 Nov 12 22:35:59.945234 ignition[668]: Stage: fetch-offline Nov 12 22:35:59.945272 ignition[668]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:35:59.945281 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:35:59.945468 ignition[668]: parsed url from cmdline: "" Nov 12 22:35:59.945472 ignition[668]: no config URL provided Nov 12 22:35:59.945477 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 22:35:59.945485 ignition[668]: no config at "/usr/lib/ignition/user.ign" Nov 12 22:35:59.951226 systemd-networkd[766]: lo: Link UP Nov 12 22:35:59.945512 ignition[668]: op(1): [started] loading QEMU firmware config module Nov 12 22:35:59.951230 systemd-networkd[766]: lo: Gained carrier Nov 12 22:35:59.945518 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 12 22:35:59.952277 systemd-networkd[766]: Enumeration completed Nov 12 22:35:59.952978 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:35:59.956079 ignition[668]: op(1): [finished] loading QEMU firmware config module Nov 12 22:35:59.952982 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:35:59.954370 systemd-networkd[766]: eth0: Link UP Nov 12 22:35:59.954374 systemd-networkd[766]: eth0: Gained carrier Nov 12 22:35:59.954380 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:35:59.955009 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 22:35:59.956218 systemd[1]: Reached target network.target - Network. Nov 12 22:35:59.983014 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 22:36:00.006944 ignition[668]: parsing config with SHA512: cf1304a6ed379f107077f975c2ef5f60385b53439c892809b49aebafaaa91e32b6dfe3eda5f63611d570c60d6d7bdc0da71ab72241118bed5c0b88d0013f88cd Nov 12 22:36:00.013987 unknown[668]: fetched base config from "system" Nov 12 22:36:00.013997 unknown[668]: fetched user config from "qemu" Nov 12 22:36:00.014495 ignition[668]: fetch-offline: fetch-offline passed Nov 12 22:36:00.016452 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 22:36:00.014576 ignition[668]: Ignition finished successfully Nov 12 22:36:00.017801 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 12 22:36:00.023133 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 12 22:36:00.033300 ignition[773]: Ignition 2.20.0 Nov 12 22:36:00.033310 ignition[773]: Stage: kargs Nov 12 22:36:00.033475 ignition[773]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:36:00.033484 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:36:00.034496 ignition[773]: kargs: kargs passed Nov 12 22:36:00.034541 ignition[773]: Ignition finished successfully Nov 12 22:36:00.037019 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 22:36:00.048093 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 12 22:36:00.057483 ignition[782]: Ignition 2.20.0 Nov 12 22:36:00.057495 ignition[782]: Stage: disks Nov 12 22:36:00.057656 ignition[782]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:36:00.060343 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 22:36:00.057666 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:36:00.061490 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 22:36:00.058612 ignition[782]: disks: disks passed Nov 12 22:36:00.063158 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 22:36:00.058660 ignition[782]: Ignition finished successfully Nov 12 22:36:00.065159 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 22:36:00.067022 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 22:36:00.068480 systemd[1]: Reached target basic.target - Basic System. Nov 12 22:36:00.084117 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 22:36:00.093740 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 12 22:36:00.097697 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 22:36:00.106084 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 22:36:00.147994 kernel: EXT4-fs (vda9): mounted filesystem be7e07bb-77fc-4aec-a4f6-d76dc4498784 r/w with ordered data mode. Quota mode: none. Nov 12 22:36:00.148080 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 22:36:00.149369 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 22:36:00.164055 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 22:36:00.165705 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 22:36:00.167185 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 12 22:36:00.167222 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 22:36:00.177289 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801) Nov 12 22:36:00.177312 kernel: BTRFS info (device vda6): first mount of filesystem e7e17182-4510-4c0b-82ae-ebdf6a7625d9 Nov 12 22:36:00.177330 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 12 22:36:00.177342 kernel: BTRFS info (device vda6): using free space tree Nov 12 22:36:00.167243 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 22:36:00.171569 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 22:36:00.173170 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 12 22:36:00.181996 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 22:36:00.183427 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 22:36:00.220507 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 22:36:00.224833 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Nov 12 22:36:00.228697 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 22:36:00.232576 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 22:36:00.301907 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 22:36:00.312057 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 22:36:00.313564 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 22:36:00.318993 kernel: BTRFS info (device vda6): last unmount of filesystem e7e17182-4510-4c0b-82ae-ebdf6a7625d9 Nov 12 22:36:00.335038 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 22:36:00.337527 ignition[914]: INFO : Ignition 2.20.0 Nov 12 22:36:00.337527 ignition[914]: INFO : Stage: mount Nov 12 22:36:00.338953 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:36:00.338953 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:36:00.338953 ignition[914]: INFO : mount: mount passed Nov 12 22:36:00.338953 ignition[914]: INFO : Ignition finished successfully Nov 12 22:36:00.340193 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 22:36:00.357093 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 22:36:00.793202 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 22:36:00.804216 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 22:36:00.810686 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927) Nov 12 22:36:00.810719 kernel: BTRFS info (device vda6): first mount of filesystem e7e17182-4510-4c0b-82ae-ebdf6a7625d9 Nov 12 22:36:00.810730 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 12 22:36:00.812242 kernel: BTRFS info (device vda6): using free space tree Nov 12 22:36:00.814986 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 22:36:00.815565 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 22:36:00.830735 ignition[944]: INFO : Ignition 2.20.0 Nov 12 22:36:00.830735 ignition[944]: INFO : Stage: files Nov 12 22:36:00.832285 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:36:00.832285 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:36:00.832285 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Nov 12 22:36:00.835591 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 22:36:00.835591 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 22:36:00.835591 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 22:36:00.835591 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 22:36:00.835591 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 22:36:00.834892 unknown[944]: wrote ssh authorized keys file for user: core Nov 12 22:36:00.842803 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 12 22:36:00.842803 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 12 22:36:00.842803 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Nov 12 22:36:00.842803 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Nov 12 22:36:00.886560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 12 22:36:00.971525 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Nov 12 22:36:00.971525 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 22:36:00.975150 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Nov 12 22:36:01.053095 systemd-networkd[766]: eth0: Gained IPv6LL Nov 12 22:36:01.332634 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Nov 12 22:36:01.405438 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Nov 12 22:36:01.407308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Nov 12 22:36:01.621923 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Nov 12 22:36:01.877192 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Nov 12 22:36:01.877192 ignition[944]: INFO : files: op(d): [started] processing unit "containerd.service" Nov 12 22:36:01.880899 ignition[944]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 12 22:36:01.880899 ignition[944]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 12 22:36:01.880899 ignition[944]: INFO : files: op(d): [finished] processing unit "containerd.service" Nov 12 22:36:01.880899 ignition[944]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Nov 12 22:36:01.880899 ignition[944]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:36:01.880899 ignition[944]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:36:01.880899 ignition[944]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Nov 12 22:36:01.880899 ignition[944]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Nov 12 22:36:01.880899 ignition[944]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 22:36:01.880899 ignition[944]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 22:36:01.880899 ignition[944]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Nov 12 22:36:01.880899 
ignition[944]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 22:36:01.904753 ignition[944]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 22:36:01.908673 ignition[944]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 22:36:01.910360 ignition[944]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 22:36:01.910360 ignition[944]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Nov 12 22:36:01.910360 ignition[944]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 22:36:01.910360 ignition[944]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:36:01.910360 ignition[944]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:36:01.910360 ignition[944]: INFO : files: files passed Nov 12 22:36:01.910360 ignition[944]: INFO : Ignition finished successfully Nov 12 22:36:01.910662 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 22:36:01.923167 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 22:36:01.925108 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 22:36:01.929264 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 22:36:01.929362 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 22:36:01.933783 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 22:36:01.937208 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:36:01.937208 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:36:01.940485 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:36:01.939956 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:36:01.941882 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 22:36:01.960230 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 22:36:01.980336 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 22:36:01.980480 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 22:36:01.982693 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 22:36:01.984711 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 22:36:01.986561 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 22:36:01.987355 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 22:36:02.003871 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:36:02.012151 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 22:36:02.019924 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Nov 12 22:36:02.021124 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:36:02.023165 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 22:36:02.024987 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 22:36:02.025108 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:36:02.027877 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 22:36:02.028989 systemd[1]: Stopped target basic.target - Basic System. Nov 12 22:36:02.030926 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 22:36:02.032866 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 22:36:02.034800 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 22:36:02.036844 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 22:36:02.038895 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 22:36:02.041065 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 22:36:02.042908 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 22:36:02.044961 systemd[1]: Stopped target swap.target - Swaps. Nov 12 22:36:02.046746 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 22:36:02.046877 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 22:36:02.049435 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:36:02.051343 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:36:02.053356 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 22:36:02.053457 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:36:02.055423 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 22:36:02.055531 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 22:36:02.058269 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 22:36:02.058394 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 22:36:02.060731 systemd[1]: Stopped target paths.target - Path Units. Nov 12 22:36:02.062373 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 22:36:02.062473 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:36:02.064226 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 22:36:02.065877 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 22:36:02.067659 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 22:36:02.067744 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 22:36:02.069251 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 22:36:02.069342 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 22:36:02.071038 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 22:36:02.071171 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:36:02.073559 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 22:36:02.073661 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 22:36:02.085124 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Nov 12 22:36:02.086747 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 22:36:02.087691 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 22:36:02.087810 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:36:02.089691 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 22:36:02.089789 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 22:36:02.095232 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 22:36:02.096303 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 22:36:02.098812 ignition[1000]: INFO : Ignition 2.20.0 Nov 12 22:36:02.098812 ignition[1000]: INFO : Stage: umount Nov 12 22:36:02.098812 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:36:02.098812 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:36:02.098812 ignition[1000]: INFO : umount: umount passed Nov 12 22:36:02.098812 ignition[1000]: INFO : Ignition finished successfully Nov 12 22:36:02.099927 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 22:36:02.100033 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 22:36:02.101689 systemd[1]: Stopped target network.target - Network. Nov 12 22:36:02.103344 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 22:36:02.103405 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 22:36:02.105669 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 22:36:02.105718 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 22:36:02.107413 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 22:36:02.107460 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 22:36:02.109242 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 22:36:02.109294 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 22:36:02.111368 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 22:36:02.113273 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 22:36:02.116181 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 22:36:02.117151 systemd-networkd[766]: eth0: DHCPv6 lease lost Nov 12 22:36:02.117808 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 22:36:02.117896 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 22:36:02.119306 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 22:36:02.119419 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 22:36:02.121847 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 22:36:02.121897 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:36:02.123201 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 22:36:02.123254 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 22:36:02.132075 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 22:36:02.133558 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 22:36:02.133628 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 22:36:02.135665 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
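The umount-stage messages above note that Ignition found no base configs under /usr/lib/ignition on the qemu platform; the user config is normally passed to the VM out of band via QEMU's firmware config interface. A sketch of how that is commonly done is below; the fw_cfg key name follows Flatcar's documented convention and, like the image filename and the omitted UEFI pflash options, is an assumption rather than something recorded in this log.

```sh
# Hypothetical QEMU invocation passing the Ignition config via fw_cfg
# (UEFI firmware/pflash options omitted; key name assumed from Flatcar docs).
qemu-system-aarch64 \
  -machine virt -cpu host -enable-kvm -m 2048 -smp 4 \
  -drive if=virtio,file=flatcar_production_qemu_image.img \
  -fw_cfg name=opt/org.flatcar-linux/config,file=provision.ign \
  -nographic
```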
Nov 12 22:36:02.138679 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 22:36:02.138776 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 22:36:02.142694 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 22:36:02.142750 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:36:02.144477 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 22:36:02.144525 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 22:36:02.146390 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 22:36:02.146438 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:36:02.160891 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 22:36:02.161095 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:36:02.163479 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 22:36:02.163525 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 22:36:02.165381 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 22:36:02.165412 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:36:02.167075 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 22:36:02.167126 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 22:36:02.169917 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 22:36:02.169976 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 22:36:02.172657 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 22:36:02.172702 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:36:02.176326 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 22:36:02.177467 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 22:36:02.177537 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:36:02.179524 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 22:36:02.179567 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:36:02.181787 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 22:36:02.181835 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:36:02.183843 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:36:02.183891 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:36:02.186146 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 22:36:02.186245 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 22:36:02.188202 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 22:36:02.188292 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 22:36:02.190951 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 22:36:02.193190 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 22:36:02.201508 systemd[1]: Switching root. 
Nov 12 22:36:02.233900 systemd-journald[239]: Journal stopped Nov 12 22:36:02.959864 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Nov 12 22:36:02.959920 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 22:36:02.959932 kernel: SELinux: policy capability open_perms=1 Nov 12 22:36:02.959942 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 22:36:02.959954 kernel: SELinux: policy capability always_check_network=0 Nov 12 22:36:02.960024 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 22:36:02.960038 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 22:36:02.960048 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 22:36:02.960057 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 22:36:02.960067 kernel: audit: type=1403 audit(1731450962.418:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 22:36:02.960078 systemd[1]: Successfully loaded SELinux policy in 32.775ms. Nov 12 22:36:02.960099 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.880ms. Nov 12 22:36:02.960110 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 22:36:02.960123 systemd[1]: Detected virtualization kvm. Nov 12 22:36:02.960134 systemd[1]: Detected architecture arm64. Nov 12 22:36:02.960144 systemd[1]: Detected first boot. Nov 12 22:36:02.960154 systemd[1]: Initializing machine ID from VM UUID. Nov 12 22:36:02.960167 zram_generator::config[1066]: No configuration found. Nov 12 22:36:02.960181 systemd[1]: Populated /etc with preset unit settings. Nov 12 22:36:02.960192 systemd[1]: Queued start job for default target multi-user.target. Nov 12 22:36:02.960203 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 22:36:02.960216 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 22:36:02.960226 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 22:36:02.960237 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 22:36:02.960247 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 22:36:02.960257 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 22:36:02.960268 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 22:36:02.960278 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 22:36:02.960289 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 22:36:02.960299 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:36:02.960312 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:36:02.960332 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 22:36:02.960342 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 22:36:02.960358 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Nov 12 22:36:02.960368 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 22:36:02.960379 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 12 22:36:02.960389 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:36:02.960399 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 22:36:02.960410 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:36:02.960422 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 22:36:02.960433 systemd[1]: Reached target slices.target - Slice Units. Nov 12 22:36:02.960443 systemd[1]: Reached target swap.target - Swaps. Nov 12 22:36:02.960454 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 22:36:02.960464 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 22:36:02.960474 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 22:36:02.960485 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 22:36:02.960495 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:36:02.960507 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 22:36:02.960517 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:36:02.960528 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 22:36:02.960538 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 22:36:02.960548 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 22:36:02.960573 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 22:36:02.960584 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 22:36:02.960594 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 22:36:02.960604 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 22:36:02.960616 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 22:36:02.960627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:36:02.960637 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 22:36:02.960648 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 22:36:02.960660 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:36:02.960670 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 22:36:02.960681 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:36:02.960691 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 22:36:02.960701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:36:02.960714 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 22:36:02.960725 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 12 22:36:02.960736 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Nov 12 22:36:02.960746 kernel: fuse: init (API version 7.39) Nov 12 22:36:02.960756 kernel: loop: module loaded Nov 12 22:36:02.960778 kernel: ACPI: bus type drm_connector registered Nov 12 22:36:02.960789 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 22:36:02.960800 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 22:36:02.960812 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 22:36:02.960823 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 22:36:02.960850 systemd-journald[1151]: Collecting audit messages is disabled. Nov 12 22:36:02.960872 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 22:36:02.960884 systemd-journald[1151]: Journal started Nov 12 22:36:02.960905 systemd-journald[1151]: Runtime Journal (/run/log/journal/9eabaf23f8a8434290a6e08a3ccc2f26) is 5.9M, max 47.3M, 41.4M free. Nov 12 22:36:02.964989 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 22:36:02.965942 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 22:36:02.967167 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 22:36:02.968427 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 22:36:02.969560 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 22:36:02.970859 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 22:36:02.972032 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 22:36:02.973351 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 22:36:02.974850 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:36:02.976359 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 22:36:02.976524 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 22:36:02.978013 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:36:02.978171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:36:02.979602 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 22:36:02.979759 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 22:36:02.981118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:36:02.981274 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:36:02.983091 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 22:36:02.983248 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 22:36:02.984555 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:36:02.984763 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:36:02.986486 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 22:36:02.988115 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 22:36:02.989561 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 22:36:03.001561 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 22:36:03.013054 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Nov 12 22:36:03.015531 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 22:36:03.016861 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 22:36:03.018572 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 22:36:03.021034 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 22:36:03.022325 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 22:36:03.024004 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 22:36:03.025419 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 22:36:03.029015 systemd-journald[1151]: Time spent on flushing to /var/log/journal/9eabaf23f8a8434290a6e08a3ccc2f26 is 14.862ms for 849 entries. Nov 12 22:36:03.029015 systemd-journald[1151]: System Journal (/var/log/journal/9eabaf23f8a8434290a6e08a3ccc2f26) is 8.0M, max 195.6M, 187.6M free. Nov 12 22:36:03.063168 systemd-journald[1151]: Received client request to flush runtime journal. Nov 12 22:36:03.029148 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:36:03.032696 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 22:36:03.035508 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:36:03.037207 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 22:36:03.040373 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 22:36:03.042091 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 22:36:03.044945 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 22:36:03.059188 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 22:36:03.062744 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Nov 12 22:36:03.062754 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Nov 12 22:36:03.063387 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:36:03.065550 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 22:36:03.067205 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:36:03.072597 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 22:36:03.075535 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 12 22:36:03.095307 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 22:36:03.107251 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 22:36:03.119218 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Nov 12 22:36:03.119538 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Nov 12 22:36:03.123366 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:36:03.449757 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
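The journal flush above moves the runtime journal from /run/log/journal into the persistent store under /var/log/journal once the root filesystem is writable. The same state can be inspected or capped manually with standard journalctl/journald options (the size value below is purely illustrative).

```sh
# Show how much disk space the journals currently use
journalctl --disk-usage

# Ask systemd-journald to flush /run/log/journal to /var/log/journal
# (this is what systemd-journal-flush.service does during boot)
sudo journalctl --flush

# Optional: cap the persistent journal via a drop-in; 200M is an example value
sudo mkdir -p /etc/systemd/journald.conf.d
cat <<'EOF' | sudo tee /etc/systemd/journald.conf.d/size.conf
[Journal]
SystemMaxUse=200M
EOF
sudo systemctl restart systemd-journald
```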
Nov 12 22:36:03.463124 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:36:03.481374 systemd-udevd[1224]: Using default interface naming scheme 'v255'. Nov 12 22:36:03.494922 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:36:03.503099 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 22:36:03.518196 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 22:36:03.532012 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1236) Nov 12 22:36:03.535011 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1236) Nov 12 22:36:03.537857 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Nov 12 22:36:03.553012 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1228) Nov 12 22:36:03.574170 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 22:36:03.585528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 22:36:03.627172 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:36:03.635088 systemd-networkd[1230]: lo: Link UP Nov 12 22:36:03.635095 systemd-networkd[1230]: lo: Gained carrier Nov 12 22:36:03.636562 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 22:36:03.637671 systemd-networkd[1230]: Enumeration completed Nov 12 22:36:03.638122 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:36:03.638130 systemd-networkd[1230]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:36:03.638351 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 22:36:03.638778 systemd-networkd[1230]: eth0: Link UP Nov 12 22:36:03.638786 systemd-networkd[1230]: eth0: Gained carrier Nov 12 22:36:03.638798 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:36:03.641114 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 22:36:03.643578 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 22:36:03.659706 lvm[1261]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:36:03.660078 systemd-networkd[1230]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 22:36:03.668741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:36:03.698415 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 22:36:03.699923 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:36:03.711104 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 22:36:03.714632 lvm[1270]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:36:03.752523 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 22:36:03.754040 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
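In the networkd lines above, eth0 is matched by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network and picks up 10.0.0.92/16 via DHCP. A minimal .network unit of the same general shape can be dropped into /etc/systemd/network to pin per-interface settings; the file name and contents below are illustrative, not copied from the image.

```sh
# Hypothetical override: configure eth0 explicitly instead of relying on the
# catch-all zz-default.network shipped in /usr/lib/systemd/network/.
cat <<'EOF' | sudo tee /etc/systemd/network/10-eth0.network
[Match]
Name=eth0

[Network]
DHCP=yes
EOF
sudo systemctl restart systemd-networkd

# Should show the interface state and the DHCPv4 lease (10.0.0.92/16 in this boot)
networkctl status eth0
```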
Nov 12 22:36:03.755371 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 22:36:03.755404 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 22:36:03.756445 systemd[1]: Reached target machines.target - Containers. Nov 12 22:36:03.758516 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 22:36:03.767103 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 22:36:03.769430 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 22:36:03.770538 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:36:03.771495 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 22:36:03.773778 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 22:36:03.780044 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 22:36:03.782520 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 22:36:03.790655 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 22:36:03.790978 kernel: loop0: detected capacity change from 0 to 113536 Nov 12 22:36:03.797936 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 22:36:03.799109 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 22:36:03.803988 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 22:36:03.828991 kernel: loop1: detected capacity change from 0 to 194512 Nov 12 22:36:03.884011 kernel: loop2: detected capacity change from 0 to 116808 Nov 12 22:36:03.934984 kernel: loop3: detected capacity change from 0 to 113536 Nov 12 22:36:03.946011 kernel: loop4: detected capacity change from 0 to 194512 Nov 12 22:36:03.957989 kernel: loop5: detected capacity change from 0 to 116808 Nov 12 22:36:03.961248 (sd-merge)[1290]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 22:36:03.961681 (sd-merge)[1290]: Merged extensions into '/usr'. Nov 12 22:36:03.965810 systemd[1]: Reloading requested from client PID 1278 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 22:36:03.965829 systemd[1]: Reloading... Nov 12 22:36:04.022000 zram_generator::config[1321]: No configuration found. Nov 12 22:36:04.041395 ldconfig[1275]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 22:36:04.113743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:36:04.156624 systemd[1]: Reloading finished in 190 ms. Nov 12 22:36:04.170164 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 22:36:04.171597 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 22:36:04.182398 systemd[1]: Starting ensure-sysext.service... Nov 12 22:36:04.184389 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
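The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, the kubernetes one coming from the /etc/extensions/kubernetes.raw symlink written by Ignition earlier. The overlay can be inspected and re-applied with the stock tooling; these are standard systemd-sysext verbs, nothing host-specific.

```sh
# List merged extension images and the hierarchies they contribute to
systemd-sysext status

# Re-scan /etc/extensions and /var/lib/extensions and re-merge the overlay
sudo systemd-sysext refresh

# Drop the overlay and return /usr to the base image content
sudo systemd-sysext unmerge
```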
Nov 12 22:36:04.187649 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Nov 12 22:36:04.187666 systemd[1]: Reloading... Nov 12 22:36:04.204134 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 22:36:04.204426 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 22:36:04.205115 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 22:36:04.205358 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Nov 12 22:36:04.205411 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Nov 12 22:36:04.207926 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 22:36:04.207938 systemd-tmpfiles[1361]: Skipping /boot Nov 12 22:36:04.221176 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 22:36:04.221191 systemd-tmpfiles[1361]: Skipping /boot Nov 12 22:36:04.224092 zram_generator::config[1387]: No configuration found. Nov 12 22:36:04.324403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:36:04.367671 systemd[1]: Reloading finished in 179 ms. Nov 12 22:36:04.387997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:36:04.400786 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 12 22:36:04.403443 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 22:36:04.405974 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 22:36:04.412188 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 22:36:04.415746 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 22:36:04.425806 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:36:04.430292 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:36:04.435383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:36:04.454384 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:36:04.457434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:36:04.458429 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 22:36:04.460414 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 22:36:04.462266 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:36:04.462449 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:36:04.464393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:36:04.464552 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:36:04.466213 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:36:04.466433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
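The "Duplicate line for path ..." warnings above come from two tmpfiles.d fragments declaring the same path; systemd-tmpfiles keeps the first declaration and ignores the rest. Entries are single lines of the form "type path mode user group age argument", as in the illustrative drop-in below (paths and values are hypothetical).

```sh
# Hypothetical drop-in showing the tmpfiles.d(5) line format; declaring a path
# already covered by another fragment would trigger the same "Duplicate line" note.
cat <<'EOF' | sudo tee /etc/tmpfiles.d/example.conf
# Type Path            Mode User Group Age Argument
d      /var/cache/demo 0755 root root  10d -
L      /run/demo-link  -    -    -     -   /var/cache/demo
EOF

# Apply just this fragment
sudo systemd-tmpfiles --create /etc/tmpfiles.d/example.conf
```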
Nov 12 22:36:04.476142 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:36:04.508355 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:36:04.511984 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:36:04.514827 augenrules[1473]: No rules Nov 12 22:36:04.516287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:36:04.518204 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:36:04.522422 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 22:36:04.523668 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 22:36:04.525728 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:36:04.526129 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 12 22:36:04.528269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:36:04.528449 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:36:04.529718 systemd-resolved[1434]: Positive Trust Anchors: Nov 12 22:36:04.529789 systemd-resolved[1434]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 22:36:04.529820 systemd-resolved[1434]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 22:36:04.530695 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:36:04.530938 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:36:04.532792 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:36:04.533121 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:36:04.535764 systemd-resolved[1434]: Defaulting to hostname 'linux'. Nov 12 22:36:04.538286 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 22:36:04.540200 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 22:36:04.541904 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 22:36:04.547555 systemd[1]: Reached target network.target - Network. Nov 12 22:36:04.548875 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:36:04.560262 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 12 22:36:04.561277 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:36:04.562688 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:36:04.564734 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
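In the resolved lines above, systemd-resolved loads the built-in DNSSEC root trust anchor and the usual negative trust anchors, then falls back to the hostname "linux" because no system hostname has been configured yet. Its runtime state can be checked with resolvectl (standard commands, nothing specific to this host).

```sh
# Global and per-link DNS configuration, current servers, DNSSEC setting
resolvectl status

# Resolve a name through systemd-resolved and show which server answered
resolvectl query flatcar.org
```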
Nov 12 22:36:04.567201 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:36:04.574554 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:36:04.576947 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:36:04.577202 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 22:36:04.578421 augenrules[1492]: /sbin/augenrules: No change Nov 12 22:36:04.578656 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:36:04.578884 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:36:04.580861 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 22:36:04.581166 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 22:36:04.583042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:36:04.583281 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:36:04.584908 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:36:04.585197 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:36:04.586616 augenrules[1517]: No rules Nov 12 22:36:04.587284 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:36:04.587578 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 12 22:36:04.590450 systemd[1]: Finished ensure-sysext.service. Nov 12 22:36:04.595563 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 22:36:04.595637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 22:36:04.611220 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 22:36:04.659003 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 22:36:04.659727 systemd-timesyncd[1531]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 22:36:04.659774 systemd-timesyncd[1531]: Initial clock synchronization to Tue 2024-11-12 22:36:04.427857 UTC. Nov 12 22:36:04.661179 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 22:36:04.662471 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 22:36:04.663840 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 22:36:04.665265 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 22:36:04.666704 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 22:36:04.666819 systemd[1]: Reached target paths.target - Path Units. Nov 12 22:36:04.667870 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 22:36:04.669259 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 22:36:04.670588 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 22:36:04.671981 systemd[1]: Reached target timers.target - Timer Units. 
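systemd-timesyncd above synchronizes against 10.0.0.1:123, most likely an NTP server learned from the DHCP lease in this boot. The server list can be overridden with a timesyncd drop-in, and the sync state inspected with timedatectl; the pool name below is only an example value.

```sh
# Hypothetical override of the NTP servers used by systemd-timesyncd
sudo mkdir -p /etc/systemd/timesyncd.conf.d
cat <<'EOF' | sudo tee /etc/systemd/timesyncd.conf.d/ntp.conf
[Time]
NTP=pool.ntp.org
EOF
sudo systemctl restart systemd-timesyncd

# Show the server in use, stratum, offset, and polling interval
timedatectl timesync-status
```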
Nov 12 22:36:04.673705 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 22:36:04.676798 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 22:36:04.679299 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 22:36:04.682956 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 22:36:04.684098 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 22:36:04.685102 systemd[1]: Reached target basic.target - Basic System. Nov 12 22:36:04.686326 systemd[1]: System is tainted: cgroupsv1 Nov 12 22:36:04.686390 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 22:36:04.686409 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 22:36:04.688221 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 22:36:04.690553 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 22:36:04.693082 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 22:36:04.697053 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 22:36:04.698175 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 22:36:04.700063 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 22:36:04.703993 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 22:36:04.711513 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 22:36:04.715277 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 22:36:04.724493 jq[1537]: false Nov 12 22:36:04.722121 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 22:36:04.732146 extend-filesystems[1539]: Found loop3 Nov 12 22:36:04.732146 extend-filesystems[1539]: Found loop4 Nov 12 22:36:04.732146 extend-filesystems[1539]: Found loop5 Nov 12 22:36:04.732146 extend-filesystems[1539]: Found vda Nov 12 22:36:04.732146 extend-filesystems[1539]: Found vda1 Nov 12 22:36:04.741463 extend-filesystems[1539]: Found vda2 Nov 12 22:36:04.741463 extend-filesystems[1539]: Found vda3 Nov 12 22:36:04.741463 extend-filesystems[1539]: Found usr Nov 12 22:36:04.741463 extend-filesystems[1539]: Found vda4 Nov 12 22:36:04.741463 extend-filesystems[1539]: Found vda6 Nov 12 22:36:04.741463 extend-filesystems[1539]: Found vda7 Nov 12 22:36:04.741463 extend-filesystems[1539]: Found vda9 Nov 12 22:36:04.741463 extend-filesystems[1539]: Checking size of /dev/vda9 Nov 12 22:36:04.738941 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 22:36:04.743892 dbus-daemon[1536]: [system] SELinux support is enabled Nov 12 22:36:04.747129 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 22:36:04.752267 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 22:36:04.755914 extend-filesystems[1539]: Resized partition /dev/vda9 Nov 12 22:36:04.758854 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
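The "System is tainted: cgroupsv1" note above records that this boot runs the legacy cgroup v1 hierarchy, which lines up with the 10-use-cgroupfs.conf containerd drop-in written during provisioning. A quick, generic way to check which hierarchy a machine is on:

```sh
# Prints "cgroup2fs" on a unified (v2) host; a tmpfs with per-controller
# sub-mounts indicates the legacy/hybrid v1 layout seen in this boot
stat -fc %T /sys/fs/cgroup

# List the cgroup mounts themselves
mount | grep cgroup
```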
Nov 12 22:36:04.766232 systemd-networkd[1230]: eth0: Gained IPv6LL Nov 12 22:36:04.769975 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1231) Nov 12 22:36:04.767339 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 22:36:04.767627 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 22:36:04.767881 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 22:36:04.768142 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 22:36:04.771181 extend-filesystems[1564]: resize2fs 1.47.1 (20-May-2024) Nov 12 22:36:04.773241 jq[1563]: true Nov 12 22:36:04.777120 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 22:36:04.779606 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 22:36:04.785585 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 22:36:04.785835 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 22:36:04.800993 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 22:36:04.803873 jq[1571]: true Nov 12 22:36:04.818217 (ntainerd)[1573]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 22:36:04.829613 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 22:36:04.840691 systemd-logind[1547]: Watching system buttons on /dev/input/event0 (Power Button) Nov 12 22:36:04.842232 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 22:36:04.845222 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 22:36:04.845222 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 22:36:04.845222 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 22:36:04.863860 update_engine[1559]: I20241112 22:36:04.843647 1559 main.cc:92] Flatcar Update Engine starting Nov 12 22:36:04.863860 update_engine[1559]: I20241112 22:36:04.852154 1559 update_check_scheduler.cc:74] Next update check in 6m18s Nov 12 22:36:04.842510 systemd-logind[1547]: New seat seat0. Nov 12 22:36:04.864279 extend-filesystems[1539]: Resized filesystem in /dev/vda9 Nov 12 22:36:04.846595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:36:04.870074 tar[1567]: linux-arm64/helm Nov 12 22:36:04.850581 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 22:36:04.852544 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 22:36:04.852571 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 22:36:04.854205 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 22:36:04.854223 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 22:36:04.857445 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 22:36:04.862659 systemd[1]: extend-filesystems.service: Deactivated successfully. 
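The extend-filesystems step above grows the root ext4 filesystem on /dev/vda9 from 553472 to 1864699 blocks while it is mounted on /, which is why resize2fs reports that on-line resizing is required. Done by hand, the equivalent is shown below; the device name is taken from this log and would differ on other machines.

```sh
# Confirm the partition is larger than the filesystem currently on it
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/vda

# Grow the mounted ext4 filesystem to the full partition size (online resize)
sudo resize2fs /dev/vda9
```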
Nov 12 22:36:04.862881 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 22:36:04.866339 systemd[1]: Started update-engine.service - Update Engine. Nov 12 22:36:04.870187 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 22:36:04.874300 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 22:36:04.890586 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 22:36:04.890852 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 22:36:04.914559 bash[1605]: Updated "/home/core/.ssh/authorized_keys" Nov 12 22:36:04.914834 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 22:36:04.916696 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 22:36:04.920617 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 22:36:04.921246 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 22:36:04.986849 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 22:36:05.106999 containerd[1573]: time="2024-11-12T22:36:05.106097138Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 12 22:36:05.153527 containerd[1573]: time="2024-11-12T22:36:05.153465825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:36:05.154953 containerd[1573]: time="2024-11-12T22:36:05.154866395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:36:05.154953 containerd[1573]: time="2024-11-12T22:36:05.154901196Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 22:36:05.154953 containerd[1573]: time="2024-11-12T22:36:05.154917898Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 22:36:05.156093 containerd[1573]: time="2024-11-12T22:36:05.155080681Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 22:36:05.156093 containerd[1573]: time="2024-11-12T22:36:05.155102898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 22:36:05.156093 containerd[1573]: time="2024-11-12T22:36:05.155155877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:36:05.156093 containerd[1573]: time="2024-11-12T22:36:05.155166869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:36:05.156093 containerd[1573]: time="2024-11-12T22:36:05.155356181Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:36:05.156093 containerd[1573]: time="2024-11-12T22:36:05.155369814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 22:36:05.156093 containerd[1573]: time="2024-11-12T22:36:05.155382787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:36:05.156093 containerd[1573]: time="2024-11-12T22:36:05.155392419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 22:36:05.156093 containerd[1573]: time="2024-11-12T22:36:05.155456468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:36:05.156093 containerd[1573]: time="2024-11-12T22:36:05.155628495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:36:05.156093 containerd[1573]: time="2024-11-12T22:36:05.155756981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:36:05.156273 containerd[1573]: time="2024-11-12T22:36:05.155770809Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 22:36:05.156273 containerd[1573]: time="2024-11-12T22:36:05.155841500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 22:36:05.156273 containerd[1573]: time="2024-11-12T22:36:05.155879253Z" level=info msg="metadata content store policy set" policy=shared Nov 12 22:36:05.159594 containerd[1573]: time="2024-11-12T22:36:05.158896970Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 22:36:05.159594 containerd[1573]: time="2024-11-12T22:36:05.158947930Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 22:36:05.159594 containerd[1573]: time="2024-11-12T22:36:05.159036953Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 22:36:05.159594 containerd[1573]: time="2024-11-12T22:36:05.159056335Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 22:36:05.159594 containerd[1573]: time="2024-11-12T22:36:05.159070162Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 22:36:05.159594 containerd[1573]: time="2024-11-12T22:36:05.159198105Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 22:36:05.159594 containerd[1573]: time="2024-11-12T22:36:05.159497686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159628774Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159644893Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159658604Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159672315Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159683424Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159695892Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159710224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159724984Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159737296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159749764Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159761844Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159780992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159797811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.159803 containerd[1573]: time="2024-11-12T22:36:05.159809657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.159822475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.159833933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.159845935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.159858209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.159870288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.159882290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.159896350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.159906954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.159918801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.159931191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.159955777Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.159985063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.159997803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.160041 containerd[1573]: time="2024-11-12T22:36:05.160008407Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 22:36:05.160258 containerd[1573]: time="2024-11-12T22:36:05.160177793Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 22:36:05.160258 containerd[1573]: time="2024-11-12T22:36:05.160193446Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 22:36:05.160258 containerd[1573]: time="2024-11-12T22:36:05.160202612Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 22:36:05.160258 containerd[1573]: time="2024-11-12T22:36:05.160214653Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 22:36:05.160258 containerd[1573]: time="2024-11-12T22:36:05.160223586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 22:36:05.160258 containerd[1573]: time="2024-11-12T22:36:05.160234656Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 22:36:05.160258 containerd[1573]: time="2024-11-12T22:36:05.160244327Z" level=info msg="NRI interface is disabled by configuration." Nov 12 22:36:05.160258 containerd[1573]: time="2024-11-12T22:36:05.160255125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 12 22:36:05.160904 containerd[1573]: time="2024-11-12T22:36:05.160572418Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 22:36:05.160904 containerd[1573]: time="2024-11-12T22:36:05.160622950Z" level=info msg="Connect containerd service" Nov 12 22:36:05.162979 containerd[1573]: time="2024-11-12T22:36:05.162602951Z" level=info msg="using legacy CRI server" Nov 12 22:36:05.162979 containerd[1573]: time="2024-11-12T22:36:05.162620079Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 22:36:05.162979 containerd[1573]: time="2024-11-12T22:36:05.162898026Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 22:36:05.163625 containerd[1573]: time="2024-11-12T22:36:05.163584581Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 
22:36:05.164178 containerd[1573]: time="2024-11-12T22:36:05.164141911Z" level=info msg="Start subscribing containerd event" Nov 12 22:36:05.164214 containerd[1573]: time="2024-11-12T22:36:05.164192870Z" level=info msg="Start recovering state" Nov 12 22:36:05.164278 containerd[1573]: time="2024-11-12T22:36:05.164262240Z" level=info msg="Start event monitor" Nov 12 22:36:05.164278 containerd[1573]: time="2024-11-12T22:36:05.164277350Z" level=info msg="Start snapshots syncer" Nov 12 22:36:05.164319 containerd[1573]: time="2024-11-12T22:36:05.164286788Z" level=info msg="Start cni network conf syncer for default" Nov 12 22:36:05.164319 containerd[1573]: time="2024-11-12T22:36:05.164298207Z" level=info msg="Start streaming server" Nov 12 22:36:05.164974 containerd[1573]: time="2024-11-12T22:36:05.164941726Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 22:36:05.165035 containerd[1573]: time="2024-11-12T22:36:05.165015097Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 22:36:05.165089 containerd[1573]: time="2024-11-12T22:36:05.165074795Z" level=info msg="containerd successfully booted in 0.059899s" Nov 12 22:36:05.165192 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 22:36:05.271947 tar[1567]: linux-arm64/LICENSE Nov 12 22:36:05.271947 tar[1567]: linux-arm64/README.md Nov 12 22:36:05.288477 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 22:36:05.408880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:36:05.412354 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:36:05.445967 sshd_keygen[1560]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 22:36:05.464051 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 22:36:05.472207 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 22:36:05.478394 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 22:36:05.478604 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 22:36:05.481371 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 22:36:05.491679 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 22:36:05.494388 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 22:36:05.496620 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 12 22:36:05.497863 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 22:36:05.498868 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 22:36:05.500297 systemd[1]: Startup finished in 5.268s (kernel) + 3.114s (userspace) = 8.383s. Nov 12 22:36:05.868719 kubelet[1654]: E1112 22:36:05.866843 1654 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:36:05.871200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:36:05.871401 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 22:36:10.249983 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Nov 12 22:36:10.263177 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:36078.service - OpenSSH per-connection server daemon (10.0.0.1:36078). Nov 12 22:36:10.322462 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 36078 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:36:10.323977 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:36:10.332085 systemd-logind[1547]: New session 1 of user core. Nov 12 22:36:10.333014 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 22:36:10.345166 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 22:36:10.354101 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 22:36:10.356127 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 22:36:10.362046 (systemd)[1694]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 22:36:10.430900 systemd[1694]: Queued start job for default target default.target. Nov 12 22:36:10.431291 systemd[1694]: Created slice app.slice - User Application Slice. Nov 12 22:36:10.431315 systemd[1694]: Reached target paths.target - Paths. Nov 12 22:36:10.431326 systemd[1694]: Reached target timers.target - Timers. Nov 12 22:36:10.441035 systemd[1694]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 22:36:10.446402 systemd[1694]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 22:36:10.446461 systemd[1694]: Reached target sockets.target - Sockets. Nov 12 22:36:10.446473 systemd[1694]: Reached target basic.target - Basic System. Nov 12 22:36:10.446507 systemd[1694]: Reached target default.target - Main User Target. Nov 12 22:36:10.446529 systemd[1694]: Startup finished in 79ms. Nov 12 22:36:10.446832 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 22:36:10.448131 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 22:36:10.509256 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:36086.service - OpenSSH per-connection server daemon (10.0.0.1:36086). Nov 12 22:36:10.545568 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 36086 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:36:10.546664 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:36:10.550684 systemd-logind[1547]: New session 2 of user core. Nov 12 22:36:10.558182 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 22:36:10.606943 sshd[1709]: Connection closed by 10.0.0.1 port 36086 Nov 12 22:36:10.607349 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Nov 12 22:36:10.619158 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:36088.service - OpenSSH per-connection server daemon (10.0.0.1:36088). Nov 12 22:36:10.619554 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:36086.service: Deactivated successfully. Nov 12 22:36:10.620821 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 22:36:10.621945 systemd-logind[1547]: Session 2 logged out. Waiting for processes to exit. Nov 12 22:36:10.622825 systemd-logind[1547]: Removed session 2. 
Nov 12 22:36:10.654950 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 36088 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:36:10.656007 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:36:10.659800 systemd-logind[1547]: New session 3 of user core. Nov 12 22:36:10.671182 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 22:36:10.718008 sshd[1717]: Connection closed by 10.0.0.1 port 36088 Nov 12 22:36:10.718356 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Nov 12 22:36:10.727159 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:36090.service - OpenSSH per-connection server daemon (10.0.0.1:36090). Nov 12 22:36:10.727495 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:36088.service: Deactivated successfully. Nov 12 22:36:10.729232 systemd-logind[1547]: Session 3 logged out. Waiting for processes to exit. Nov 12 22:36:10.729633 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 22:36:10.730886 systemd-logind[1547]: Removed session 3. Nov 12 22:36:10.762867 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 36090 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:36:10.763863 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:36:10.767434 systemd-logind[1547]: New session 4 of user core. Nov 12 22:36:10.778230 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 22:36:10.827389 sshd[1725]: Connection closed by 10.0.0.1 port 36090 Nov 12 22:36:10.827761 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Nov 12 22:36:10.835165 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:36092.service - OpenSSH per-connection server daemon (10.0.0.1:36092). Nov 12 22:36:10.835504 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:36090.service: Deactivated successfully. Nov 12 22:36:10.837575 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 22:36:10.837996 systemd-logind[1547]: Session 4 logged out. Waiting for processes to exit. Nov 12 22:36:10.838820 systemd-logind[1547]: Removed session 4. Nov 12 22:36:10.870672 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 36092 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:36:10.871758 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:36:10.875013 systemd-logind[1547]: New session 5 of user core. Nov 12 22:36:10.885175 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 22:36:10.950891 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 22:36:10.951447 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:36:10.966609 sudo[1734]: pam_unix(sudo:session): session closed for user root Nov 12 22:36:10.967913 sshd[1733]: Connection closed by 10.0.0.1 port 36092 Nov 12 22:36:10.968231 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Nov 12 22:36:10.979163 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:36098.service - OpenSSH per-connection server daemon (10.0.0.1:36098). Nov 12 22:36:10.979490 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:36092.service: Deactivated successfully. Nov 12 22:36:10.981598 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 22:36:10.982050 systemd-logind[1547]: Session 5 logged out. Waiting for processes to exit. 
Nov 12 22:36:10.982932 systemd-logind[1547]: Removed session 5. Nov 12 22:36:11.015070 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 36098 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:36:11.016456 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:36:11.020507 systemd-logind[1547]: New session 6 of user core. Nov 12 22:36:11.028181 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 22:36:11.078361 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 22:36:11.078615 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:36:11.082107 sudo[1744]: pam_unix(sudo:session): session closed for user root Nov 12 22:36:11.086181 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 12 22:36:11.086418 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:36:11.103266 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 12 22:36:11.125389 augenrules[1766]: No rules Nov 12 22:36:11.126005 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:36:11.126244 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 12 22:36:11.127195 sudo[1743]: pam_unix(sudo:session): session closed for user root Nov 12 22:36:11.128713 sshd[1742]: Connection closed by 10.0.0.1 port 36098 Nov 12 22:36:11.128624 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Nov 12 22:36:11.136224 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:36110.service - OpenSSH per-connection server daemon (10.0.0.1:36110). Nov 12 22:36:11.136659 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:36098.service: Deactivated successfully. Nov 12 22:36:11.138701 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 22:36:11.139168 systemd-logind[1547]: Session 6 logged out. Waiting for processes to exit. Nov 12 22:36:11.140263 systemd-logind[1547]: Removed session 6. Nov 12 22:36:11.172421 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 36110 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:36:11.173460 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:36:11.177240 systemd-logind[1547]: New session 7 of user core. Nov 12 22:36:11.184262 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 22:36:11.233479 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 22:36:11.233757 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:36:11.534385 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 22:36:11.534387 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 22:36:11.765656 dockerd[1799]: time="2024-11-12T22:36:11.765598260Z" level=info msg="Starting up" Nov 12 22:36:11.999686 dockerd[1799]: time="2024-11-12T22:36:11.999567149Z" level=info msg="Loading containers: start." 
Nov 12 22:36:12.146986 kernel: Initializing XFRM netlink socket Nov 12 22:36:12.210441 systemd-networkd[1230]: docker0: Link UP Nov 12 22:36:12.252245 dockerd[1799]: time="2024-11-12T22:36:12.252122854Z" level=info msg="Loading containers: done." Nov 12 22:36:12.266846 dockerd[1799]: time="2024-11-12T22:36:12.266792386Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 22:36:12.266992 dockerd[1799]: time="2024-11-12T22:36:12.266881797Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Nov 12 22:36:12.267026 dockerd[1799]: time="2024-11-12T22:36:12.266996635Z" level=info msg="Daemon has completed initialization" Nov 12 22:36:12.293353 dockerd[1799]: time="2024-11-12T22:36:12.293299166Z" level=info msg="API listen on /run/docker.sock" Nov 12 22:36:12.293505 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 22:36:12.929590 containerd[1573]: time="2024-11-12T22:36:12.929332450Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 22:36:13.712089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823750973.mount: Deactivated successfully. Nov 12 22:36:15.198012 containerd[1573]: time="2024-11-12T22:36:15.197946387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:15.199093 containerd[1573]: time="2024-11-12T22:36:15.199054873Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=32201617" Nov 12 22:36:15.200210 containerd[1573]: time="2024-11-12T22:36:15.200096195Z" level=info msg="ImageCreate event name:\"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:15.203703 containerd[1573]: time="2024-11-12T22:36:15.203669827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:15.205449 containerd[1573]: time="2024-11-12T22:36:15.205415657Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"32198415\" in 2.276043359s" Nov 12 22:36:15.205484 containerd[1573]: time="2024-11-12T22:36:15.205452296Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\"" Nov 12 22:36:15.223915 containerd[1573]: time="2024-11-12T22:36:15.223879480Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 22:36:16.121613 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 22:36:16.135157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:36:16.231922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 22:36:16.236038 (kubelet)[2078]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:36:16.276774 kubelet[2078]: E1112 22:36:16.276721 2078 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:36:16.283268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:36:16.283458 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 22:36:16.589017 containerd[1573]: time="2024-11-12T22:36:16.588878824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:16.590016 containerd[1573]: time="2024-11-12T22:36:16.589945581Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=29381046" Nov 12 22:36:16.590806 containerd[1573]: time="2024-11-12T22:36:16.590771992Z" level=info msg="ImageCreate event name:\"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:16.593764 containerd[1573]: time="2024-11-12T22:36:16.593733389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:16.594885 containerd[1573]: time="2024-11-12T22:36:16.594844608Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"30783669\" in 1.370928401s" Nov 12 22:36:16.594885 containerd[1573]: time="2024-11-12T22:36:16.594883824Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\"" Nov 12 22:36:16.613203 containerd[1573]: time="2024-11-12T22:36:16.613155067Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 22:36:17.539824 containerd[1573]: time="2024-11-12T22:36:17.539767462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:17.540362 containerd[1573]: time="2024-11-12T22:36:17.540302442Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=15770290" Nov 12 22:36:17.541179 containerd[1573]: time="2024-11-12T22:36:17.541145016Z" level=info msg="ImageCreate event name:\"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:17.544799 containerd[1573]: time="2024-11-12T22:36:17.544759678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Nov 12 22:36:17.545575 containerd[1573]: time="2024-11-12T22:36:17.545482395Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"17172931\" in 932.292117ms" Nov 12 22:36:17.545575 containerd[1573]: time="2024-11-12T22:36:17.545513731Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\"" Nov 12 22:36:17.565124 containerd[1573]: time="2024-11-12T22:36:17.565084013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 22:36:18.487226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425080213.mount: Deactivated successfully. Nov 12 22:36:18.798561 containerd[1573]: time="2024-11-12T22:36:18.798428399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:18.799366 containerd[1573]: time="2024-11-12T22:36:18.799274775Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=25272231" Nov 12 22:36:18.799975 containerd[1573]: time="2024-11-12T22:36:18.799922570Z" level=info msg="ImageCreate event name:\"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:18.801795 containerd[1573]: time="2024-11-12T22:36:18.801756955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:18.802761 containerd[1573]: time="2024-11-12T22:36:18.802718380Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"25271248\" in 1.237597737s" Nov 12 22:36:18.802801 containerd[1573]: time="2024-11-12T22:36:18.802761479Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\"" Nov 12 22:36:18.820734 containerd[1573]: time="2024-11-12T22:36:18.820704506Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 22:36:19.507136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2619896612.mount: Deactivated successfully. 
Nov 12 22:36:20.101726 containerd[1573]: time="2024-11-12T22:36:20.101672415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:20.102124 containerd[1573]: time="2024-11-12T22:36:20.102073920Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Nov 12 22:36:20.103023 containerd[1573]: time="2024-11-12T22:36:20.102987773Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:20.105930 containerd[1573]: time="2024-11-12T22:36:20.105879224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:20.107203 containerd[1573]: time="2024-11-12T22:36:20.107166054Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.286427234s" Nov 12 22:36:20.107203 containerd[1573]: time="2024-11-12T22:36:20.107202670Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Nov 12 22:36:20.126148 containerd[1573]: time="2024-11-12T22:36:20.126113368Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 22:36:20.532559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858180482.mount: Deactivated successfully. 
Nov 12 22:36:20.537358 containerd[1573]: time="2024-11-12T22:36:20.537315057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:20.537902 containerd[1573]: time="2024-11-12T22:36:20.537847408Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Nov 12 22:36:20.538610 containerd[1573]: time="2024-11-12T22:36:20.538575191Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:20.541676 containerd[1573]: time="2024-11-12T22:36:20.541638369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:20.542482 containerd[1573]: time="2024-11-12T22:36:20.542451936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 416.301792ms" Nov 12 22:36:20.542522 containerd[1573]: time="2024-11-12T22:36:20.542484926Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Nov 12 22:36:20.559821 containerd[1573]: time="2024-11-12T22:36:20.559784348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 22:36:21.075254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount537795062.mount: Deactivated successfully. Nov 12 22:36:22.708369 containerd[1573]: time="2024-11-12T22:36:22.708309086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:22.708815 containerd[1573]: time="2024-11-12T22:36:22.708749005Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Nov 12 22:36:22.709849 containerd[1573]: time="2024-11-12T22:36:22.709813769Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:22.712940 containerd[1573]: time="2024-11-12T22:36:22.712899664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:22.714329 containerd[1573]: time="2024-11-12T22:36:22.714290688Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.154472067s" Nov 12 22:36:22.714367 containerd[1573]: time="2024-11-12T22:36:22.714328614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Nov 12 22:36:26.533722 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Nov 12 22:36:26.543123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:36:26.665194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:36:26.668289 (kubelet)[2310]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:36:26.706190 kubelet[2310]: E1112 22:36:26.706133 2310 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:36:26.708862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:36:26.709020 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 22:36:27.336269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:36:27.349157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:36:27.361984 systemd[1]: Reloading requested from client PID 2327 ('systemctl') (unit session-7.scope)... Nov 12 22:36:27.361997 systemd[1]: Reloading... Nov 12 22:36:27.416989 zram_generator::config[2366]: No configuration found. Nov 12 22:36:27.576632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:36:27.624775 systemd[1]: Reloading finished in 262 ms. Nov 12 22:36:27.663546 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:36:27.666809 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 22:36:27.667068 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:36:27.668928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:36:27.751892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:36:27.755747 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:36:27.806274 kubelet[2426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:36:27.806274 kubelet[2426]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 22:36:27.806274 kubelet[2426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 22:36:27.806635 kubelet[2426]: I1112 22:36:27.806344 2426 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 22:36:28.407868 kubelet[2426]: I1112 22:36:28.407790 2426 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 22:36:28.407868 kubelet[2426]: I1112 22:36:28.407822 2426 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 22:36:28.408122 kubelet[2426]: I1112 22:36:28.408065 2426 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 22:36:28.431129 kubelet[2426]: I1112 22:36:28.431095 2426 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:36:28.433049 kubelet[2426]: E1112 22:36:28.433029 2426 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:28.439251 kubelet[2426]: I1112 22:36:28.439227 2426 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 22:36:28.440261 kubelet[2426]: I1112 22:36:28.440230 2426 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 22:36:28.440445 kubelet[2426]: I1112 22:36:28.440422 2426 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 22:36:28.440527 kubelet[2426]: I1112 22:36:28.440447 2426 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 22:36:28.440527 kubelet[2426]: I1112 22:36:28.440457 2426 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 22:36:28.440573 kubelet[2426]: I1112 22:36:28.440559 2426 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:36:28.442597 kubelet[2426]: I1112 22:36:28.442574 2426 kubelet.go:396] "Attempting to sync node with API server" Nov 12 22:36:28.442647 kubelet[2426]: 
I1112 22:36:28.442602 2426 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 22:36:28.442647 kubelet[2426]: I1112 22:36:28.442624 2426 kubelet.go:312] "Adding apiserver pod source" Nov 12 22:36:28.442647 kubelet[2426]: I1112 22:36:28.442639 2426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 22:36:28.443219 kubelet[2426]: W1112 22:36:28.443148 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:28.443219 kubelet[2426]: E1112 22:36:28.443195 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:28.444272 kubelet[2426]: W1112 22:36:28.444212 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:28.444272 kubelet[2426]: E1112 22:36:28.444257 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:28.444466 kubelet[2426]: I1112 22:36:28.444293 2426 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 12 22:36:28.444796 kubelet[2426]: I1112 22:36:28.444784 2426 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 22:36:28.444930 kubelet[2426]: W1112 22:36:28.444919 2426 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 12 22:36:28.447897 kubelet[2426]: I1112 22:36:28.447839 2426 server.go:1256] "Started kubelet" Nov 12 22:36:28.448014 kubelet[2426]: I1112 22:36:28.447997 2426 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 22:36:28.448680 kubelet[2426]: I1112 22:36:28.448049 2426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 22:36:28.450956 kubelet[2426]: I1112 22:36:28.450936 2426 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 22:36:28.451396 kubelet[2426]: I1112 22:36:28.451350 2426 server.go:461] "Adding debug handlers to kubelet server" Nov 12 22:36:28.452111 kubelet[2426]: I1112 22:36:28.452086 2426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 22:36:28.453244 kubelet[2426]: I1112 22:36:28.453219 2426 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 22:36:28.456119 kubelet[2426]: I1112 22:36:28.454364 2426 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 22:36:28.456119 kubelet[2426]: I1112 22:36:28.454448 2426 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 22:36:28.456119 kubelet[2426]: E1112 22:36:28.454611 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms" Nov 12 22:36:28.456119 kubelet[2426]: W1112 22:36:28.454758 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:28.456119 kubelet[2426]: E1112 22:36:28.454800 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:28.459660 kubelet[2426]: I1112 22:36:28.459633 2426 factory.go:221] Registration of the containerd container factory successfully Nov 12 22:36:28.459660 kubelet[2426]: I1112 22:36:28.459650 2426 factory.go:221] Registration of the systemd container factory successfully Nov 12 22:36:28.459660 kubelet[2426]: E1112 22:36:28.459651 2426 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18075980186e4c97 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:36:28.447812759 +0000 UTC m=+0.688528791,LastTimestamp:2024-11-12 22:36:28.447812759 +0000 UTC m=+0.688528791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 22:36:28.459831 kubelet[2426]: I1112 22:36:28.459709 2426 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Nov 12 22:36:28.460949 kubelet[2426]: E1112 22:36:28.460925 2426 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 22:36:28.467634 kubelet[2426]: I1112 22:36:28.467478 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 22:36:28.468453 kubelet[2426]: I1112 22:36:28.468419 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 22:36:28.468453 kubelet[2426]: I1112 22:36:28.468447 2426 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:36:28.468526 kubelet[2426]: I1112 22:36:28.468464 2426 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 22:36:28.468526 kubelet[2426]: E1112 22:36:28.468520 2426 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:36:28.474279 kubelet[2426]: W1112 22:36:28.474244 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:28.474901 kubelet[2426]: E1112 22:36:28.474597 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:28.475814 kubelet[2426]: I1112 22:36:28.475794 2426 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 22:36:28.475814 kubelet[2426]: I1112 22:36:28.475811 2426 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 22:36:28.475893 kubelet[2426]: I1112 22:36:28.475827 2426 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:36:28.478052 kubelet[2426]: I1112 22:36:28.478032 2426 policy_none.go:49] "None policy: Start" Nov 12 22:36:28.478517 kubelet[2426]: I1112 22:36:28.478499 2426 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:36:28.478583 kubelet[2426]: I1112 22:36:28.478541 2426 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:36:28.484331 kubelet[2426]: I1112 22:36:28.484301 2426 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 22:36:28.484571 kubelet[2426]: I1112 22:36:28.484542 2426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 22:36:28.486342 kubelet[2426]: E1112 22:36:28.486302 2426 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 22:36:28.555572 kubelet[2426]: I1112 22:36:28.555539 2426 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:36:28.556073 kubelet[2426]: E1112 22:36:28.556037 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Nov 12 22:36:28.569187 kubelet[2426]: I1112 22:36:28.569155 2426 topology_manager.go:215] "Topology Admit Handler" podUID="1b7e51e6237d12e7cf0bb84e40c522c3" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 
22:36:28.570031 kubelet[2426]: I1112 22:36:28.570006 2426 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 22:36:28.571827 kubelet[2426]: I1112 22:36:28.571239 2426 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 22:36:28.655059 kubelet[2426]: E1112 22:36:28.655020 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms" Nov 12 22:36:28.755294 kubelet[2426]: I1112 22:36:28.755192 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 22:36:28.755294 kubelet[2426]: I1112 22:36:28.755227 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b7e51e6237d12e7cf0bb84e40c522c3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b7e51e6237d12e7cf0bb84e40c522c3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:36:28.755294 kubelet[2426]: I1112 22:36:28.755251 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b7e51e6237d12e7cf0bb84e40c522c3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1b7e51e6237d12e7cf0bb84e40c522c3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:36:28.755294 kubelet[2426]: I1112 22:36:28.755287 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:36:28.756107 kubelet[2426]: I1112 22:36:28.755883 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:36:28.756107 kubelet[2426]: I1112 22:36:28.755934 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b7e51e6237d12e7cf0bb84e40c522c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b7e51e6237d12e7cf0bb84e40c522c3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:36:28.756107 kubelet[2426]: I1112 22:36:28.755985 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:36:28.756107 kubelet[2426]: 
I1112 22:36:28.756040 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:36:28.756107 kubelet[2426]: I1112 22:36:28.756072 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:36:28.757320 kubelet[2426]: I1112 22:36:28.757272 2426 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:36:28.757580 kubelet[2426]: E1112 22:36:28.757565 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Nov 12 22:36:28.875032 kubelet[2426]: E1112 22:36:28.874991 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:28.875346 kubelet[2426]: E1112 22:36:28.875275 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:28.879015 kubelet[2426]: E1112 22:36:28.875703 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:28.880954 containerd[1573]: time="2024-11-12T22:36:28.880910895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}" Nov 12 22:36:28.881227 containerd[1573]: time="2024-11-12T22:36:28.880910696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1b7e51e6237d12e7cf0bb84e40c522c3,Namespace:kube-system,Attempt:0,}" Nov 12 22:36:28.881227 containerd[1573]: time="2024-11-12T22:36:28.880938259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}" Nov 12 22:36:29.055510 kubelet[2426]: E1112 22:36:29.055428 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" Nov 12 22:36:29.158753 kubelet[2426]: I1112 22:36:29.158707 2426 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:36:29.159036 kubelet[2426]: E1112 22:36:29.159016 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Nov 12 22:36:29.380901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983116989.mount: Deactivated successfully. 
Nov 12 22:36:29.384293 containerd[1573]: time="2024-11-12T22:36:29.384248152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:36:29.385354 containerd[1573]: time="2024-11-12T22:36:29.385307066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Nov 12 22:36:29.387386 containerd[1573]: time="2024-11-12T22:36:29.387325371Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:36:29.389302 containerd[1573]: time="2024-11-12T22:36:29.389272919Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:36:29.390401 containerd[1573]: time="2024-11-12T22:36:29.390342660Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 22:36:29.391129 containerd[1573]: time="2024-11-12T22:36:29.391085226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:36:29.392079 containerd[1573]: time="2024-11-12T22:36:29.391915809Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 510.7169ms" Nov 12 22:36:29.392830 containerd[1573]: time="2024-11-12T22:36:29.392416180Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 22:36:29.392830 containerd[1573]: time="2024-11-12T22:36:29.392785945Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:36:29.397573 containerd[1573]: time="2024-11-12T22:36:29.397530082Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 516.515285ms" Nov 12 22:36:29.398499 containerd[1573]: time="2024-11-12T22:36:29.398473012Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 517.364581ms" Nov 12 22:36:29.503237 kubelet[2426]: W1112 22:36:29.503155 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 
22:36:29.503237 kubelet[2426]: E1112 22:36:29.503243 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:29.518386 kubelet[2426]: W1112 22:36:29.518313 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:29.518386 kubelet[2426]: E1112 22:36:29.518367 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:29.526724 containerd[1573]: time="2024-11-12T22:36:29.526631874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:36:29.527005 containerd[1573]: time="2024-11-12T22:36:29.526730557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:36:29.527005 containerd[1573]: time="2024-11-12T22:36:29.526747178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:36:29.531118 containerd[1573]: time="2024-11-12T22:36:29.529513482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:36:29.531204 containerd[1573]: time="2024-11-12T22:36:29.528603633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:36:29.531204 containerd[1573]: time="2024-11-12T22:36:29.528737316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:36:29.531204 containerd[1573]: time="2024-11-12T22:36:29.528755295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:36:29.531204 containerd[1573]: time="2024-11-12T22:36:29.529140242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:36:29.531308 containerd[1573]: time="2024-11-12T22:36:29.528682980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:36:29.531308 containerd[1573]: time="2024-11-12T22:36:29.528733920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:36:29.531308 containerd[1573]: time="2024-11-12T22:36:29.528744268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:36:29.531308 containerd[1573]: time="2024-11-12T22:36:29.528822735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:36:29.554486 kubelet[2426]: W1112 22:36:29.554444 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:29.554747 kubelet[2426]: E1112 22:36:29.554713 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 12 22:36:29.576463 containerd[1573]: time="2024-11-12T22:36:29.576314566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0e0e333190e83f2eea155e98cd6cb371ab659b542752170e62ed73ea1f09fa0\"" Nov 12 22:36:29.577451 kubelet[2426]: E1112 22:36:29.577276 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:29.578426 containerd[1573]: time="2024-11-12T22:36:29.578403628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1b7e51e6237d12e7cf0bb84e40c522c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a37aa4b7c2f5db28c45a6db33512b097ed7d7b9034685651c63b40d1df394b1\"" Nov 12 22:36:29.579152 kubelet[2426]: E1112 22:36:29.579133 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:29.581945 containerd[1573]: time="2024-11-12T22:36:29.581906905Z" level=info msg="CreateContainer within sandbox \"e0e0e333190e83f2eea155e98cd6cb371ab659b542752170e62ed73ea1f09fa0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 22:36:29.582587 containerd[1573]: time="2024-11-12T22:36:29.582537643Z" level=info msg="CreateContainer within sandbox \"3a37aa4b7c2f5db28c45a6db33512b097ed7d7b9034685651c63b40d1df394b1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 22:36:29.584501 containerd[1573]: time="2024-11-12T22:36:29.584334728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e84fd9aa4fd36b690becbdb2ea985a3ca7baed9a138af3b18ed8c354858d0140\"" Nov 12 22:36:29.585072 kubelet[2426]: E1112 22:36:29.585048 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:29.587208 containerd[1573]: time="2024-11-12T22:36:29.587175026Z" level=info msg="CreateContainer within sandbox \"e84fd9aa4fd36b690becbdb2ea985a3ca7baed9a138af3b18ed8c354858d0140\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 22:36:29.599916 containerd[1573]: time="2024-11-12T22:36:29.599862255Z" level=info msg="CreateContainer within sandbox \"e0e0e333190e83f2eea155e98cd6cb371ab659b542752170e62ed73ea1f09fa0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a1ce101f2fec36d98591df9b7302715fcb426c2af9aac85478264d224f43d0de\"" Nov 12 
22:36:29.600561 containerd[1573]: time="2024-11-12T22:36:29.600528471Z" level=info msg="StartContainer for \"a1ce101f2fec36d98591df9b7302715fcb426c2af9aac85478264d224f43d0de\"" Nov 12 22:36:29.600837 containerd[1573]: time="2024-11-12T22:36:29.600809501Z" level=info msg="CreateContainer within sandbox \"3a37aa4b7c2f5db28c45a6db33512b097ed7d7b9034685651c63b40d1df394b1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c48a4e7a726247df1d2ef4632b57781760717623b195530d1537e3ae1fba0d5b\"" Nov 12 22:36:29.601606 containerd[1573]: time="2024-11-12T22:36:29.601459256Z" level=info msg="StartContainer for \"c48a4e7a726247df1d2ef4632b57781760717623b195530d1537e3ae1fba0d5b\"" Nov 12 22:36:29.603832 containerd[1573]: time="2024-11-12T22:36:29.603780165Z" level=info msg="CreateContainer within sandbox \"e84fd9aa4fd36b690becbdb2ea985a3ca7baed9a138af3b18ed8c354858d0140\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f9851acd858bdc8031172be9b8f4ec1ec8df5354847b5cfc181ac8f59c584370\"" Nov 12 22:36:29.604174 containerd[1573]: time="2024-11-12T22:36:29.604106101Z" level=info msg="StartContainer for \"f9851acd858bdc8031172be9b8f4ec1ec8df5354847b5cfc181ac8f59c584370\"" Nov 12 22:36:29.665453 containerd[1573]: time="2024-11-12T22:36:29.665344155Z" level=info msg="StartContainer for \"a1ce101f2fec36d98591df9b7302715fcb426c2af9aac85478264d224f43d0de\" returns successfully" Nov 12 22:36:29.666227 containerd[1573]: time="2024-11-12T22:36:29.665608884Z" level=info msg="StartContainer for \"f9851acd858bdc8031172be9b8f4ec1ec8df5354847b5cfc181ac8f59c584370\" returns successfully" Nov 12 22:36:29.687884 containerd[1573]: time="2024-11-12T22:36:29.687845635Z" level=info msg="StartContainer for \"c48a4e7a726247df1d2ef4632b57781760717623b195530d1537e3ae1fba0d5b\" returns successfully" Nov 12 22:36:29.857391 kubelet[2426]: E1112 22:36:29.857355 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="1.6s" Nov 12 22:36:29.964082 kubelet[2426]: I1112 22:36:29.963304 2426 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:36:30.487443 kubelet[2426]: E1112 22:36:30.487415 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:30.490970 kubelet[2426]: E1112 22:36:30.489551 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:30.491725 kubelet[2426]: E1112 22:36:30.491706 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:31.276853 kubelet[2426]: I1112 22:36:31.276690 2426 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 22:36:31.291674 kubelet[2426]: E1112 22:36:31.291632 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 22:36:31.392186 kubelet[2426]: E1112 22:36:31.392148 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 22:36:31.492293 kubelet[2426]: E1112 
22:36:31.492274 2426 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 22:36:31.493012 kubelet[2426]: E1112 22:36:31.492992 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:32.450047 kubelet[2426]: I1112 22:36:32.449792 2426 apiserver.go:52] "Watching apiserver" Nov 12 22:36:32.455304 kubelet[2426]: I1112 22:36:32.455269 2426 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 22:36:33.459948 kubelet[2426]: E1112 22:36:33.459858 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:33.494723 kubelet[2426]: E1112 22:36:33.494688 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:33.877267 systemd[1]: Reloading requested from client PID 2706 ('systemctl') (unit session-7.scope)... Nov 12 22:36:33.877281 systemd[1]: Reloading... Nov 12 22:36:33.932995 zram_generator::config[2746]: No configuration found. Nov 12 22:36:34.096934 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:36:34.150539 systemd[1]: Reloading finished in 272 ms. Nov 12 22:36:34.174114 kubelet[2426]: I1112 22:36:34.174029 2426 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:36:34.174155 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:36:34.183306 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 22:36:34.183584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:36:34.193410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:36:34.278150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:36:34.281484 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:36:34.322747 kubelet[2797]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:36:34.322747 kubelet[2797]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 22:36:34.322747 kubelet[2797]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
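The recurring dns.go:153 warning is the kubelet trimming the resolver list before applying it to pods; the applied line here keeps exactly three nameservers (1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. A rough Go sketch of that truncation over an /etc/resolv.conf-style file; the hardcoded limit of three is an assumption drawn from the applied line, not read from kubelet source:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const maxNameservers = 3 // limit implied by the "Nameserver limits exceeded" warning

        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var nameservers []string
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fields := strings.Fields(scanner.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                nameservers = append(nameservers, fields[1])
            }
        }
        if len(nameservers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded, omitting %d entries\n", len(nameservers)-maxNameservers)
            nameservers = nameservers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(nameservers, " "))
    }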
Nov 12 22:36:34.323163 kubelet[2797]: I1112 22:36:34.322784 2797 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 22:36:34.327189 kubelet[2797]: I1112 22:36:34.327164 2797 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 22:36:34.327189 kubelet[2797]: I1112 22:36:34.327189 2797 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 22:36:34.327603 kubelet[2797]: I1112 22:36:34.327419 2797 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 22:36:34.329012 kubelet[2797]: I1112 22:36:34.328994 2797 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 22:36:34.331081 kubelet[2797]: I1112 22:36:34.331020 2797 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:36:34.339029 kubelet[2797]: I1112 22:36:34.339012 2797 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 22:36:34.339440 kubelet[2797]: I1112 22:36:34.339414 2797 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 22:36:34.339601 kubelet[2797]: I1112 22:36:34.339573 2797 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 22:36:34.339714 kubelet[2797]: I1112 22:36:34.339603 2797 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 22:36:34.339714 kubelet[2797]: I1112 22:36:34.339613 2797 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 22:36:34.339714 kubelet[2797]: I1112 22:36:34.339645 2797 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:36:34.339793 kubelet[2797]: I1112 22:36:34.339731 2797 kubelet.go:396] "Attempting to sync node with API server" Nov 12 22:36:34.339793 kubelet[2797]: I1112 22:36:34.339745 2797 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 22:36:34.339793 kubelet[2797]: I1112 22:36:34.339765 2797 kubelet.go:312] "Adding apiserver pod source" Nov 12 
22:36:34.339793 kubelet[2797]: I1112 22:36:34.339778 2797 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 22:36:34.340643 kubelet[2797]: I1112 22:36:34.340584 2797 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 12 22:36:34.340787 kubelet[2797]: I1112 22:36:34.340761 2797 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 22:36:34.341205 kubelet[2797]: I1112 22:36:34.341131 2797 server.go:1256] "Started kubelet" Nov 12 22:36:34.346236 kubelet[2797]: I1112 22:36:34.341841 2797 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 22:36:34.346236 kubelet[2797]: I1112 22:36:34.342037 2797 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 22:36:34.346236 kubelet[2797]: I1112 22:36:34.342116 2797 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 22:36:34.346236 kubelet[2797]: I1112 22:36:34.342779 2797 server.go:461] "Adding debug handlers to kubelet server" Nov 12 22:36:34.346236 kubelet[2797]: I1112 22:36:34.343931 2797 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 22:36:34.353508 kubelet[2797]: E1112 22:36:34.351950 2797 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 22:36:34.353508 kubelet[2797]: I1112 22:36:34.352018 2797 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 22:36:34.353508 kubelet[2797]: I1112 22:36:34.352105 2797 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 22:36:34.353508 kubelet[2797]: I1112 22:36:34.352224 2797 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 22:36:34.356536 kubelet[2797]: I1112 22:36:34.355361 2797 factory.go:221] Registration of the systemd container factory successfully Nov 12 22:36:34.356536 kubelet[2797]: I1112 22:36:34.355445 2797 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 22:36:34.358088 sudo[2814]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 12 22:36:34.359091 sudo[2814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 12 22:36:34.361977 kubelet[2797]: E1112 22:36:34.359475 2797 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 22:36:34.361977 kubelet[2797]: I1112 22:36:34.359581 2797 factory.go:221] Registration of the containerd container factory successfully Nov 12 22:36:34.373081 kubelet[2797]: I1112 22:36:34.372956 2797 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 22:36:34.374127 kubelet[2797]: I1112 22:36:34.374106 2797 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 22:36:34.374195 kubelet[2797]: I1112 22:36:34.374179 2797 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:36:34.374288 kubelet[2797]: I1112 22:36:34.374202 2797 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 22:36:34.374288 kubelet[2797]: E1112 22:36:34.374270 2797 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:36:34.404004 kubelet[2797]: I1112 22:36:34.403915 2797 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 22:36:34.404004 kubelet[2797]: I1112 22:36:34.403938 2797 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 22:36:34.404004 kubelet[2797]: I1112 22:36:34.403955 2797 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:36:34.404188 kubelet[2797]: I1112 22:36:34.404161 2797 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 22:36:34.404188 kubelet[2797]: I1112 22:36:34.404188 2797 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 22:36:34.404243 kubelet[2797]: I1112 22:36:34.404197 2797 policy_none.go:49] "None policy: Start" Nov 12 22:36:34.405647 kubelet[2797]: I1112 22:36:34.405622 2797 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:36:34.405647 kubelet[2797]: I1112 22:36:34.405650 2797 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:36:34.405853 kubelet[2797]: I1112 22:36:34.405819 2797 state_mem.go:75] "Updated machine memory state" Nov 12 22:36:34.407203 kubelet[2797]: I1112 22:36:34.406949 2797 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 22:36:34.407203 kubelet[2797]: I1112 22:36:34.407169 2797 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 22:36:34.455939 kubelet[2797]: I1112 22:36:34.455909 2797 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:36:34.462711 kubelet[2797]: I1112 22:36:34.462683 2797 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 22:36:34.462785 kubelet[2797]: I1112 22:36:34.462752 2797 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 22:36:34.475046 kubelet[2797]: I1112 22:36:34.475011 2797 topology_manager.go:215] "Topology Admit Handler" podUID="1b7e51e6237d12e7cf0bb84e40c522c3" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 22:36:34.475176 kubelet[2797]: I1112 22:36:34.475158 2797 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 22:36:34.475233 kubelet[2797]: I1112 22:36:34.475220 2797 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 22:36:34.479937 kubelet[2797]: E1112 22:36:34.479904 2797 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 22:36:34.553478 kubelet[2797]: I1112 22:36:34.553442 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b7e51e6237d12e7cf0bb84e40c522c3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b7e51e6237d12e7cf0bb84e40c522c3\") " 
pod="kube-system/kube-apiserver-localhost" Nov 12 22:36:34.553559 kubelet[2797]: I1112 22:36:34.553527 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:36:34.553559 kubelet[2797]: I1112 22:36:34.553550 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 22:36:34.554369 kubelet[2797]: I1112 22:36:34.553636 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:36:34.554369 kubelet[2797]: I1112 22:36:34.553665 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b7e51e6237d12e7cf0bb84e40c522c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b7e51e6237d12e7cf0bb84e40c522c3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:36:34.554369 kubelet[2797]: I1112 22:36:34.553697 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b7e51e6237d12e7cf0bb84e40c522c3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1b7e51e6237d12e7cf0bb84e40c522c3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:36:34.554369 kubelet[2797]: I1112 22:36:34.553717 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:36:34.554369 kubelet[2797]: I1112 22:36:34.553754 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:36:34.554529 kubelet[2797]: I1112 22:36:34.553775 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:36:34.781689 kubelet[2797]: E1112 22:36:34.781447 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:34.781689 kubelet[2797]: E1112 22:36:34.781479 
2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:34.781852 kubelet[2797]: E1112 22:36:34.781833 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:34.793289 sudo[2814]: pam_unix(sudo:session): session closed for user root Nov 12 22:36:35.340265 kubelet[2797]: I1112 22:36:35.340205 2797 apiserver.go:52] "Watching apiserver" Nov 12 22:36:35.352712 kubelet[2797]: I1112 22:36:35.352680 2797 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 22:36:35.384336 kubelet[2797]: E1112 22:36:35.384300 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:35.391838 kubelet[2797]: E1112 22:36:35.391817 2797 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 22:36:35.392956 kubelet[2797]: E1112 22:36:35.392937 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:35.395984 kubelet[2797]: E1112 22:36:35.393750 2797 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 12 22:36:35.395984 kubelet[2797]: E1112 22:36:35.394132 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:35.428504 kubelet[2797]: I1112 22:36:35.428462 2797 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.428404004 podStartE2EDuration="2.428404004s" podCreationTimestamp="2024-11-12 22:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:36:35.426570852 +0000 UTC m=+1.141872396" watchObservedRunningTime="2024-11-12 22:36:35.428404004 +0000 UTC m=+1.143705508" Nov 12 22:36:35.428636 kubelet[2797]: I1112 22:36:35.428574 2797 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.428557723 podStartE2EDuration="1.428557723s" podCreationTimestamp="2024-11-12 22:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:36:35.417038723 +0000 UTC m=+1.132340307" watchObservedRunningTime="2024-11-12 22:36:35.428557723 +0000 UTC m=+1.143859267" Nov 12 22:36:35.434858 kubelet[2797]: I1112 22:36:35.434807 2797 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.434781718 podStartE2EDuration="1.434781718s" podCreationTimestamp="2024-11-12 22:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:36:35.433624009 +0000 UTC m=+1.148925633" 
watchObservedRunningTime="2024-11-12 22:36:35.434781718 +0000 UTC m=+1.150083262" Nov 12 22:36:36.346144 sudo[1779]: pam_unix(sudo:session): session closed for user root Nov 12 22:36:36.347807 sshd[1778]: Connection closed by 10.0.0.1 port 36110 Nov 12 22:36:36.348183 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Nov 12 22:36:36.351508 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:36110.service: Deactivated successfully. Nov 12 22:36:36.353336 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 22:36:36.354149 systemd-logind[1547]: Session 7 logged out. Waiting for processes to exit. Nov 12 22:36:36.355135 systemd-logind[1547]: Removed session 7. Nov 12 22:36:36.386052 kubelet[2797]: E1112 22:36:36.386013 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:36.386618 kubelet[2797]: E1112 22:36:36.386540 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:37.387723 kubelet[2797]: E1112 22:36:37.387686 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:37.659714 kubelet[2797]: E1112 22:36:37.659619 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:40.659281 kubelet[2797]: E1112 22:36:40.659206 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:41.392777 kubelet[2797]: E1112 22:36:41.392744 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:42.393680 kubelet[2797]: E1112 22:36:42.393641 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:44.915686 kubelet[2797]: E1112 22:36:44.915647 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:47.667539 kubelet[2797]: E1112 22:36:47.667496 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:49.172344 kubelet[2797]: I1112 22:36:49.172314 2797 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 22:36:49.172817 kubelet[2797]: I1112 22:36:49.172799 2797 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 22:36:49.172843 containerd[1573]: time="2024-11-12T22:36:49.172627647Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
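The entries just above push the pod CIDR 192.168.0.0/24 into the runtime through CRI (originalPodCIDR was empty), after which containerd waits for a CNI config to appear. A small, purely illustrative Go check of that prefix and the address space it gives the node's pods:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Pod CIDR reported by kubelet_network.go in the log above.
        prefix, err := netip.ParsePrefix("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        hostBits := 32 - prefix.Bits()
        fmt.Printf("pod CIDR %s: %d addresses available for pod IPs\n",
            prefix.Masked(), 1<<hostBits)
    }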
Nov 12 22:36:49.854629 kubelet[2797]: I1112 22:36:49.854463 2797 topology_manager.go:215] "Topology Admit Handler" podUID="7e6fde54-b3d5-4650-a64f-2a0a0d515a77" podNamespace="kube-system" podName="kube-proxy-8stbh" Nov 12 22:36:49.857607 kubelet[2797]: I1112 22:36:49.857393 2797 topology_manager.go:215] "Topology Admit Handler" podUID="15fe2263-edd4-4a0a-af2b-9ddcbc189193" podNamespace="kube-system" podName="cilium-kr2w2" Nov 12 22:36:50.051791 kubelet[2797]: I1112 22:36:50.051700 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-host-proc-sys-kernel\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.051791 kubelet[2797]: I1112 22:36:50.051747 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e6fde54-b3d5-4650-a64f-2a0a0d515a77-lib-modules\") pod \"kube-proxy-8stbh\" (UID: \"7e6fde54-b3d5-4650-a64f-2a0a0d515a77\") " pod="kube-system/kube-proxy-8stbh" Nov 12 22:36:50.051791 kubelet[2797]: I1112 22:36:50.051767 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cilium-run\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.051791 kubelet[2797]: I1112 22:36:50.051786 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cni-path\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.052014 kubelet[2797]: I1112 22:36:50.051809 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fx29\" (UniqueName: \"kubernetes.io/projected/7e6fde54-b3d5-4650-a64f-2a0a0d515a77-kube-api-access-2fx29\") pod \"kube-proxy-8stbh\" (UID: \"7e6fde54-b3d5-4650-a64f-2a0a0d515a77\") " pod="kube-system/kube-proxy-8stbh" Nov 12 22:36:50.052014 kubelet[2797]: I1112 22:36:50.051829 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-host-proc-sys-net\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.052014 kubelet[2797]: I1112 22:36:50.051849 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-bpf-maps\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.052014 kubelet[2797]: I1112 22:36:50.051890 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cilium-cgroup\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.052014 kubelet[2797]: I1112 22:36:50.051974 2797 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cilium-config-path\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.052122 kubelet[2797]: I1112 22:36:50.052010 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vj59\" (UniqueName: \"kubernetes.io/projected/15fe2263-edd4-4a0a-af2b-9ddcbc189193-kube-api-access-5vj59\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.052122 kubelet[2797]: I1112 22:36:50.052064 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-hostproc\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.052122 kubelet[2797]: I1112 22:36:50.052088 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-lib-modules\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.052179 kubelet[2797]: I1112 22:36:50.052124 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-etc-cni-netd\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.052179 kubelet[2797]: I1112 22:36:50.052149 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e6fde54-b3d5-4650-a64f-2a0a0d515a77-kube-proxy\") pod \"kube-proxy-8stbh\" (UID: \"7e6fde54-b3d5-4650-a64f-2a0a0d515a77\") " pod="kube-system/kube-proxy-8stbh" Nov 12 22:36:50.052179 kubelet[2797]: I1112 22:36:50.052178 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-xtables-lock\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.052280 kubelet[2797]: I1112 22:36:50.052207 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15fe2263-edd4-4a0a-af2b-9ddcbc189193-hubble-tls\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.052280 kubelet[2797]: I1112 22:36:50.052227 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e6fde54-b3d5-4650-a64f-2a0a0d515a77-xtables-lock\") pod \"kube-proxy-8stbh\" (UID: \"7e6fde54-b3d5-4650-a64f-2a0a0d515a77\") " pod="kube-system/kube-proxy-8stbh" Nov 12 22:36:50.052280 kubelet[2797]: I1112 22:36:50.052249 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/15fe2263-edd4-4a0a-af2b-9ddcbc189193-clustermesh-secrets\") pod \"cilium-kr2w2\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " pod="kube-system/cilium-kr2w2" Nov 12 22:36:50.121979 update_engine[1559]: I20241112 22:36:50.121819 1559 update_attempter.cc:509] Updating boot flags... Nov 12 22:36:50.149072 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2882) Nov 12 22:36:50.188990 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2886) Nov 12 22:36:50.216447 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2886) Nov 12 22:36:50.302662 kubelet[2797]: I1112 22:36:50.301858 2797 topology_manager.go:215] "Topology Admit Handler" podUID="b8b1ebc0-06d2-4557-9b8a-2e8858a2e220" podNamespace="kube-system" podName="cilium-operator-5cc964979-rlhz5" Nov 12 22:36:50.357413 kubelet[2797]: I1112 22:36:50.357379 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8b1ebc0-06d2-4557-9b8a-2e8858a2e220-cilium-config-path\") pod \"cilium-operator-5cc964979-rlhz5\" (UID: \"b8b1ebc0-06d2-4557-9b8a-2e8858a2e220\") " pod="kube-system/cilium-operator-5cc964979-rlhz5" Nov 12 22:36:50.357413 kubelet[2797]: I1112 22:36:50.357423 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l56d\" (UniqueName: \"kubernetes.io/projected/b8b1ebc0-06d2-4557-9b8a-2e8858a2e220-kube-api-access-5l56d\") pod \"cilium-operator-5cc964979-rlhz5\" (UID: \"b8b1ebc0-06d2-4557-9b8a-2e8858a2e220\") " pod="kube-system/cilium-operator-5cc964979-rlhz5" Nov 12 22:36:50.468333 kubelet[2797]: E1112 22:36:50.467817 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:50.469702 kubelet[2797]: E1112 22:36:50.469675 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:50.477260 containerd[1573]: time="2024-11-12T22:36:50.477217591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8stbh,Uid:7e6fde54-b3d5-4650-a64f-2a0a0d515a77,Namespace:kube-system,Attempt:0,}" Nov 12 22:36:50.478019 containerd[1573]: time="2024-11-12T22:36:50.477589609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kr2w2,Uid:15fe2263-edd4-4a0a-af2b-9ddcbc189193,Namespace:kube-system,Attempt:0,}" Nov 12 22:36:50.513000 containerd[1573]: time="2024-11-12T22:36:50.512789655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:36:50.513000 containerd[1573]: time="2024-11-12T22:36:50.512845017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:36:50.513000 containerd[1573]: time="2024-11-12T22:36:50.512868139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:36:50.513900 containerd[1573]: time="2024-11-12T22:36:50.513837064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:36:50.516959 containerd[1573]: time="2024-11-12T22:36:50.516881246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:36:50.516959 containerd[1573]: time="2024-11-12T22:36:50.516930769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:36:50.516959 containerd[1573]: time="2024-11-12T22:36:50.516941849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:36:50.517079 containerd[1573]: time="2024-11-12T22:36:50.517035693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:36:50.545837 containerd[1573]: time="2024-11-12T22:36:50.545709955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8stbh,Uid:7e6fde54-b3d5-4650-a64f-2a0a0d515a77,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7d08016cca429f15e9a19307f4196a7a642e3d07b85289a6b0bcefdfa599c6f\"" Nov 12 22:36:50.546181 containerd[1573]: time="2024-11-12T22:36:50.546059611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kr2w2,Uid:15fe2263-edd4-4a0a-af2b-9ddcbc189193,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\"" Nov 12 22:36:50.551764 kubelet[2797]: E1112 22:36:50.551725 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:50.552070 kubelet[2797]: E1112 22:36:50.551725 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:50.556264 containerd[1573]: time="2024-11-12T22:36:50.556199045Z" level=info msg="CreateContainer within sandbox \"d7d08016cca429f15e9a19307f4196a7a642e3d07b85289a6b0bcefdfa599c6f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 22:36:50.558639 containerd[1573]: time="2024-11-12T22:36:50.558577156Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 12 22:36:50.581647 containerd[1573]: time="2024-11-12T22:36:50.581600073Z" level=info msg="CreateContainer within sandbox \"d7d08016cca429f15e9a19307f4196a7a642e3d07b85289a6b0bcefdfa599c6f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"63543670a9b6947faf7c09b516401cb23cb7dcf102e05208a6000d793eb00f04\"" Nov 12 22:36:50.585210 containerd[1573]: time="2024-11-12T22:36:50.585175961Z" level=info msg="StartContainer for \"63543670a9b6947faf7c09b516401cb23cb7dcf102e05208a6000d793eb00f04\"" Nov 12 22:36:50.608178 kubelet[2797]: E1112 22:36:50.607363 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:50.608271 containerd[1573]: time="2024-11-12T22:36:50.607827860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-rlhz5,Uid:b8b1ebc0-06d2-4557-9b8a-2e8858a2e220,Namespace:kube-system,Attempt:0,}" Nov 12 22:36:50.630535 containerd[1573]: time="2024-11-12T22:36:50.630456958Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:36:50.630655 containerd[1573]: time="2024-11-12T22:36:50.630514721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:36:50.630655 containerd[1573]: time="2024-11-12T22:36:50.630529842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:36:50.630655 containerd[1573]: time="2024-11-12T22:36:50.630610766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:36:50.635408 containerd[1573]: time="2024-11-12T22:36:50.635373908Z" level=info msg="StartContainer for \"63543670a9b6947faf7c09b516401cb23cb7dcf102e05208a6000d793eb00f04\" returns successfully" Nov 12 22:36:50.674080 containerd[1573]: time="2024-11-12T22:36:50.673982954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-rlhz5,Uid:b8b1ebc0-06d2-4557-9b8a-2e8858a2e220,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cca5624bc6b2becfe42aadaed573afe8855cee231be8112644ae84ac64da9a2\"" Nov 12 22:36:50.675478 kubelet[2797]: E1112 22:36:50.675377 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:51.409338 kubelet[2797]: E1112 22:36:51.409298 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:51.418009 kubelet[2797]: I1112 22:36:51.417759 2797 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8stbh" podStartSLOduration=2.41771741 podStartE2EDuration="2.41771741s" podCreationTimestamp="2024-11-12 22:36:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:36:51.417018179 +0000 UTC m=+17.132319723" watchObservedRunningTime="2024-11-12 22:36:51.41771741 +0000 UTC m=+17.133018954" Nov 12 22:36:55.677436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660253384.mount: Deactivated successfully. 
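The Cilium image is pulled by tag plus digest (quay.io/cilium/cilium:v1.12.5@sha256:06ce2b…), and the pull that completes a few entries below records an empty repo tag with only the digest retained. A quick Go sketch splitting such a reference into repository, tag, and digest; this is a naive split for illustration, not containerd's own reference parser:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef naively splits "repo:tag@digest" style image references.
    func splitRef(ref string) (repo, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        // Only treat a ':' after the last '/' as a tag separator,
        // so registry ports like host:5000 are left alone.
        slash := strings.LastIndex(ref, "/")
        if i := strings.LastIndex(ref, ":"); i > slash {
            repo, tag = ref[:i], ref[i+1:]
        } else {
            repo = ref
        }
        return repo, tag, digest
    }

    func main() {
        ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
        repo, tag, digest := splitRef(ref)
        fmt.Println("repo:  ", repo)
        fmt.Println("tag:   ", tag)
        fmt.Println("digest:", digest)
    }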
Nov 12 22:36:56.915444 containerd[1573]: time="2024-11-12T22:36:56.915338546Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:56.916030 containerd[1573]: time="2024-11-12T22:36:56.915979328Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651494" Nov 12 22:36:56.916646 containerd[1573]: time="2024-11-12T22:36:56.916576149Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:36:56.918708 containerd[1573]: time="2024-11-12T22:36:56.918677983Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.360070986s" Nov 12 22:36:56.918907 containerd[1573]: time="2024-11-12T22:36:56.918802627Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 12 22:36:56.921091 containerd[1573]: time="2024-11-12T22:36:56.920958663Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 12 22:36:56.926531 containerd[1573]: time="2024-11-12T22:36:56.925852715Z" level=info msg="CreateContainer within sandbox \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 22:36:56.938396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3385702172.mount: Deactivated successfully. 
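The completed pull above reports an image size of 157,636,062 bytes fetched in 6.360070986s. A one-off Go calculation of the effective throughput those two log values imply (on the order of 25 MB/s):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values reported by containerd for the cilium image pull above.
        const bytesPulled = 157636062
        elapsed, err := time.ParseDuration("6.360070986s")
        if err != nil {
            panic(err)
        }
        bytesPerSec := float64(bytesPulled) / elapsed.Seconds()
        fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", bytesPerSec/1e6, bytesPerSec/(1<<20))
    }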
Nov 12 22:36:56.938987 containerd[1573]: time="2024-11-12T22:36:56.938760008Z" level=info msg="CreateContainer within sandbox \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987\"" Nov 12 22:36:56.939326 containerd[1573]: time="2024-11-12T22:36:56.939227465Z" level=info msg="StartContainer for \"9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987\"" Nov 12 22:36:56.982048 containerd[1573]: time="2024-11-12T22:36:56.980581037Z" level=info msg="StartContainer for \"9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987\" returns successfully" Nov 12 22:36:57.195339 containerd[1573]: time="2024-11-12T22:36:57.190058064Z" level=info msg="shim disconnected" id=9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987 namespace=k8s.io Nov 12 22:36:57.195339 containerd[1573]: time="2024-11-12T22:36:57.194609817Z" level=warning msg="cleaning up after shim disconnected" id=9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987 namespace=k8s.io Nov 12 22:36:57.195339 containerd[1573]: time="2024-11-12T22:36:57.194622097Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:36:57.434211 kubelet[2797]: E1112 22:36:57.434160 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:57.437733 containerd[1573]: time="2024-11-12T22:36:57.437016398Z" level=info msg="CreateContainer within sandbox \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 22:36:57.448194 containerd[1573]: time="2024-11-12T22:36:57.447955846Z" level=info msg="CreateContainer within sandbox \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036\"" Nov 12 22:36:57.449747 containerd[1573]: time="2024-11-12T22:36:57.449187607Z" level=info msg="StartContainer for \"eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036\"" Nov 12 22:36:57.495141 containerd[1573]: time="2024-11-12T22:36:57.495047707Z" level=info msg="StartContainer for \"eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036\" returns successfully" Nov 12 22:36:57.507284 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 22:36:57.507651 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:36:57.507711 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:36:57.513238 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:36:57.524908 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
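The systemd-sysctl restart recorded here coincides with Cilium's apply-sysctl-overwrites init container, which rewrites kernel parameters under /proc/sys before the agent starts. A minimal sketch of writing one sysctl that way; the specific key (rp_filter) and value are illustrative assumptions, not read from this log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // writeSysctl writes a value under /proc/sys, e.g. key "net.ipv4.conf.all.rp_filter".
    func writeSysctl(key, value string) error {
        path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
        return os.WriteFile(path, []byte(value), 0o644)
    }

    func main() {
        // Illustrative only: one of the kinds of overrides such an init container applies.
        if err := writeSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
            fmt.Fprintln(os.Stderr, "sysctl write failed:", err)
            os.Exit(1)
        }
        fmt.Println("sysctl applied")
    }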
Nov 12 22:36:57.528914 containerd[1573]: time="2024-11-12T22:36:57.528855763Z" level=info msg="shim disconnected" id=eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036 namespace=k8s.io Nov 12 22:36:57.528914 containerd[1573]: time="2024-11-12T22:36:57.528911004Z" level=warning msg="cleaning up after shim disconnected" id=eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036 namespace=k8s.io Nov 12 22:36:57.529031 containerd[1573]: time="2024-11-12T22:36:57.528919245Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:36:57.936186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987-rootfs.mount: Deactivated successfully. Nov 12 22:36:58.436044 kubelet[2797]: E1112 22:36:58.435983 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:58.438088 containerd[1573]: time="2024-11-12T22:36:58.438049030Z" level=info msg="CreateContainer within sandbox \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 22:36:58.452957 containerd[1573]: time="2024-11-12T22:36:58.452918668Z" level=info msg="CreateContainer within sandbox \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854\"" Nov 12 22:36:58.453607 containerd[1573]: time="2024-11-12T22:36:58.453543248Z" level=info msg="StartContainer for \"2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854\"" Nov 12 22:36:58.501137 containerd[1573]: time="2024-11-12T22:36:58.501104817Z" level=info msg="StartContainer for \"2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854\" returns successfully" Nov 12 22:36:58.537631 containerd[1573]: time="2024-11-12T22:36:58.537581349Z" level=info msg="shim disconnected" id=2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854 namespace=k8s.io Nov 12 22:36:58.537631 containerd[1573]: time="2024-11-12T22:36:58.537629031Z" level=warning msg="cleaning up after shim disconnected" id=2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854 namespace=k8s.io Nov 12 22:36:58.537631 containerd[1573]: time="2024-11-12T22:36:58.537636791Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:36:58.936126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854-rootfs.mount: Deactivated successfully. 
Nov 12 22:36:59.439428 kubelet[2797]: E1112 22:36:59.439359 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:36:59.442506 containerd[1573]: time="2024-11-12T22:36:59.442227469Z" level=info msg="CreateContainer within sandbox \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 22:36:59.457599 containerd[1573]: time="2024-11-12T22:36:59.457549821Z" level=info msg="CreateContainer within sandbox \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343\"" Nov 12 22:36:59.458178 containerd[1573]: time="2024-11-12T22:36:59.458148759Z" level=info msg="StartContainer for \"0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343\"" Nov 12 22:36:59.503472 containerd[1573]: time="2024-11-12T22:36:59.503424833Z" level=info msg="StartContainer for \"0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343\" returns successfully" Nov 12 22:36:59.520371 containerd[1573]: time="2024-11-12T22:36:59.520310313Z" level=info msg="shim disconnected" id=0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343 namespace=k8s.io Nov 12 22:36:59.520371 containerd[1573]: time="2024-11-12T22:36:59.520360395Z" level=warning msg="cleaning up after shim disconnected" id=0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343 namespace=k8s.io Nov 12 22:36:59.520371 containerd[1573]: time="2024-11-12T22:36:59.520368395Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:36:59.936134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343-rootfs.mount: Deactivated successfully. Nov 12 22:37:00.375229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1570790081.mount: Deactivated successfully. Nov 12 22:37:00.443442 kubelet[2797]: E1112 22:37:00.443218 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:00.446821 containerd[1573]: time="2024-11-12T22:37:00.446753832Z" level=info msg="CreateContainer within sandbox \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 22:37:00.459579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1567192337.mount: Deactivated successfully. 
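For reference, not part of the journal: the mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state containers created above match the names of Cilium's init containers, run in sequence before the cilium-agent container. A client-go sketch that lists those statuses from the API server; the kubeconfig path and the k8s-app=cilium label selector are assumptions, not taken from this log.

```go
// Sketch only: print init container statuses for Cilium pods in kube-system.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the node in question.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Conventional Cilium DaemonSet label; an assumption here.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=cilium"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		for _, s := range p.Status.InitContainerStatuses {
			fmt.Printf("%s/%s init=%s ready=%v restarts=%d\n",
				p.Namespace, p.Name, s.Name, s.Ready, s.RestartCount)
		}
	}
}
```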
Nov 12 22:37:00.461012 containerd[1573]: time="2024-11-12T22:37:00.460957612Z" level=info msg="CreateContainer within sandbox \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\"" Nov 12 22:37:00.462058 containerd[1573]: time="2024-11-12T22:37:00.461998802Z" level=info msg="StartContainer for \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\"" Nov 12 22:37:00.509513 containerd[1573]: time="2024-11-12T22:37:00.509408162Z" level=info msg="StartContainer for \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\" returns successfully" Nov 12 22:37:00.635258 kubelet[2797]: I1112 22:37:00.635167 2797 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 22:37:00.660024 kubelet[2797]: I1112 22:37:00.659954 2797 topology_manager.go:215] "Topology Admit Handler" podUID="6bc6f948-c68c-4c50-a2f3-b22ad31a7d69" podNamespace="kube-system" podName="coredns-76f75df574-xm76h" Nov 12 22:37:00.660176 kubelet[2797]: I1112 22:37:00.660164 2797 topology_manager.go:215] "Topology Admit Handler" podUID="63f48b84-f6c1-437b-ba18-9aea36da811f" podNamespace="kube-system" podName="coredns-76f75df574-lmtdr" Nov 12 22:37:00.749904 containerd[1573]: time="2024-11-12T22:37:00.749829499Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:37:00.750640 containerd[1573]: time="2024-11-12T22:37:00.750484079Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138302" Nov 12 22:37:00.751436 containerd[1573]: time="2024-11-12T22:37:00.751375865Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:37:00.752849 containerd[1573]: time="2024-11-12T22:37:00.752810627Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.831793002s" Nov 12 22:37:00.752849 containerd[1573]: time="2024-11-12T22:37:00.752848268Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 12 22:37:00.755679 containerd[1573]: time="2024-11-12T22:37:00.755629431Z" level=info msg="CreateContainer within sandbox \"6cca5624bc6b2becfe42aadaed573afe8855cee231be8112644ae84ac64da9a2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 12 22:37:00.765891 containerd[1573]: time="2024-11-12T22:37:00.765851492Z" level=info msg="CreateContainer within sandbox \"6cca5624bc6b2becfe42aadaed573afe8855cee231be8112644ae84ac64da9a2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\"" Nov 12 22:37:00.766273 
containerd[1573]: time="2024-11-12T22:37:00.766240104Z" level=info msg="StartContainer for \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\"" Nov 12 22:37:00.815875 containerd[1573]: time="2024-11-12T22:37:00.815828048Z" level=info msg="StartContainer for \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\" returns successfully" Nov 12 22:37:00.827400 kubelet[2797]: I1112 22:37:00.827356 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bc6f948-c68c-4c50-a2f3-b22ad31a7d69-config-volume\") pod \"coredns-76f75df574-xm76h\" (UID: \"6bc6f948-c68c-4c50-a2f3-b22ad31a7d69\") " pod="kube-system/coredns-76f75df574-xm76h" Nov 12 22:37:00.827540 kubelet[2797]: I1112 22:37:00.827469 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx65w\" (UniqueName: \"kubernetes.io/projected/63f48b84-f6c1-437b-ba18-9aea36da811f-kube-api-access-mx65w\") pod \"coredns-76f75df574-lmtdr\" (UID: \"63f48b84-f6c1-437b-ba18-9aea36da811f\") " pod="kube-system/coredns-76f75df574-lmtdr" Nov 12 22:37:00.827540 kubelet[2797]: I1112 22:37:00.827512 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63f48b84-f6c1-437b-ba18-9aea36da811f-config-volume\") pod \"coredns-76f75df574-lmtdr\" (UID: \"63f48b84-f6c1-437b-ba18-9aea36da811f\") " pod="kube-system/coredns-76f75df574-lmtdr" Nov 12 22:37:00.827540 kubelet[2797]: I1112 22:37:00.827537 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjggr\" (UniqueName: \"kubernetes.io/projected/6bc6f948-c68c-4c50-a2f3-b22ad31a7d69-kube-api-access-wjggr\") pod \"coredns-76f75df574-xm76h\" (UID: \"6bc6f948-c68c-4c50-a2f3-b22ad31a7d69\") " pod="kube-system/coredns-76f75df574-xm76h" Nov 12 22:37:00.860239 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:37220.service - OpenSSH per-connection server daemon (10.0.0.1:37220). Nov 12 22:37:00.911081 sshd[3567]: Accepted publickey for core from 10.0.0.1 port 37220 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:00.913279 sshd-session[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:00.919527 systemd-logind[1547]: New session 8 of user core. Nov 12 22:37:00.926332 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 12 22:37:00.967858 kubelet[2797]: E1112 22:37:00.967828 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:00.969206 kubelet[2797]: E1112 22:37:00.969186 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:00.970354 containerd[1573]: time="2024-11-12T22:37:00.970083961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lmtdr,Uid:63f48b84-f6c1-437b-ba18-9aea36da811f,Namespace:kube-system,Attempt:0,}" Nov 12 22:37:00.971103 containerd[1573]: time="2024-11-12T22:37:00.970210325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xm76h,Uid:6bc6f948-c68c-4c50-a2f3-b22ad31a7d69,Namespace:kube-system,Attempt:0,}" Nov 12 22:37:01.190263 sshd[3585]: Connection closed by 10.0.0.1 port 37220 Nov 12 22:37:01.191293 sshd-session[3567]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:01.196677 systemd-logind[1547]: Session 8 logged out. Waiting for processes to exit. Nov 12 22:37:01.197418 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:37220.service: Deactivated successfully. Nov 12 22:37:01.200580 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 22:37:01.202499 systemd-logind[1547]: Removed session 8. Nov 12 22:37:01.449669 kubelet[2797]: E1112 22:37:01.449343 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:01.457957 kubelet[2797]: E1112 22:37:01.457914 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:01.464350 kubelet[2797]: I1112 22:37:01.464291 2797 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-rlhz5" podStartSLOduration=1.388511077 podStartE2EDuration="11.464254919s" podCreationTimestamp="2024-11-12 22:36:50 +0000 UTC" firstStartedPulling="2024-11-12 22:36:50.677441476 +0000 UTC m=+16.392743020" lastFinishedPulling="2024-11-12 22:37:00.753185318 +0000 UTC m=+26.468486862" observedRunningTime="2024-11-12 22:37:01.46360122 +0000 UTC m=+27.178902724" watchObservedRunningTime="2024-11-12 22:37:01.464254919 +0000 UTC m=+27.179556463" Nov 12 22:37:01.481660 kubelet[2797]: I1112 22:37:01.481614 2797 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kr2w2" podStartSLOduration=6.117204805 podStartE2EDuration="12.481573049s" podCreationTimestamp="2024-11-12 22:36:49 +0000 UTC" firstStartedPulling="2024-11-12 22:36:50.556185805 +0000 UTC m=+16.271487349" lastFinishedPulling="2024-11-12 22:36:56.920554049 +0000 UTC m=+22.635855593" observedRunningTime="2024-11-12 22:37:01.481136477 +0000 UTC m=+27.196437981" watchObservedRunningTime="2024-11-12 22:37:01.481573049 +0000 UTC m=+27.196874593" Nov 12 22:37:02.463345 kubelet[2797]: E1112 22:37:02.463057 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:02.463739 kubelet[2797]: E1112 22:37:02.463408 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:03.465045 kubelet[2797]: E1112 22:37:03.464946 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:04.784319 systemd-networkd[1230]: cilium_host: Link UP Nov 12 22:37:04.784712 systemd-networkd[1230]: cilium_net: Link UP Nov 12 22:37:04.784887 systemd-networkd[1230]: cilium_net: Gained carrier Nov 12 22:37:04.785042 systemd-networkd[1230]: cilium_host: Gained carrier Nov 12 22:37:04.859990 systemd-networkd[1230]: cilium_vxlan: Link UP Nov 12 22:37:04.859997 systemd-networkd[1230]: cilium_vxlan: Gained carrier Nov 12 22:37:05.152032 kernel: NET: Registered PF_ALG protocol family Nov 12 22:37:05.501169 systemd-networkd[1230]: cilium_net: Gained IPv6LL Nov 12 22:37:05.690546 systemd-networkd[1230]: lxc_health: Link UP Nov 12 22:37:05.702884 systemd-networkd[1230]: cilium_host: Gained IPv6LL Nov 12 22:37:05.703165 systemd-networkd[1230]: lxc_health: Gained carrier Nov 12 22:37:05.885142 systemd-networkd[1230]: cilium_vxlan: Gained IPv6LL Nov 12 22:37:06.135053 systemd-networkd[1230]: lxcd8ef18617844: Link UP Nov 12 22:37:06.155072 kernel: eth0: renamed from tmpd61d7 Nov 12 22:37:06.159008 kernel: eth0: renamed from tmpa5934 Nov 12 22:37:06.165651 systemd-networkd[1230]: lxce97ae114f1eb: Link UP Nov 12 22:37:06.166183 systemd-networkd[1230]: lxce97ae114f1eb: Gained carrier Nov 12 22:37:06.167515 systemd-networkd[1230]: lxcd8ef18617844: Gained carrier Nov 12 22:37:06.196181 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:45068.service - OpenSSH per-connection server daemon (10.0.0.1:45068). Nov 12 22:37:06.277633 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 45068 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:06.278865 sshd-session[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:06.282786 systemd-logind[1547]: New session 9 of user core. Nov 12 22:37:06.293260 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 22:37:06.419959 sshd[4044]: Connection closed by 10.0.0.1 port 45068 Nov 12 22:37:06.421228 sshd-session[4038]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:06.424088 systemd-logind[1547]: Session 9 logged out. Waiting for processes to exit. Nov 12 22:37:06.424907 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:45068.service: Deactivated successfully. Nov 12 22:37:06.428447 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 22:37:06.429495 systemd-logind[1547]: Removed session 9. 
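For reference, not part of the journal: the systemd-networkd records above show the Cilium datapath devices (cilium_host, cilium_net, cilium_vxlan) and the per-pod lxc* veths coming up and gaining carrier. A sketch that inspects the same links over rtnetlink; the vishvananda/netlink package is an assumption (any rtnetlink binding would do), and it must run on the node itself.

```go
// Sketch only: list Cilium-managed network links and their operational state.
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range links {
		a := l.Attrs()
		// Only the device names seen in this journal: cilium_* and lxc*.
		if strings.HasPrefix(a.Name, "cilium_") || strings.HasPrefix(a.Name, "lxc") {
			fmt.Printf("%-20s type=%-8s state=%s mtu=%d\n", a.Name, l.Type(), a.OperState, a.MTU)
		}
	}
}
```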
Nov 12 22:37:06.496418 kubelet[2797]: E1112 22:37:06.496370 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:07.229098 systemd-networkd[1230]: lxcd8ef18617844: Gained IPv6LL Nov 12 22:37:07.293128 systemd-networkd[1230]: lxce97ae114f1eb: Gained IPv6LL Nov 12 22:37:07.472953 kubelet[2797]: E1112 22:37:07.472862 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:07.549215 systemd-networkd[1230]: lxc_health: Gained IPv6LL Nov 12 22:37:08.477944 kubelet[2797]: E1112 22:37:08.477913 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:09.591656 containerd[1573]: time="2024-11-12T22:37:09.591557097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:37:09.591656 containerd[1573]: time="2024-11-12T22:37:09.591614738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:37:09.591656 containerd[1573]: time="2024-11-12T22:37:09.591630339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:37:09.595420 containerd[1573]: time="2024-11-12T22:37:09.591745821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:37:09.602596 containerd[1573]: time="2024-11-12T22:37:09.602421207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:37:09.602596 containerd[1573]: time="2024-11-12T22:37:09.602488808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:37:09.602596 containerd[1573]: time="2024-11-12T22:37:09.602502528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:37:09.602776 containerd[1573]: time="2024-11-12T22:37:09.602694692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:37:09.616655 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:37:09.620431 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:37:09.640217 containerd[1573]: time="2024-11-12T22:37:09.640178205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xm76h,Uid:6bc6f948-c68c-4c50-a2f3-b22ad31a7d69,Namespace:kube-system,Attempt:0,} returns sandbox id \"d61d72be8163944671c9750e9addeb5d421e6e7846530080ee1b68a4fb6bec80\"" Nov 12 22:37:09.641126 kubelet[2797]: E1112 22:37:09.641011 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:09.644478 containerd[1573]: time="2024-11-12T22:37:09.644435295Z" level=info msg="CreateContainer within sandbox \"d61d72be8163944671c9750e9addeb5d421e6e7846530080ee1b68a4fb6bec80\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:37:09.648210 containerd[1573]: time="2024-11-12T22:37:09.648142613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lmtdr,Uid:63f48b84-f6c1-437b-ba18-9aea36da811f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a593438f86e2146546b71ae93443549c6a53167715eb5079a293b7d9107485dd\"" Nov 12 22:37:09.649376 kubelet[2797]: E1112 22:37:09.649351 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:09.652055 containerd[1573]: time="2024-11-12T22:37:09.652023255Z" level=info msg="CreateContainer within sandbox \"a593438f86e2146546b71ae93443549c6a53167715eb5079a293b7d9107485dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:37:09.661383 containerd[1573]: time="2024-11-12T22:37:09.661347172Z" level=info msg="CreateContainer within sandbox \"d61d72be8163944671c9750e9addeb5d421e6e7846530080ee1b68a4fb6bec80\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e8504860b2cdc3098cd521e39d7862f27943c7596b1d5187b3f50129aace157b\"" Nov 12 22:37:09.662224 containerd[1573]: time="2024-11-12T22:37:09.662185870Z" level=info msg="StartContainer for \"e8504860b2cdc3098cd521e39d7862f27943c7596b1d5187b3f50129aace157b\"" Nov 12 22:37:09.666354 containerd[1573]: time="2024-11-12T22:37:09.666320597Z" level=info msg="CreateContainer within sandbox \"a593438f86e2146546b71ae93443549c6a53167715eb5079a293b7d9107485dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd1936f37a5eef9186ee39c714f6ef0691618eb532f822f86c8b8054ddfb988c\"" Nov 12 22:37:09.668001 containerd[1573]: time="2024-11-12T22:37:09.667545543Z" level=info msg="StartContainer for \"bd1936f37a5eef9186ee39c714f6ef0691618eb532f822f86c8b8054ddfb988c\"" Nov 12 22:37:09.714947 containerd[1573]: time="2024-11-12T22:37:09.712205687Z" level=info msg="StartContainer for \"e8504860b2cdc3098cd521e39d7862f27943c7596b1d5187b3f50129aace157b\" returns successfully" Nov 12 22:37:09.719262 containerd[1573]: time="2024-11-12T22:37:09.716518778Z" level=info msg="StartContainer for \"bd1936f37a5eef9186ee39c714f6ef0691618eb532f822f86c8b8054ddfb988c\" returns successfully" Nov 12 22:37:10.481481 kubelet[2797]: E1112 22:37:10.481445 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:10.484225 kubelet[2797]: E1112 22:37:10.484167 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:10.493462 kubelet[2797]: I1112 22:37:10.493289 2797 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-lmtdr" podStartSLOduration=20.493244709 podStartE2EDuration="20.493244709s" podCreationTimestamp="2024-11-12 22:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:37:10.49277274 +0000 UTC m=+36.208074284" watchObservedRunningTime="2024-11-12 22:37:10.493244709 +0000 UTC m=+36.208546253" Nov 12 22:37:11.432189 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:45076.service - OpenSSH per-connection server daemon (10.0.0.1:45076). Nov 12 22:37:11.473735 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 45076 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:11.475096 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:11.478530 systemd-logind[1547]: New session 10 of user core. Nov 12 22:37:11.484845 kubelet[2797]: E1112 22:37:11.484781 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:11.484845 kubelet[2797]: E1112 22:37:11.484784 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:11.486258 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 22:37:11.598015 sshd[4244]: Connection closed by 10.0.0.1 port 45076 Nov 12 22:37:11.598200 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:11.601258 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:45076.service: Deactivated successfully. Nov 12 22:37:11.603259 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 22:37:11.603269 systemd-logind[1547]: Session 10 logged out. Waiting for processes to exit. Nov 12 22:37:11.604579 systemd-logind[1547]: Removed session 10. Nov 12 22:37:12.487074 kubelet[2797]: E1112 22:37:12.486996 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:12.487074 kubelet[2797]: E1112 22:37:12.487064 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:37:16.613221 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:51726.service - OpenSSH per-connection server daemon (10.0.0.1:51726). Nov 12 22:37:16.651528 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 51726 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:16.652813 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:16.657089 systemd-logind[1547]: New session 11 of user core. Nov 12 22:37:16.666302 systemd[1]: Started session-11.scope - Session 11 of User core. 
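For reference, not part of the journal: the recurring kubelet warning "Nameserver limits exceeded ... 1.1.1.1 1.0.0.1 8.8.8.8" means the resolver configuration handed to kubelet lists more nameservers than the classic resolv.conf limit of three, so only the first three are applied to pod DNS. A small sketch that reproduces the check against a resolv.conf-style file; the file path is an assumption.

```go
// Sketch only: count nameserver entries and flag the >3 condition kubelet warns about.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // classic glibc resolv.conf limit, which kubelet enforces for pods

func main() {
	f, err := os.Open("/etc/resolv.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d configured, applying %v\n",
			len(servers), servers[:maxNameservers])
	} else {
		fmt.Printf("nameservers within limit: %v\n", servers)
	}
}
```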
Nov 12 22:37:16.781764 sshd[4261]: Connection closed by 10.0.0.1 port 51726 Nov 12 22:37:16.782866 sshd-session[4258]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:16.792196 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:51740.service - OpenSSH per-connection server daemon (10.0.0.1:51740). Nov 12 22:37:16.792928 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:51726.service: Deactivated successfully. Nov 12 22:37:16.795336 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 22:37:16.796139 systemd-logind[1547]: Session 11 logged out. Waiting for processes to exit. Nov 12 22:37:16.797284 systemd-logind[1547]: Removed session 11. Nov 12 22:37:16.832562 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 51740 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:16.833882 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:16.837880 systemd-logind[1547]: New session 12 of user core. Nov 12 22:37:16.847220 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 22:37:16.993533 sshd[4277]: Connection closed by 10.0.0.1 port 51740 Nov 12 22:37:16.993988 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:17.004260 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:51752.service - OpenSSH per-connection server daemon (10.0.0.1:51752). Nov 12 22:37:17.005272 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:51740.service: Deactivated successfully. Nov 12 22:37:17.012110 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 22:37:17.013828 systemd-logind[1547]: Session 12 logged out. Waiting for processes to exit. Nov 12 22:37:17.015120 systemd-logind[1547]: Removed session 12. Nov 12 22:37:17.050213 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 51752 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:17.051433 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:17.055821 systemd-logind[1547]: New session 13 of user core. Nov 12 22:37:17.069322 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 22:37:17.182809 sshd[4291]: Connection closed by 10.0.0.1 port 51752 Nov 12 22:37:17.184653 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:17.188532 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:51752.service: Deactivated successfully. Nov 12 22:37:17.190454 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 22:37:17.190466 systemd-logind[1547]: Session 13 logged out. Waiting for processes to exit. Nov 12 22:37:17.192080 systemd-logind[1547]: Removed session 13. Nov 12 22:37:22.194270 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:51756.service - OpenSSH per-connection server daemon (10.0.0.1:51756). Nov 12 22:37:22.232625 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 51756 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:22.233800 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:22.237685 systemd-logind[1547]: New session 14 of user core. Nov 12 22:37:22.247271 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 12 22:37:22.355470 sshd[4309]: Connection closed by 10.0.0.1 port 51756 Nov 12 22:37:22.355342 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:22.358722 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:51756.service: Deactivated successfully. Nov 12 22:37:22.360751 systemd-logind[1547]: Session 14 logged out. Waiting for processes to exit. Nov 12 22:37:22.360835 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 22:37:22.361778 systemd-logind[1547]: Removed session 14. Nov 12 22:37:27.367243 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:46848.service - OpenSSH per-connection server daemon (10.0.0.1:46848). Nov 12 22:37:27.403784 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 46848 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:27.404854 sshd-session[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:27.408402 systemd-logind[1547]: New session 15 of user core. Nov 12 22:37:27.421242 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 22:37:27.529012 sshd[4325]: Connection closed by 10.0.0.1 port 46848 Nov 12 22:37:27.529435 sshd-session[4322]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:27.539245 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:46860.service - OpenSSH per-connection server daemon (10.0.0.1:46860). Nov 12 22:37:27.539655 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:46848.service: Deactivated successfully. Nov 12 22:37:27.541276 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 22:37:27.544288 systemd-logind[1547]: Session 15 logged out. Waiting for processes to exit. Nov 12 22:37:27.545525 systemd-logind[1547]: Removed session 15. Nov 12 22:37:27.583272 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 46860 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:27.584608 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:27.589026 systemd-logind[1547]: New session 16 of user core. Nov 12 22:37:27.600221 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 22:37:27.803295 sshd[4340]: Connection closed by 10.0.0.1 port 46860 Nov 12 22:37:27.805554 sshd-session[4335]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:27.813238 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:46874.service - OpenSSH per-connection server daemon (10.0.0.1:46874). Nov 12 22:37:27.813802 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:46860.service: Deactivated successfully. Nov 12 22:37:27.816564 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 22:37:27.817408 systemd-logind[1547]: Session 16 logged out. Waiting for processes to exit. Nov 12 22:37:27.818395 systemd-logind[1547]: Removed session 16. Nov 12 22:37:27.852574 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 46874 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:27.853833 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:27.858009 systemd-logind[1547]: New session 17 of user core. Nov 12 22:37:27.864220 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 12 22:37:29.110814 sshd[4354]: Connection closed by 10.0.0.1 port 46874 Nov 12 22:37:29.111516 sshd-session[4348]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:29.123067 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:46884.service - OpenSSH per-connection server daemon (10.0.0.1:46884). Nov 12 22:37:29.123550 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:46874.service: Deactivated successfully. Nov 12 22:37:29.131114 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 22:37:29.136807 systemd-logind[1547]: Session 17 logged out. Waiting for processes to exit. Nov 12 22:37:29.138650 systemd-logind[1547]: Removed session 17. Nov 12 22:37:29.174334 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 46884 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:29.175694 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:29.181093 systemd-logind[1547]: New session 18 of user core. Nov 12 22:37:29.192244 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 22:37:29.411367 sshd[4380]: Connection closed by 10.0.0.1 port 46884 Nov 12 22:37:29.412165 sshd-session[4372]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:29.421338 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:46892.service - OpenSSH per-connection server daemon (10.0.0.1:46892). Nov 12 22:37:29.422885 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:46884.service: Deactivated successfully. Nov 12 22:37:29.425839 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 22:37:29.426650 systemd-logind[1547]: Session 18 logged out. Waiting for processes to exit. Nov 12 22:37:29.427450 systemd-logind[1547]: Removed session 18. Nov 12 22:37:29.458651 sshd[4388]: Accepted publickey for core from 10.0.0.1 port 46892 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:29.459959 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:29.464131 systemd-logind[1547]: New session 19 of user core. Nov 12 22:37:29.475341 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 22:37:29.619151 sshd[4394]: Connection closed by 10.0.0.1 port 46892 Nov 12 22:37:29.619713 sshd-session[4388]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:29.623096 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:46892.service: Deactivated successfully. Nov 12 22:37:29.625152 systemd-logind[1547]: Session 19 logged out. Waiting for processes to exit. Nov 12 22:37:29.625236 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 22:37:29.627228 systemd-logind[1547]: Removed session 19. Nov 12 22:37:34.635193 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:48178.service - OpenSSH per-connection server daemon (10.0.0.1:48178). Nov 12 22:37:34.671587 sshd[4412]: Accepted publickey for core from 10.0.0.1 port 48178 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:34.672670 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:34.676324 systemd-logind[1547]: New session 20 of user core. Nov 12 22:37:34.684204 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 22:37:34.790270 sshd[4415]: Connection closed by 10.0.0.1 port 48178 Nov 12 22:37:34.790358 sshd-session[4412]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:34.793604 systemd-logind[1547]: Session 20 logged out. 
Waiting for processes to exit. Nov 12 22:37:34.794145 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:48178.service: Deactivated successfully. Nov 12 22:37:34.796298 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 22:37:34.797339 systemd-logind[1547]: Removed session 20. Nov 12 22:37:39.805264 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:48194.service - OpenSSH per-connection server daemon (10.0.0.1:48194). Nov 12 22:37:39.842423 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 48194 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:39.843635 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:39.847139 systemd-logind[1547]: New session 21 of user core. Nov 12 22:37:39.856182 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 22:37:39.962598 sshd[4431]: Connection closed by 10.0.0.1 port 48194 Nov 12 22:37:39.962899 sshd-session[4428]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:39.965246 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:48194.service: Deactivated successfully. Nov 12 22:37:39.967806 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 22:37:39.968298 systemd-logind[1547]: Session 21 logged out. Waiting for processes to exit. Nov 12 22:37:39.969361 systemd-logind[1547]: Removed session 21. Nov 12 22:37:44.975234 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:56672.service - OpenSSH per-connection server daemon (10.0.0.1:56672). Nov 12 22:37:45.011645 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 56672 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:45.012801 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:45.016296 systemd-logind[1547]: New session 22 of user core. Nov 12 22:37:45.028214 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 22:37:45.135870 sshd[4446]: Connection closed by 10.0.0.1 port 56672 Nov 12 22:37:45.136830 sshd-session[4443]: pam_unix(sshd:session): session closed for user core Nov 12 22:37:45.147242 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:56686.service - OpenSSH per-connection server daemon (10.0.0.1:56686). Nov 12 22:37:45.147621 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:56672.service: Deactivated successfully. Nov 12 22:37:45.149566 systemd-logind[1547]: Session 22 logged out. Waiting for processes to exit. Nov 12 22:37:45.150115 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 22:37:45.151523 systemd-logind[1547]: Removed session 22. Nov 12 22:37:45.186298 sshd[4456]: Accepted publickey for core from 10.0.0.1 port 56686 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc Nov 12 22:37:45.187566 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:37:45.191324 systemd-logind[1547]: New session 23 of user core. Nov 12 22:37:45.204184 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 12 22:37:46.945999 kubelet[2797]: I1112 22:37:46.945939 2797 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xm76h" podStartSLOduration=56.945896852 podStartE2EDuration="56.945896852s" podCreationTimestamp="2024-11-12 22:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:37:10.51525972 +0000 UTC m=+36.230561264" watchObservedRunningTime="2024-11-12 22:37:46.945896852 +0000 UTC m=+72.661198356" Nov 12 22:37:46.956514 containerd[1573]: time="2024-11-12T22:37:46.956374160Z" level=info msg="StopContainer for \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\" with timeout 30 (s)" Nov 12 22:37:46.956941 containerd[1573]: time="2024-11-12T22:37:46.956721919Z" level=info msg="Stop container \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\" with signal terminated" Nov 12 22:37:46.973443 systemd[1]: run-containerd-runc-k8s.io-218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff-runc.vqo8BB.mount: Deactivated successfully. Nov 12 22:37:46.986584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2-rootfs.mount: Deactivated successfully. Nov 12 22:37:46.992052 containerd[1573]: time="2024-11-12T22:37:46.992015397Z" level=info msg="StopContainer for \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\" with timeout 2 (s)" Nov 12 22:37:46.992259 containerd[1573]: time="2024-11-12T22:37:46.992238516Z" level=info msg="Stop container \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\" with signal terminated" Nov 12 22:37:46.993670 containerd[1573]: time="2024-11-12T22:37:46.993590835Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 22:37:46.997727 systemd-networkd[1230]: lxc_health: Link DOWN Nov 12 22:37:46.997730 systemd-networkd[1230]: lxc_health: Lost carrier Nov 12 22:37:47.003313 containerd[1573]: time="2024-11-12T22:37:47.003266184Z" level=info msg="shim disconnected" id=1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2 namespace=k8s.io Nov 12 22:37:47.003313 containerd[1573]: time="2024-11-12T22:37:47.003313264Z" level=warning msg="cleaning up after shim disconnected" id=1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2 namespace=k8s.io Nov 12 22:37:47.003411 containerd[1573]: time="2024-11-12T22:37:47.003323504Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:37:47.046078 containerd[1573]: time="2024-11-12T22:37:47.045494906Z" level=info msg="StopContainer for \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\" returns successfully" Nov 12 22:37:47.046998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff-rootfs.mount: Deactivated successfully. 
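For reference, not part of the journal: the "StopContainer ... with timeout" and signal-terminated records above are what the CRI emits while kubelet tears pods down, for example after a pod delete. A client-go sketch of issuing such a delete with an explicit grace period; the kubeconfig path and the grace value are assumptions, while the pod name is taken from the startup-latency record earlier in this journal.

```go
// Sketch only: delete a pod with a grace period, triggering StopContainer calls like those above.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	grace := int64(30) // seconds; chosen to mirror the 30 s StopContainer timeout above
	err = cs.CoreV1().Pods("kube-system").Delete(context.Background(),
		"cilium-operator-5cc964979-rlhz5", // pod name as it appears in the journal
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("delete issued; kubelet will stop containers and unmount the pod's volumes")
}
```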
Nov 12 22:37:47.051264 containerd[1573]: time="2024-11-12T22:37:47.051216701Z" level=info msg="StopPodSandbox for \"6cca5624bc6b2becfe42aadaed573afe8855cee231be8112644ae84ac64da9a2\"" Nov 12 22:37:47.051392 containerd[1573]: time="2024-11-12T22:37:47.051267301Z" level=info msg="Container to stop \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:37:47.052886 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6cca5624bc6b2becfe42aadaed573afe8855cee231be8112644ae84ac64da9a2-shm.mount: Deactivated successfully. Nov 12 22:37:47.054006 containerd[1573]: time="2024-11-12T22:37:47.053940139Z" level=info msg="shim disconnected" id=218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff namespace=k8s.io Nov 12 22:37:47.054120 containerd[1573]: time="2024-11-12T22:37:47.054006139Z" level=warning msg="cleaning up after shim disconnected" id=218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff namespace=k8s.io Nov 12 22:37:47.054120 containerd[1573]: time="2024-11-12T22:37:47.054015939Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:37:47.069855 containerd[1573]: time="2024-11-12T22:37:47.069792925Z" level=info msg="StopContainer for \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\" returns successfully" Nov 12 22:37:47.070588 containerd[1573]: time="2024-11-12T22:37:47.070564604Z" level=info msg="StopPodSandbox for \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\"" Nov 12 22:37:47.070635 containerd[1573]: time="2024-11-12T22:37:47.070604324Z" level=info msg="Container to stop \"2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:37:47.070635 containerd[1573]: time="2024-11-12T22:37:47.070616364Z" level=info msg="Container to stop \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:37:47.070635 containerd[1573]: time="2024-11-12T22:37:47.070625124Z" level=info msg="Container to stop \"0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:37:47.070635 containerd[1573]: time="2024-11-12T22:37:47.070633644Z" level=info msg="Container to stop \"9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:37:47.070742 containerd[1573]: time="2024-11-12T22:37:47.070642164Z" level=info msg="Container to stop \"eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:37:47.092795 containerd[1573]: time="2024-11-12T22:37:47.092736065Z" level=info msg="shim disconnected" id=6cca5624bc6b2becfe42aadaed573afe8855cee231be8112644ae84ac64da9a2 namespace=k8s.io Nov 12 22:37:47.092958 containerd[1573]: time="2024-11-12T22:37:47.092794105Z" level=warning msg="cleaning up after shim disconnected" id=6cca5624bc6b2becfe42aadaed573afe8855cee231be8112644ae84ac64da9a2 namespace=k8s.io Nov 12 22:37:47.092958 containerd[1573]: time="2024-11-12T22:37:47.092816145Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:37:47.093632 containerd[1573]: time="2024-11-12T22:37:47.093581464Z" level=info msg="shim disconnected" 
id=5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666 namespace=k8s.io Nov 12 22:37:47.093689 containerd[1573]: time="2024-11-12T22:37:47.093632944Z" level=warning msg="cleaning up after shim disconnected" id=5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666 namespace=k8s.io Nov 12 22:37:47.093689 containerd[1573]: time="2024-11-12T22:37:47.093640784Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:37:47.106713 containerd[1573]: time="2024-11-12T22:37:47.106665212Z" level=info msg="TearDown network for sandbox \"6cca5624bc6b2becfe42aadaed573afe8855cee231be8112644ae84ac64da9a2\" successfully" Nov 12 22:37:47.106713 containerd[1573]: time="2024-11-12T22:37:47.106699132Z" level=info msg="StopPodSandbox for \"6cca5624bc6b2becfe42aadaed573afe8855cee231be8112644ae84ac64da9a2\" returns successfully" Nov 12 22:37:47.108009 containerd[1573]: time="2024-11-12T22:37:47.107020252Z" level=warning msg="cleanup warnings time=\"2024-11-12T22:37:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 12 22:37:47.108009 containerd[1573]: time="2024-11-12T22:37:47.107926011Z" level=info msg="TearDown network for sandbox \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\" successfully" Nov 12 22:37:47.108009 containerd[1573]: time="2024-11-12T22:37:47.107944891Z" level=info msg="StopPodSandbox for \"5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666\" returns successfully" Nov 12 22:37:47.186710 kubelet[2797]: I1112 22:37:47.186656 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-host-proc-sys-net\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.186710 kubelet[2797]: I1112 22:37:47.186696 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cilium-run\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.186710 kubelet[2797]: I1112 22:37:47.186717 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cilium-cgroup\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.186919 kubelet[2797]: I1112 22:37:47.186735 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cni-path\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.186919 kubelet[2797]: I1112 22:37:47.186753 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-hostproc\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.186919 kubelet[2797]: I1112 22:37:47.186769 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-xtables-lock\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.186919 kubelet[2797]: I1112 22:37:47.186790 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15fe2263-edd4-4a0a-af2b-9ddcbc189193-hubble-tls\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.186919 kubelet[2797]: I1112 22:37:47.186815 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-lib-modules\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.186919 kubelet[2797]: I1112 22:37:47.186838 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-host-proc-sys-kernel\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.187184 kubelet[2797]: I1112 22:37:47.186860 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8b1ebc0-06d2-4557-9b8a-2e8858a2e220-cilium-config-path\") pod \"b8b1ebc0-06d2-4557-9b8a-2e8858a2e220\" (UID: \"b8b1ebc0-06d2-4557-9b8a-2e8858a2e220\") " Nov 12 22:37:47.187184 kubelet[2797]: I1112 22:37:47.186880 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15fe2263-edd4-4a0a-af2b-9ddcbc189193-clustermesh-secrets\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.187184 kubelet[2797]: I1112 22:37:47.186900 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l56d\" (UniqueName: \"kubernetes.io/projected/b8b1ebc0-06d2-4557-9b8a-2e8858a2e220-kube-api-access-5l56d\") pod \"b8b1ebc0-06d2-4557-9b8a-2e8858a2e220\" (UID: \"b8b1ebc0-06d2-4557-9b8a-2e8858a2e220\") " Nov 12 22:37:47.187184 kubelet[2797]: I1112 22:37:47.186942 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cilium-config-path\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.187184 kubelet[2797]: I1112 22:37:47.186958 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-bpf-maps\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.187184 kubelet[2797]: I1112 22:37:47.186989 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vj59\" (UniqueName: \"kubernetes.io/projected/15fe2263-edd4-4a0a-af2b-9ddcbc189193-kube-api-access-5vj59\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.187918 kubelet[2797]: I1112 22:37:47.187009 2797 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-etc-cni-netd\") pod \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\" (UID: \"15fe2263-edd4-4a0a-af2b-9ddcbc189193\") " Nov 12 22:37:47.191829 kubelet[2797]: I1112 22:37:47.191445 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:37:47.191829 kubelet[2797]: I1112 22:37:47.191447 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:37:47.191829 kubelet[2797]: I1112 22:37:47.191454 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:37:47.191829 kubelet[2797]: I1112 22:37:47.191524 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:37:47.191829 kubelet[2797]: I1112 22:37:47.191542 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:37:47.191997 kubelet[2797]: I1112 22:37:47.191557 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cni-path" (OuterVolumeSpecName: "cni-path") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:37:47.191997 kubelet[2797]: I1112 22:37:47.191572 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-hostproc" (OuterVolumeSpecName: "hostproc") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:37:47.191997 kubelet[2797]: I1112 22:37:47.191586 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:37:47.193955 kubelet[2797]: I1112 22:37:47.193598 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:37:47.193955 kubelet[2797]: I1112 22:37:47.193650 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:37:47.193955 kubelet[2797]: I1112 22:37:47.193658 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b1ebc0-06d2-4557-9b8a-2e8858a2e220-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b8b1ebc0-06d2-4557-9b8a-2e8858a2e220" (UID: "b8b1ebc0-06d2-4557-9b8a-2e8858a2e220"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:37:47.193955 kubelet[2797]: I1112 22:37:47.193697 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:37:47.194729 kubelet[2797]: I1112 22:37:47.194659 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15fe2263-edd4-4a0a-af2b-9ddcbc189193-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:37:47.194787 kubelet[2797]: I1112 22:37:47.194729 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b1ebc0-06d2-4557-9b8a-2e8858a2e220-kube-api-access-5l56d" (OuterVolumeSpecName: "kube-api-access-5l56d") pod "b8b1ebc0-06d2-4557-9b8a-2e8858a2e220" (UID: "b8b1ebc0-06d2-4557-9b8a-2e8858a2e220"). InnerVolumeSpecName "kube-api-access-5l56d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:37:47.195524 kubelet[2797]: I1112 22:37:47.195498 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15fe2263-edd4-4a0a-af2b-9ddcbc189193-kube-api-access-5vj59" (OuterVolumeSpecName: "kube-api-access-5vj59") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). InnerVolumeSpecName "kube-api-access-5vj59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:37:47.197437 kubelet[2797]: I1112 22:37:47.197214 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fe2263-edd4-4a0a-af2b-9ddcbc189193-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "15fe2263-edd4-4a0a-af2b-9ddcbc189193" (UID: "15fe2263-edd4-4a0a-af2b-9ddcbc189193"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 22:37:47.287632 kubelet[2797]: I1112 22:37:47.287589 2797 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287632 kubelet[2797]: I1112 22:37:47.287640 2797 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287771 kubelet[2797]: I1112 22:37:47.287654 2797 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287771 kubelet[2797]: I1112 22:37:47.287664 2797 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15fe2263-edd4-4a0a-af2b-9ddcbc189193-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287771 kubelet[2797]: I1112 22:37:47.287674 2797 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287771 kubelet[2797]: I1112 22:37:47.287684 2797 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287771 kubelet[2797]: I1112 22:37:47.287694 2797 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8b1ebc0-06d2-4557-9b8a-2e8858a2e220-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287771 kubelet[2797]: I1112 22:37:47.287703 2797 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15fe2263-edd4-4a0a-af2b-9ddcbc189193-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287771 kubelet[2797]: I1112 22:37:47.287713 2797 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5l56d\" (UniqueName: \"kubernetes.io/projected/b8b1ebc0-06d2-4557-9b8a-2e8858a2e220-kube-api-access-5l56d\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287771 kubelet[2797]: I1112 22:37:47.287721 2797 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287943 kubelet[2797]: I1112 22:37:47.287730 2797 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5vj59\" (UniqueName: \"kubernetes.io/projected/15fe2263-edd4-4a0a-af2b-9ddcbc189193-kube-api-access-5vj59\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287943 kubelet[2797]: I1112 
22:37:47.287740 2797 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287943 kubelet[2797]: I1112 22:37:47.287749 2797 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287943 kubelet[2797]: I1112 22:37:47.287758 2797 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287943 kubelet[2797]: I1112 22:37:47.287767 2797 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.287943 kubelet[2797]: I1112 22:37:47.287776 2797 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15fe2263-edd4-4a0a-af2b-9ddcbc189193-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 12 22:37:47.547627 kubelet[2797]: I1112 22:37:47.547592 2797 scope.go:117] "RemoveContainer" containerID="1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2" Nov 12 22:37:47.551245 containerd[1573]: time="2024-11-12T22:37:47.550989899Z" level=info msg="RemoveContainer for \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\"" Nov 12 22:37:47.555484 containerd[1573]: time="2024-11-12T22:37:47.555238656Z" level=info msg="RemoveContainer for \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\" returns successfully" Nov 12 22:37:47.555543 kubelet[2797]: I1112 22:37:47.555414 2797 scope.go:117] "RemoveContainer" containerID="1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2" Nov 12 22:37:47.577225 containerd[1573]: time="2024-11-12T22:37:47.577178436Z" level=error msg="ContainerStatus for \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\": not found" Nov 12 22:37:47.584516 kubelet[2797]: E1112 22:37:47.584492 2797 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\": not found" containerID="1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2" Nov 12 22:37:47.587619 kubelet[2797]: I1112 22:37:47.587580 2797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2"} err="failed to get container status \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f5472ec1507afc91f08473728f3969c609342c6520f7a7f4f4ad316be1386b2\": not found" Nov 12 22:37:47.587619 kubelet[2797]: I1112 22:37:47.587615 2797 scope.go:117] "RemoveContainer" containerID="218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff" Nov 12 22:37:47.588644 containerd[1573]: time="2024-11-12T22:37:47.588607186Z" level=info 
msg="RemoveContainer for \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\"" Nov 12 22:37:47.599765 containerd[1573]: time="2024-11-12T22:37:47.599727096Z" level=info msg="RemoveContainer for \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\" returns successfully" Nov 12 22:37:47.599956 kubelet[2797]: I1112 22:37:47.599924 2797 scope.go:117] "RemoveContainer" containerID="0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343" Nov 12 22:37:47.600911 containerd[1573]: time="2024-11-12T22:37:47.600886255Z" level=info msg="RemoveContainer for \"0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343\"" Nov 12 22:37:47.603263 containerd[1573]: time="2024-11-12T22:37:47.603226253Z" level=info msg="RemoveContainer for \"0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343\" returns successfully" Nov 12 22:37:47.603411 kubelet[2797]: I1112 22:37:47.603389 2797 scope.go:117] "RemoveContainer" containerID="2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854" Nov 12 22:37:47.604397 containerd[1573]: time="2024-11-12T22:37:47.604183212Z" level=info msg="RemoveContainer for \"2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854\"" Nov 12 22:37:47.606428 containerd[1573]: time="2024-11-12T22:37:47.606386850Z" level=info msg="RemoveContainer for \"2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854\" returns successfully" Nov 12 22:37:47.606721 kubelet[2797]: I1112 22:37:47.606674 2797 scope.go:117] "RemoveContainer" containerID="eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036" Nov 12 22:37:47.607777 containerd[1573]: time="2024-11-12T22:37:47.607569489Z" level=info msg="RemoveContainer for \"eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036\"" Nov 12 22:37:47.613469 containerd[1573]: time="2024-11-12T22:37:47.613440524Z" level=info msg="RemoveContainer for \"eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036\" returns successfully" Nov 12 22:37:47.613712 kubelet[2797]: I1112 22:37:47.613688 2797 scope.go:117] "RemoveContainer" containerID="9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987" Nov 12 22:37:47.614557 containerd[1573]: time="2024-11-12T22:37:47.614535803Z" level=info msg="RemoveContainer for \"9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987\"" Nov 12 22:37:47.616722 containerd[1573]: time="2024-11-12T22:37:47.616689641Z" level=info msg="RemoveContainer for \"9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987\" returns successfully" Nov 12 22:37:47.616877 kubelet[2797]: I1112 22:37:47.616847 2797 scope.go:117] "RemoveContainer" containerID="218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff" Nov 12 22:37:47.617055 containerd[1573]: time="2024-11-12T22:37:47.617025401Z" level=error msg="ContainerStatus for \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\": not found" Nov 12 22:37:47.617306 kubelet[2797]: E1112 22:37:47.617196 2797 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\": not found" containerID="218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff" Nov 12 22:37:47.617306 kubelet[2797]: I1112 22:37:47.617231 2797 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff"} err="failed to get container status \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\": rpc error: code = NotFound desc = an error occurred when try to find container \"218b42640dde341c76c2416ad837ccdd07fe553efc117b3c31ab540d4e324aff\": not found" Nov 12 22:37:47.617306 kubelet[2797]: I1112 22:37:47.617241 2797 scope.go:117] "RemoveContainer" containerID="0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343" Nov 12 22:37:47.617415 containerd[1573]: time="2024-11-12T22:37:47.617394561Z" level=error msg="ContainerStatus for \"0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343\": not found" Nov 12 22:37:47.617506 kubelet[2797]: E1112 22:37:47.617493 2797 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343\": not found" containerID="0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343" Nov 12 22:37:47.617538 kubelet[2797]: I1112 22:37:47.617522 2797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343"} err="failed to get container status \"0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f4425ae75a1e88f1cb991ff5372506c89c0313b3685ded61878fa717e5f7343\": not found" Nov 12 22:37:47.617538 kubelet[2797]: I1112 22:37:47.617532 2797 scope.go:117] "RemoveContainer" containerID="2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854" Nov 12 22:37:47.617743 containerd[1573]: time="2024-11-12T22:37:47.617684240Z" level=error msg="ContainerStatus for \"2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854\": not found" Nov 12 22:37:47.617878 kubelet[2797]: E1112 22:37:47.617848 2797 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854\": not found" containerID="2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854" Nov 12 22:37:47.617911 kubelet[2797]: I1112 22:37:47.617893 2797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854"} err="failed to get container status \"2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854\": rpc error: code = NotFound desc = an error occurred when try to find container \"2146865bae188232fbbe18a53cd6cd398c07c85488d772d580ed68d3020fc854\": not found" Nov 12 22:37:47.617911 kubelet[2797]: I1112 22:37:47.617907 2797 scope.go:117] "RemoveContainer" containerID="eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036" Nov 12 22:37:47.618095 containerd[1573]: time="2024-11-12T22:37:47.618069840Z" level=error msg="ContainerStatus for 
\"eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036\": not found" Nov 12 22:37:47.618212 kubelet[2797]: E1112 22:37:47.618197 2797 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036\": not found" containerID="eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036" Nov 12 22:37:47.618246 kubelet[2797]: I1112 22:37:47.618226 2797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036"} err="failed to get container status \"eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb1c04f077bb87ede551382dca5dbdbb0e9ad98dbd43315cc6bab5f3f2b91036\": not found" Nov 12 22:37:47.618246 kubelet[2797]: I1112 22:37:47.618238 2797 scope.go:117] "RemoveContainer" containerID="9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987" Nov 12 22:37:47.618459 containerd[1573]: time="2024-11-12T22:37:47.618431400Z" level=error msg="ContainerStatus for \"9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987\": not found" Nov 12 22:37:47.618568 kubelet[2797]: E1112 22:37:47.618555 2797 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987\": not found" containerID="9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987" Nov 12 22:37:47.618604 kubelet[2797]: I1112 22:37:47.618580 2797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987"} err="failed to get container status \"9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987\": rpc error: code = NotFound desc = an error occurred when try to find container \"9926cad87f36e6b74c95df4cb5f36f3826b58a9aa05def72eab3e5c86fbcc987\": not found" Nov 12 22:37:47.969354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cca5624bc6b2becfe42aadaed573afe8855cee231be8112644ae84ac64da9a2-rootfs.mount: Deactivated successfully. Nov 12 22:37:47.969508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666-rootfs.mount: Deactivated successfully. Nov 12 22:37:47.969597 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a8c1d510b211ee379f9bd9ab88be2f5741163d4d9ac7900adda6eb8f35fa666-shm.mount: Deactivated successfully. Nov 12 22:37:47.969684 systemd[1]: var-lib-kubelet-pods-b8b1ebc0\x2d06d2\x2d4557\x2d9b8a\x2d2e8858a2e220-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5l56d.mount: Deactivated successfully. Nov 12 22:37:47.969764 systemd[1]: var-lib-kubelet-pods-15fe2263\x2dedd4\x2d4a0a\x2daf2b\x2d9ddcbc189193-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5vj59.mount: Deactivated successfully. 
Nov 12 22:37:47.969867 systemd[1]: var-lib-kubelet-pods-15fe2263\x2dedd4\x2d4a0a\x2daf2b\x2d9ddcbc189193-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Nov 12 22:37:47.970174 systemd[1]: var-lib-kubelet-pods-15fe2263\x2dedd4\x2d4a0a\x2daf2b\x2d9ddcbc189193-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Nov 12 22:37:48.377461 kubelet[2797]: I1112 22:37:48.377424 2797 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="15fe2263-edd4-4a0a-af2b-9ddcbc189193" path="/var/lib/kubelet/pods/15fe2263-edd4-4a0a-af2b-9ddcbc189193/volumes"
Nov 12 22:37:48.378008 kubelet[2797]: I1112 22:37:48.377991 2797 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b8b1ebc0-06d2-4557-9b8a-2e8858a2e220" path="/var/lib/kubelet/pods/b8b1ebc0-06d2-4557-9b8a-2e8858a2e220/volumes"
Nov 12 22:37:48.916000 sshd[4462]: Connection closed by 10.0.0.1 port 56686
Nov 12 22:37:48.915917 sshd-session[4456]: pam_unix(sshd:session): session closed for user core
Nov 12 22:37:48.939275 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:56702.service - OpenSSH per-connection server daemon (10.0.0.1:56702).
Nov 12 22:37:48.939662 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:56686.service: Deactivated successfully.
Nov 12 22:37:48.942903 systemd-logind[1547]: Session 23 logged out. Waiting for processes to exit.
Nov 12 22:37:48.943198 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 22:37:48.945349 systemd-logind[1547]: Removed session 23.
Nov 12 22:37:48.977060 sshd[4623]: Accepted publickey for core from 10.0.0.1 port 56702 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc
Nov 12 22:37:48.978225 sshd-session[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:37:48.982688 systemd-logind[1547]: New session 24 of user core.
Nov 12 22:37:48.992257 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 22:37:49.423613 kubelet[2797]: E1112 22:37:49.423585 2797 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 12 22:37:49.571115 sshd[4629]: Connection closed by 10.0.0.1 port 56702
Nov 12 22:37:49.572313 sshd-session[4623]: pam_unix(sshd:session): session closed for user core
Nov 12 22:37:49.583325 systemd[1]: Started sshd@24-10.0.0.92:22-10.0.0.1:56714.service - OpenSSH per-connection server daemon (10.0.0.1:56714).
Nov 12 22:37:49.583807 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:56702.service: Deactivated successfully.
Nov 12 22:37:49.587944 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 22:37:49.590889 systemd-logind[1547]: Session 24 logged out. Waiting for processes to exit.
Nov 12 22:37:49.591846 kubelet[2797]: I1112 22:37:49.591823 2797 topology_manager.go:215] "Topology Admit Handler" podUID="2fa0e70e-0809-43ae-ac3e-dad7585c647b" podNamespace="kube-system" podName="cilium-6f8cp"
Nov 12 22:37:49.591993 kubelet[2797]: E1112 22:37:49.591952 2797 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15fe2263-edd4-4a0a-af2b-9ddcbc189193" containerName="mount-bpf-fs"
Nov 12 22:37:49.592171 kubelet[2797]: E1112 22:37:49.592045 2797 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8b1ebc0-06d2-4557-9b8a-2e8858a2e220" containerName="cilium-operator"
Nov 12 22:37:49.592171 kubelet[2797]: E1112 22:37:49.592059 2797 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15fe2263-edd4-4a0a-af2b-9ddcbc189193" containerName="mount-cgroup"
Nov 12 22:37:49.592171 kubelet[2797]: E1112 22:37:49.592066 2797 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15fe2263-edd4-4a0a-af2b-9ddcbc189193" containerName="apply-sysctl-overwrites"
Nov 12 22:37:49.592171 kubelet[2797]: E1112 22:37:49.592073 2797 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15fe2263-edd4-4a0a-af2b-9ddcbc189193" containerName="clean-cilium-state"
Nov 12 22:37:49.592171 kubelet[2797]: E1112 22:37:49.592079 2797 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15fe2263-edd4-4a0a-af2b-9ddcbc189193" containerName="cilium-agent"
Nov 12 22:37:49.598046 systemd-logind[1547]: Removed session 24.
Nov 12 22:37:49.607819 kubelet[2797]: I1112 22:37:49.607455 2797 memory_manager.go:354] "RemoveStaleState removing state" podUID="15fe2263-edd4-4a0a-af2b-9ddcbc189193" containerName="cilium-agent"
Nov 12 22:37:49.607819 kubelet[2797]: I1112 22:37:49.607522 2797 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b1ebc0-06d2-4557-9b8a-2e8858a2e220" containerName="cilium-operator"
Nov 12 22:37:49.638214 sshd[4638]: Accepted publickey for core from 10.0.0.1 port 56714 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc
Nov 12 22:37:49.640029 sshd-session[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:37:49.644561 systemd-logind[1547]: New session 25 of user core.
Nov 12 22:37:49.652258 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 22:37:49.701269 kubelet[2797]: I1112 22:37:49.701160 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fa0e70e-0809-43ae-ac3e-dad7585c647b-host-proc-sys-kernel\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701269 kubelet[2797]: I1112 22:37:49.701214 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fa0e70e-0809-43ae-ac3e-dad7585c647b-etc-cni-netd\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701382 kubelet[2797]: I1112 22:37:49.701278 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fa0e70e-0809-43ae-ac3e-dad7585c647b-cilium-config-path\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701382 kubelet[2797]: I1112 22:37:49.701348 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2fa0e70e-0809-43ae-ac3e-dad7585c647b-cilium-ipsec-secrets\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701382 kubelet[2797]: I1112 22:37:49.701370 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fa0e70e-0809-43ae-ac3e-dad7585c647b-hubble-tls\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701463 kubelet[2797]: I1112 22:37:49.701392 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fa0e70e-0809-43ae-ac3e-dad7585c647b-bpf-maps\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701463 kubelet[2797]: I1112 22:37:49.701411 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fa0e70e-0809-43ae-ac3e-dad7585c647b-cni-path\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701463 kubelet[2797]: I1112 22:37:49.701449 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fa0e70e-0809-43ae-ac3e-dad7585c647b-hostproc\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701548 kubelet[2797]: I1112 22:37:49.701518 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fa0e70e-0809-43ae-ac3e-dad7585c647b-lib-modules\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701692 kubelet[2797]: I1112 22:37:49.701666 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fa0e70e-0809-43ae-ac3e-dad7585c647b-cilium-cgroup\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701726 kubelet[2797]: I1112 22:37:49.701708 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fa0e70e-0809-43ae-ac3e-dad7585c647b-xtables-lock\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701750 kubelet[2797]: I1112 22:37:49.701733 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phgxq\" (UniqueName: \"kubernetes.io/projected/2fa0e70e-0809-43ae-ac3e-dad7585c647b-kube-api-access-phgxq\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701777 kubelet[2797]: I1112 22:37:49.701755 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fa0e70e-0809-43ae-ac3e-dad7585c647b-host-proc-sys-net\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701806 kubelet[2797]: I1112 22:37:49.701785 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fa0e70e-0809-43ae-ac3e-dad7585c647b-cilium-run\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.701833 kubelet[2797]: I1112 22:37:49.701816 2797 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fa0e70e-0809-43ae-ac3e-dad7585c647b-clustermesh-secrets\") pod \"cilium-6f8cp\" (UID: \"2fa0e70e-0809-43ae-ac3e-dad7585c647b\") " pod="kube-system/cilium-6f8cp"
Nov 12 22:37:49.703270 sshd[4644]: Connection closed by 10.0.0.1 port 56714
Nov 12 22:37:49.703597 sshd-session[4638]: pam_unix(sshd:session): session closed for user core
Nov 12 22:37:49.712366 systemd[1]: Started sshd@25-10.0.0.92:22-10.0.0.1:56720.service - OpenSSH per-connection server daemon (10.0.0.1:56720).
Nov 12 22:37:49.712984 systemd[1]: sshd@24-10.0.0.92:22-10.0.0.1:56714.service: Deactivated successfully.
Nov 12 22:37:49.714585 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 22:37:49.716196 systemd-logind[1547]: Session 25 logged out. Waiting for processes to exit.
Nov 12 22:37:49.717034 systemd-logind[1547]: Removed session 25.
Nov 12 22:37:49.748623 sshd[4648]: Accepted publickey for core from 10.0.0.1 port 56720 ssh2: RSA SHA256:KPgpt/5uXhXBYNY6jU95wYzOWgpCWHnSiDnDh5jQRRc
Nov 12 22:37:49.749862 sshd-session[4648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:37:49.753432 systemd-logind[1547]: New session 26 of user core.
Nov 12 22:37:49.768254 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 22:37:49.913252 kubelet[2797]: E1112 22:37:49.913213 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:37:49.913764 containerd[1573]: time="2024-11-12T22:37:49.913729673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6f8cp,Uid:2fa0e70e-0809-43ae-ac3e-dad7585c647b,Namespace:kube-system,Attempt:0,}"
Nov 12 22:37:49.937134 containerd[1573]: time="2024-11-12T22:37:49.937002746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 22:37:49.937134 containerd[1573]: time="2024-11-12T22:37:49.937094066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 22:37:49.937857 containerd[1573]: time="2024-11-12T22:37:49.937116946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:37:49.937924 containerd[1573]: time="2024-11-12T22:37:49.937856186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:37:49.974230 containerd[1573]: time="2024-11-12T22:37:49.974119096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6f8cp,Uid:2fa0e70e-0809-43ae-ac3e-dad7585c647b,Namespace:kube-system,Attempt:0,} returns sandbox id \"daa91626e0fe3c39ad66895b481442d50806a92b5bdf8b232789a3cde71e6db7\""
Nov 12 22:37:49.975463 kubelet[2797]: E1112 22:37:49.975440 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:37:49.979212 containerd[1573]: time="2024-11-12T22:37:49.979159295Z" level=info msg="CreateContainer within sandbox \"daa91626e0fe3c39ad66895b481442d50806a92b5bdf8b232789a3cde71e6db7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 12 22:37:49.989021 containerd[1573]: time="2024-11-12T22:37:49.988985132Z" level=info msg="CreateContainer within sandbox \"daa91626e0fe3c39ad66895b481442d50806a92b5bdf8b232789a3cde71e6db7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2f1722081f7c332078e46cebf452493f78a0fbf0c81211ea1b8c49c0eceb185a\""
Nov 12 22:37:49.989472 containerd[1573]: time="2024-11-12T22:37:49.989432572Z" level=info msg="StartContainer for \"2f1722081f7c332078e46cebf452493f78a0fbf0c81211ea1b8c49c0eceb185a\""
Nov 12 22:37:50.036399 containerd[1573]: time="2024-11-12T22:37:50.036356969Z" level=info msg="StartContainer for \"2f1722081f7c332078e46cebf452493f78a0fbf0c81211ea1b8c49c0eceb185a\" returns successfully"
Nov 12 22:37:50.087312 containerd[1573]: time="2024-11-12T22:37:50.087245850Z" level=info msg="shim disconnected" id=2f1722081f7c332078e46cebf452493f78a0fbf0c81211ea1b8c49c0eceb185a namespace=k8s.io
Nov 12 22:37:50.087312 containerd[1573]: time="2024-11-12T22:37:50.087301490Z" level=warning msg="cleaning up after shim disconnected" id=2f1722081f7c332078e46cebf452493f78a0fbf0c81211ea1b8c49c0eceb185a namespace=k8s.io
Nov 12 22:37:50.087312 containerd[1573]: time="2024-11-12T22:37:50.087309690Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 22:37:50.563842 kubelet[2797]: E1112 22:37:50.563440 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:37:50.566137 containerd[1573]: time="2024-11-12T22:37:50.566085136Z" level=info msg="CreateContainer within sandbox \"daa91626e0fe3c39ad66895b481442d50806a92b5bdf8b232789a3cde71e6db7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 12 22:37:50.583570 containerd[1573]: time="2024-11-12T22:37:50.583506736Z" level=info msg="CreateContainer within sandbox \"daa91626e0fe3c39ad66895b481442d50806a92b5bdf8b232789a3cde71e6db7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"beeaaffab22211076845dd4db4b6ebe84310bf34e473cc5c92e15f1e43384973\""
Nov 12 22:37:50.584050 containerd[1573]: time="2024-11-12T22:37:50.584025897Z" level=info msg="StartContainer for \"beeaaffab22211076845dd4db4b6ebe84310bf34e473cc5c92e15f1e43384973\""
Nov 12 22:37:50.627105 containerd[1573]: time="2024-11-12T22:37:50.627059057Z" level=info msg="StartContainer for \"beeaaffab22211076845dd4db4b6ebe84310bf34e473cc5c92e15f1e43384973\" returns successfully"
Nov 12 22:37:50.652055 containerd[1573]: time="2024-11-12T22:37:50.651998897Z" level=info msg="shim disconnected" id=beeaaffab22211076845dd4db4b6ebe84310bf34e473cc5c92e15f1e43384973 namespace=k8s.io
Nov 12 22:37:50.652055 containerd[1573]: time="2024-11-12T22:37:50.652053017Z" level=warning msg="cleaning up after shim disconnected" id=beeaaffab22211076845dd4db4b6ebe84310bf34e473cc5c92e15f1e43384973 namespace=k8s.io
Nov 12 22:37:50.652055 containerd[1573]: time="2024-11-12T22:37:50.652062417Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 22:37:51.566928 kubelet[2797]: E1112 22:37:51.566553 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:37:51.571106 containerd[1573]: time="2024-11-12T22:37:51.571067869Z" level=info msg="CreateContainer within sandbox \"daa91626e0fe3c39ad66895b481442d50806a92b5bdf8b232789a3cde71e6db7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 12 22:37:51.584567 containerd[1573]: time="2024-11-12T22:37:51.584530513Z" level=info msg="CreateContainer within sandbox \"daa91626e0fe3c39ad66895b481442d50806a92b5bdf8b232789a3cde71e6db7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"857015ffbba404e2a05d10760837871e0597ff007dd4b542953ffda5226d208f\""
Nov 12 22:37:51.585046 containerd[1573]: time="2024-11-12T22:37:51.584990513Z" level=info msg="StartContainer for \"857015ffbba404e2a05d10760837871e0597ff007dd4b542953ffda5226d208f\""
Nov 12 22:37:51.633175 containerd[1573]: time="2024-11-12T22:37:51.633108327Z" level=info msg="StartContainer for \"857015ffbba404e2a05d10760837871e0597ff007dd4b542953ffda5226d208f\" returns successfully"
Nov 12 22:37:51.652312 containerd[1573]: time="2024-11-12T22:37:51.652258853Z" level=info msg="shim disconnected" id=857015ffbba404e2a05d10760837871e0597ff007dd4b542953ffda5226d208f namespace=k8s.io
Nov 12 22:37:51.652312 containerd[1573]: time="2024-11-12T22:37:51.652310413Z" level=warning msg="cleaning up after shim disconnected" id=857015ffbba404e2a05d10760837871e0597ff007dd4b542953ffda5226d208f namespace=k8s.io
Nov 12 22:37:51.652312 containerd[1573]: time="2024-11-12T22:37:51.652319413Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 22:37:51.807918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-857015ffbba404e2a05d10760837871e0597ff007dd4b542953ffda5226d208f-rootfs.mount: Deactivated successfully.
Nov 12 22:37:52.570122 kubelet[2797]: E1112 22:37:52.570049 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:37:52.575028 containerd[1573]: time="2024-11-12T22:37:52.574543479Z" level=info msg="CreateContainer within sandbox \"daa91626e0fe3c39ad66895b481442d50806a92b5bdf8b232789a3cde71e6db7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 12 22:37:52.587410 containerd[1573]: time="2024-11-12T22:37:52.587361887Z" level=info msg="CreateContainer within sandbox \"daa91626e0fe3c39ad66895b481442d50806a92b5bdf8b232789a3cde71e6db7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"86a714a7835fc0fa7fc6bbc0340b7c866d6920870748a48f12ca4e274db0db82\""
Nov 12 22:37:52.587911 containerd[1573]: time="2024-11-12T22:37:52.587884807Z" level=info msg="StartContainer for \"86a714a7835fc0fa7fc6bbc0340b7c866d6920870748a48f12ca4e274db0db82\""
Nov 12 22:37:52.588752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3858218456.mount: Deactivated successfully.
Nov 12 22:37:52.635016 containerd[1573]: time="2024-11-12T22:37:52.634026073Z" level=info msg="StartContainer for \"86a714a7835fc0fa7fc6bbc0340b7c866d6920870748a48f12ca4e274db0db82\" returns successfully"
Nov 12 22:37:52.651950 containerd[1573]: time="2024-11-12T22:37:52.651876643Z" level=info msg="shim disconnected" id=86a714a7835fc0fa7fc6bbc0340b7c866d6920870748a48f12ca4e274db0db82 namespace=k8s.io
Nov 12 22:37:52.651950 containerd[1573]: time="2024-11-12T22:37:52.651928163Z" level=warning msg="cleaning up after shim disconnected" id=86a714a7835fc0fa7fc6bbc0340b7c866d6920870748a48f12ca4e274db0db82 namespace=k8s.io
Nov 12 22:37:52.651950 containerd[1573]: time="2024-11-12T22:37:52.651937083Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 22:37:52.808017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86a714a7835fc0fa7fc6bbc0340b7c866d6920870748a48f12ca4e274db0db82-rootfs.mount: Deactivated successfully.
Nov 12 22:37:53.573905 kubelet[2797]: E1112 22:37:53.573547 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:37:53.580850 containerd[1573]: time="2024-11-12T22:37:53.580797442Z" level=info msg="CreateContainer within sandbox \"daa91626e0fe3c39ad66895b481442d50806a92b5bdf8b232789a3cde71e6db7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 12 22:37:53.598383 containerd[1573]: time="2024-11-12T22:37:53.598333657Z" level=info msg="CreateContainer within sandbox \"daa91626e0fe3c39ad66895b481442d50806a92b5bdf8b232789a3cde71e6db7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3f80007600a245d8f9210627ff709709696b396ee7c190f08d0e3ed9bf41a4fc\""
Nov 12 22:37:53.598951 containerd[1573]: time="2024-11-12T22:37:53.598916937Z" level=info msg="StartContainer for \"3f80007600a245d8f9210627ff709709696b396ee7c190f08d0e3ed9bf41a4fc\""
Nov 12 22:37:53.653403 containerd[1573]: time="2024-11-12T22:37:53.653352982Z" level=info msg="StartContainer for \"3f80007600a245d8f9210627ff709709696b396ee7c190f08d0e3ed9bf41a4fc\" returns successfully"
Nov 12 22:37:53.909992 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Nov 12 22:37:54.580983 kubelet[2797]: E1112 22:37:54.580653 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:37:55.375571 kubelet[2797]: E1112 22:37:55.375518 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:37:55.915591 kubelet[2797]: E1112 22:37:55.915408 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:37:56.670270 systemd-networkd[1230]: lxc_health: Link UP
Nov 12 22:37:56.675039 systemd-networkd[1230]: lxc_health: Gained carrier
Nov 12 22:37:57.916051 kubelet[2797]: E1112 22:37:57.916002 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:37:57.931040 kubelet[2797]: I1112 22:37:57.930995 2797 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6f8cp" podStartSLOduration=8.930930217 podStartE2EDuration="8.930930217s" podCreationTimestamp="2024-11-12 22:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:37:54.592623873 +0000 UTC m=+80.307925417" watchObservedRunningTime="2024-11-12 22:37:57.930930217 +0000 UTC m=+83.646231761"
Nov 12 22:37:58.586321 kubelet[2797]: E1112 22:37:58.586289 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:37:58.685118 systemd-networkd[1230]: lxc_health: Gained IPv6LL
Nov 12 22:37:59.588638 kubelet[2797]: E1112 22:37:59.588441 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:38:00.375599 kubelet[2797]: E1112 22:38:00.375570 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:38:02.483045 sshd[4653]: Connection closed by 10.0.0.1 port 56720
Nov 12 22:38:02.483402 sshd-session[4648]: pam_unix(sshd:session): session closed for user core
Nov 12 22:38:02.485873 systemd[1]: sshd@25-10.0.0.92:22-10.0.0.1:56720.service: Deactivated successfully.
Nov 12 22:38:02.488451 systemd-logind[1547]: Session 26 logged out. Waiting for processes to exit.
Nov 12 22:38:02.488576 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 22:38:02.490412 systemd-logind[1547]: Removed session 26.