Nov 12 18:00:13.905835 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 12 18:00:13.905857 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Nov 12 16:24:35 -00 2024 Nov 12 18:00:13.905866 kernel: KASLR enabled Nov 12 18:00:13.905872 kernel: efi: EFI v2.7 by EDK II Nov 12 18:00:13.905878 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Nov 12 18:00:13.905884 kernel: random: crng init done Nov 12 18:00:13.905890 kernel: ACPI: Early table checksum verification disabled Nov 12 18:00:13.905896 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Nov 12 18:00:13.905902 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 12 18:00:13.905910 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 18:00:13.905916 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 18:00:13.905922 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 18:00:13.905928 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 18:00:13.905934 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 18:00:13.905941 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 18:00:13.905949 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 18:00:13.905956 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 18:00:13.905962 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 18:00:13.905977 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 12 18:00:13.905984 kernel: NUMA: Failed to initialise from firmware Nov 12 18:00:13.905990 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 12 18:00:13.905997 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Nov 12 18:00:13.906003 kernel: Zone ranges: Nov 12 18:00:13.906009 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 12 18:00:13.906015 kernel: DMA32 empty Nov 12 18:00:13.906023 kernel: Normal empty Nov 12 18:00:13.906030 kernel: Movable zone start for each node Nov 12 18:00:13.906036 kernel: Early memory node ranges Nov 12 18:00:13.906042 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Nov 12 18:00:13.906049 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Nov 12 18:00:13.906055 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Nov 12 18:00:13.906061 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Nov 12 18:00:13.906067 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Nov 12 18:00:13.906074 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Nov 12 18:00:13.906080 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Nov 12 18:00:13.906086 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 12 18:00:13.906093 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 12 18:00:13.906100 kernel: psci: probing for conduit method from ACPI. Nov 12 18:00:13.906106 kernel: psci: PSCIv1.1 detected in firmware. 
Nov 12 18:00:13.906113 kernel: psci: Using standard PSCI v0.2 function IDs Nov 12 18:00:13.906122 kernel: psci: Trusted OS migration not required Nov 12 18:00:13.906129 kernel: psci: SMC Calling Convention v1.1 Nov 12 18:00:13.906135 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 12 18:00:13.906143 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Nov 12 18:00:13.906150 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Nov 12 18:00:13.906157 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 12 18:00:13.906164 kernel: Detected PIPT I-cache on CPU0 Nov 12 18:00:13.906170 kernel: CPU features: detected: GIC system register CPU interface Nov 12 18:00:13.906177 kernel: CPU features: detected: Hardware dirty bit management Nov 12 18:00:13.906184 kernel: CPU features: detected: Spectre-v4 Nov 12 18:00:13.906190 kernel: CPU features: detected: Spectre-BHB Nov 12 18:00:13.906197 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 12 18:00:13.906204 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 12 18:00:13.906212 kernel: CPU features: detected: ARM erratum 1418040 Nov 12 18:00:13.906219 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 12 18:00:13.906226 kernel: alternatives: applying boot alternatives Nov 12 18:00:13.906234 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e Nov 12 18:00:13.906241 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 18:00:13.906248 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 18:00:13.906255 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 18:00:13.906262 kernel: Fallback order for Node 0: 0 Nov 12 18:00:13.906268 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Nov 12 18:00:13.906275 kernel: Policy zone: DMA Nov 12 18:00:13.906282 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 18:00:13.906290 kernel: software IO TLB: area num 4. Nov 12 18:00:13.906297 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Nov 12 18:00:13.906304 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved) Nov 12 18:00:13.906311 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 12 18:00:13.906318 kernel: trace event string verifier disabled Nov 12 18:00:13.906325 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 18:00:13.906332 kernel: rcu: RCU event tracing is enabled. Nov 12 18:00:13.906339 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 12 18:00:13.906346 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 18:00:13.906353 kernel: Tracing variant of Tasks RCU enabled. Nov 12 18:00:13.906359 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 12 18:00:13.906366 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 12 18:00:13.906374 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 12 18:00:13.906381 kernel: GICv3: 256 SPIs implemented Nov 12 18:00:13.906387 kernel: GICv3: 0 Extended SPIs implemented Nov 12 18:00:13.906394 kernel: Root IRQ handler: gic_handle_irq Nov 12 18:00:13.906400 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 12 18:00:13.906407 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 12 18:00:13.906414 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 12 18:00:13.906421 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Nov 12 18:00:13.906428 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Nov 12 18:00:13.906434 kernel: GICv3: using LPI property table @0x00000000400f0000 Nov 12 18:00:13.906441 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Nov 12 18:00:13.906449 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 12 18:00:13.906456 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 18:00:13.906463 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 12 18:00:13.906470 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 12 18:00:13.906476 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 12 18:00:13.906483 kernel: arm-pv: using stolen time PV Nov 12 18:00:13.906490 kernel: Console: colour dummy device 80x25 Nov 12 18:00:13.906497 kernel: ACPI: Core revision 20230628 Nov 12 18:00:13.906504 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 12 18:00:13.906511 kernel: pid_max: default: 32768 minimum: 301 Nov 12 18:00:13.906520 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 18:00:13.906527 kernel: landlock: Up and running. Nov 12 18:00:13.906534 kernel: SELinux: Initializing. Nov 12 18:00:13.906541 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 18:00:13.906606 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 18:00:13.906614 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 18:00:13.906621 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 18:00:13.906628 kernel: rcu: Hierarchical SRCU implementation. Nov 12 18:00:13.906635 kernel: rcu: Max phase no-delay instances is 400. Nov 12 18:00:13.906645 kernel: Platform MSI: ITS@0x8080000 domain created Nov 12 18:00:13.906652 kernel: PCI/MSI: ITS@0x8080000 domain created Nov 12 18:00:13.906659 kernel: Remapping and enabling EFI services. Nov 12 18:00:13.906665 kernel: smp: Bringing up secondary CPUs ... 
Nov 12 18:00:13.906672 kernel: Detected PIPT I-cache on CPU1 Nov 12 18:00:13.906679 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 12 18:00:13.906686 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Nov 12 18:00:13.906693 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 18:00:13.906700 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 12 18:00:13.906707 kernel: Detected PIPT I-cache on CPU2 Nov 12 18:00:13.906716 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 12 18:00:13.906723 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Nov 12 18:00:13.906735 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 18:00:13.906743 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 12 18:00:13.906750 kernel: Detected PIPT I-cache on CPU3 Nov 12 18:00:13.906758 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 12 18:00:13.906765 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Nov 12 18:00:13.906772 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 12 18:00:13.906780 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 12 18:00:13.906788 kernel: smp: Brought up 1 node, 4 CPUs Nov 12 18:00:13.906796 kernel: SMP: Total of 4 processors activated. Nov 12 18:00:13.906803 kernel: CPU features: detected: 32-bit EL0 Support Nov 12 18:00:13.906810 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 12 18:00:13.906818 kernel: CPU features: detected: Common not Private translations Nov 12 18:00:13.906825 kernel: CPU features: detected: CRC32 instructions Nov 12 18:00:13.906832 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 12 18:00:13.906839 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 12 18:00:13.906848 kernel: CPU features: detected: LSE atomic instructions Nov 12 18:00:13.906855 kernel: CPU features: detected: Privileged Access Never Nov 12 18:00:13.906862 kernel: CPU features: detected: RAS Extension Support Nov 12 18:00:13.906869 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 12 18:00:13.906877 kernel: CPU: All CPU(s) started at EL1 Nov 12 18:00:13.906884 kernel: alternatives: applying system-wide alternatives Nov 12 18:00:13.906891 kernel: devtmpfs: initialized Nov 12 18:00:13.906899 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 18:00:13.906906 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 12 18:00:13.906915 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 18:00:13.906922 kernel: SMBIOS 3.0.0 present. 
Nov 12 18:00:13.906930 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Nov 12 18:00:13.906937 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 18:00:13.906944 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 12 18:00:13.906952 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 12 18:00:13.906959 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 12 18:00:13.906972 kernel: audit: initializing netlink subsys (disabled) Nov 12 18:00:13.906979 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Nov 12 18:00:13.906988 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 18:00:13.906995 kernel: cpuidle: using governor menu Nov 12 18:00:13.907003 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 12 18:00:13.907010 kernel: ASID allocator initialised with 32768 entries Nov 12 18:00:13.907017 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 18:00:13.907025 kernel: Serial: AMBA PL011 UART driver Nov 12 18:00:13.907032 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 12 18:00:13.907039 kernel: Modules: 0 pages in range for non-PLT usage Nov 12 18:00:13.907046 kernel: Modules: 509040 pages in range for PLT usage Nov 12 18:00:13.907055 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 18:00:13.907062 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 18:00:13.907069 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 12 18:00:13.907077 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 12 18:00:13.907084 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 18:00:13.907091 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 18:00:13.907099 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 12 18:00:13.907106 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 12 18:00:13.907113 kernel: ACPI: Added _OSI(Module Device) Nov 12 18:00:13.907122 kernel: ACPI: Added _OSI(Processor Device) Nov 12 18:00:13.907129 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 18:00:13.907136 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 18:00:13.907144 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 18:00:13.907151 kernel: ACPI: Interpreter enabled Nov 12 18:00:13.907158 kernel: ACPI: Using GIC for interrupt routing Nov 12 18:00:13.907165 kernel: ACPI: MCFG table detected, 1 entries Nov 12 18:00:13.907173 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 12 18:00:13.907180 kernel: printk: console [ttyAMA0] enabled Nov 12 18:00:13.907189 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 12 18:00:13.907324 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 18:00:13.907399 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 12 18:00:13.907465 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 12 18:00:13.907529 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 12 18:00:13.907606 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 12 18:00:13.907617 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 12 
18:00:13.907627 kernel: PCI host bridge to bus 0000:00 Nov 12 18:00:13.907697 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 12 18:00:13.907756 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 12 18:00:13.907815 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 12 18:00:13.907890 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 12 18:00:13.907977 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Nov 12 18:00:13.908057 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Nov 12 18:00:13.908128 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Nov 12 18:00:13.908194 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Nov 12 18:00:13.908259 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Nov 12 18:00:13.908325 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Nov 12 18:00:13.908390 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Nov 12 18:00:13.908456 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Nov 12 18:00:13.908514 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 12 18:00:13.908585 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 12 18:00:13.908644 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 12 18:00:13.908654 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 12 18:00:13.908662 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 12 18:00:13.908675 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 12 18:00:13.908682 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 12 18:00:13.908689 kernel: iommu: Default domain type: Translated Nov 12 18:00:13.908697 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 12 18:00:13.908706 kernel: efivars: Registered efivars operations Nov 12 18:00:13.908713 kernel: vgaarb: loaded Nov 12 18:00:13.908720 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 12 18:00:13.908728 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 18:00:13.908735 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 18:00:13.908742 kernel: pnp: PnP ACPI init Nov 12 18:00:13.908833 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 12 18:00:13.908844 kernel: pnp: PnP ACPI: found 1 devices Nov 12 18:00:13.908854 kernel: NET: Registered PF_INET protocol family Nov 12 18:00:13.908861 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 18:00:13.908869 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 12 18:00:13.908876 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 18:00:13.908883 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 12 18:00:13.908891 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 12 18:00:13.908898 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 12 18:00:13.908905 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 18:00:13.908913 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 18:00:13.908922 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 18:00:13.908929 kernel: PCI: CLS 0 bytes, default 64 Nov 12 18:00:13.908936 kernel: kvm [1]: HYP mode 
not available Nov 12 18:00:13.908943 kernel: Initialise system trusted keyrings Nov 12 18:00:13.908951 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 12 18:00:13.908958 kernel: Key type asymmetric registered Nov 12 18:00:13.908973 kernel: Asymmetric key parser 'x509' registered Nov 12 18:00:13.908980 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 12 18:00:13.908988 kernel: io scheduler mq-deadline registered Nov 12 18:00:13.908997 kernel: io scheduler kyber registered Nov 12 18:00:13.909004 kernel: io scheduler bfq registered Nov 12 18:00:13.909012 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 12 18:00:13.909019 kernel: ACPI: button: Power Button [PWRB] Nov 12 18:00:13.909027 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 12 18:00:13.909104 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 12 18:00:13.909114 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 18:00:13.909121 kernel: thunder_xcv, ver 1.0 Nov 12 18:00:13.909128 kernel: thunder_bgx, ver 1.0 Nov 12 18:00:13.909138 kernel: nicpf, ver 1.0 Nov 12 18:00:13.909145 kernel: nicvf, ver 1.0 Nov 12 18:00:13.909236 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 12 18:00:13.909300 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-11-12T18:00:13 UTC (1731434413) Nov 12 18:00:13.909310 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 12 18:00:13.909318 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Nov 12 18:00:13.909325 kernel: watchdog: Delayed init of the lockup detector failed: -19 Nov 12 18:00:13.909332 kernel: watchdog: Hard watchdog permanently disabled Nov 12 18:00:13.909342 kernel: NET: Registered PF_INET6 protocol family Nov 12 18:00:13.909349 kernel: Segment Routing with IPv6 Nov 12 18:00:13.909356 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 18:00:13.909363 kernel: NET: Registered PF_PACKET protocol family Nov 12 18:00:13.909371 kernel: Key type dns_resolver registered Nov 12 18:00:13.909378 kernel: registered taskstats version 1 Nov 12 18:00:13.909385 kernel: Loading compiled-in X.509 certificates Nov 12 18:00:13.909393 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 277bea35d8d47c9841f307ab609d4271c3622dcb' Nov 12 18:00:13.909400 kernel: Key type .fscrypt registered Nov 12 18:00:13.909409 kernel: Key type fscrypt-provisioning registered Nov 12 18:00:13.909416 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 12 18:00:13.909423 kernel: ima: Allocated hash algorithm: sha1 Nov 12 18:00:13.909430 kernel: ima: No architecture policies found Nov 12 18:00:13.909438 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 12 18:00:13.909445 kernel: clk: Disabling unused clocks Nov 12 18:00:13.909452 kernel: Freeing unused kernel memory: 39360K Nov 12 18:00:13.909459 kernel: Run /init as init process Nov 12 18:00:13.909467 kernel: with arguments: Nov 12 18:00:13.909475 kernel: /init Nov 12 18:00:13.909483 kernel: with environment: Nov 12 18:00:13.909490 kernel: HOME=/ Nov 12 18:00:13.909497 kernel: TERM=linux Nov 12 18:00:13.909504 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 18:00:13.909513 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 18:00:13.909523 systemd[1]: Detected virtualization kvm. Nov 12 18:00:13.909531 systemd[1]: Detected architecture arm64. Nov 12 18:00:13.909540 systemd[1]: Running in initrd. Nov 12 18:00:13.909583 systemd[1]: No hostname configured, using default hostname. Nov 12 18:00:13.909590 systemd[1]: Hostname set to . Nov 12 18:00:13.909599 systemd[1]: Initializing machine ID from VM UUID. Nov 12 18:00:13.909606 systemd[1]: Queued start job for default target initrd.target. Nov 12 18:00:13.909614 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 18:00:13.909622 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 18:00:13.909630 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 18:00:13.909641 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 18:00:13.909649 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 18:00:13.909657 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 18:00:13.909667 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 18:00:13.909675 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 18:00:13.909683 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 18:00:13.909692 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 18:00:13.909700 systemd[1]: Reached target paths.target - Path Units. Nov 12 18:00:13.909708 systemd[1]: Reached target slices.target - Slice Units. Nov 12 18:00:13.909715 systemd[1]: Reached target swap.target - Swaps. Nov 12 18:00:13.909723 systemd[1]: Reached target timers.target - Timer Units. Nov 12 18:00:13.909731 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 18:00:13.909739 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 18:00:13.909747 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 18:00:13.909755 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 18:00:13.909764 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 12 18:00:13.909773 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 18:00:13.909781 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 18:00:13.909789 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 18:00:13.909797 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 18:00:13.909817 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 18:00:13.909825 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 18:00:13.909833 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 18:00:13.909841 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 18:00:13.909851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 18:00:13.909859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 18:00:13.909866 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 18:00:13.909874 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 18:00:13.909882 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 18:00:13.909892 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 18:00:13.909900 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 18:00:13.909926 systemd-journald[239]: Collecting audit messages is disabled. Nov 12 18:00:13.909948 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 18:00:13.909957 systemd-journald[239]: Journal started Nov 12 18:00:13.909982 systemd-journald[239]: Runtime Journal (/run/log/journal/480369f1a6f445469252f1e2e4a336a7) is 5.9M, max 47.3M, 41.4M free. Nov 12 18:00:13.901389 systemd-modules-load[240]: Inserted module 'overlay' Nov 12 18:00:13.912172 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 18:00:13.914578 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 18:00:13.916401 systemd-modules-load[240]: Inserted module 'br_netfilter' Nov 12 18:00:13.917635 kernel: Bridge firewalling registered Nov 12 18:00:13.917268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 18:00:13.919630 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 18:00:13.921951 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 18:00:13.926209 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 18:00:13.927570 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 18:00:13.928520 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 18:00:13.934405 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 18:00:13.936708 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 18:00:13.944295 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 18:00:13.945318 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 12 18:00:13.949594 dracut-cmdline[273]: dracut-dracut-053 Nov 12 18:00:13.952344 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e Nov 12 18:00:13.959702 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 18:00:13.985168 systemd-resolved[286]: Positive Trust Anchors: Nov 12 18:00:13.985185 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 18:00:13.985217 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 18:00:13.989911 systemd-resolved[286]: Defaulting to hostname 'linux'. Nov 12 18:00:13.990924 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 18:00:13.992530 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 18:00:14.031579 kernel: SCSI subsystem initialized Nov 12 18:00:14.036564 kernel: Loading iSCSI transport class v2.0-870. Nov 12 18:00:14.044571 kernel: iscsi: registered transport (tcp) Nov 12 18:00:14.061601 kernel: iscsi: registered transport (qla4xxx) Nov 12 18:00:14.061614 kernel: QLogic iSCSI HBA Driver Nov 12 18:00:14.111873 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 18:00:14.123672 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 18:00:14.142679 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 12 18:00:14.142735 kernel: device-mapper: uevent: version 1.0.3 Nov 12 18:00:14.142759 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 18:00:14.190579 kernel: raid6: neonx8 gen() 15769 MB/s Nov 12 18:00:14.207579 kernel: raid6: neonx4 gen() 15641 MB/s Nov 12 18:00:14.224578 kernel: raid6: neonx2 gen() 13237 MB/s Nov 12 18:00:14.241566 kernel: raid6: neonx1 gen() 10473 MB/s Nov 12 18:00:14.258567 kernel: raid6: int64x8 gen() 6953 MB/s Nov 12 18:00:14.275565 kernel: raid6: int64x4 gen() 7338 MB/s Nov 12 18:00:14.292568 kernel: raid6: int64x2 gen() 6125 MB/s Nov 12 18:00:14.309763 kernel: raid6: int64x1 gen() 5055 MB/s Nov 12 18:00:14.309776 kernel: raid6: using algorithm neonx8 gen() 15769 MB/s Nov 12 18:00:14.327727 kernel: raid6: .... 
xor() 11912 MB/s, rmw enabled Nov 12 18:00:14.327754 kernel: raid6: using neon recovery algorithm Nov 12 18:00:14.332564 kernel: xor: measuring software checksum speed Nov 12 18:00:14.333918 kernel: 8regs : 17421 MB/sec Nov 12 18:00:14.333931 kernel: 32regs : 19622 MB/sec Nov 12 18:00:14.334568 kernel: arm64_neon : 25600 MB/sec Nov 12 18:00:14.334579 kernel: xor: using function: arm64_neon (25600 MB/sec) Nov 12 18:00:14.385577 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 18:00:14.395606 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 12 18:00:14.401750 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 18:00:14.416601 systemd-udevd[460]: Using default interface naming scheme 'v255'. Nov 12 18:00:14.419797 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 18:00:14.427709 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 18:00:14.439844 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Nov 12 18:00:14.467447 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 18:00:14.479677 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 18:00:14.526825 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 18:00:14.533836 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 18:00:14.553035 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 18:00:14.554573 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 18:00:14.555738 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 18:00:14.556521 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 18:00:14.564701 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 18:00:14.576600 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 18:00:14.592155 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 12 18:00:14.600074 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 12 18:00:14.600174 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 12 18:00:14.600186 kernel: GPT:9289727 != 19775487 Nov 12 18:00:14.600195 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 12 18:00:14.600211 kernel: GPT:9289727 != 19775487 Nov 12 18:00:14.600220 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 12 18:00:14.600231 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 18:00:14.595906 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 18:00:14.596032 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 18:00:14.599523 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 18:00:14.600635 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 18:00:14.600950 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 18:00:14.603651 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 12 18:00:14.618570 kernel: BTRFS: device fsid 93a9d474-e751-47b7-a65f-e39ca9abd47a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (520) Nov 12 18:00:14.618610 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (512) Nov 12 18:00:14.619852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 18:00:14.631072 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 12 18:00:14.632372 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 18:00:14.637279 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 12 18:00:14.647400 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 12 18:00:14.648352 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 12 18:00:14.653450 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 18:00:14.665697 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 18:00:14.667191 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 18:00:14.673265 disk-uuid[551]: Primary Header is updated. Nov 12 18:00:14.673265 disk-uuid[551]: Secondary Entries is updated. Nov 12 18:00:14.673265 disk-uuid[551]: Secondary Header is updated. Nov 12 18:00:14.678575 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 18:00:14.687742 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 18:00:15.691580 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 18:00:15.692445 disk-uuid[554]: The operation has completed successfully. Nov 12 18:00:15.724862 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 18:00:15.724951 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 18:00:15.740764 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 18:00:15.744667 sh[573]: Success Nov 12 18:00:15.759568 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Nov 12 18:00:15.803015 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 18:00:15.804637 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 18:00:15.805421 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 18:00:15.817683 kernel: BTRFS info (device dm-0): first mount of filesystem 93a9d474-e751-47b7-a65f-e39ca9abd47a Nov 12 18:00:15.817725 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 12 18:00:15.817737 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 18:00:15.819566 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 18:00:15.819584 kernel: BTRFS info (device dm-0): using free space tree Nov 12 18:00:15.827738 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 18:00:15.828944 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 18:00:15.836690 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 12 18:00:15.838145 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 18:00:15.846762 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 18:00:15.846814 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 12 18:00:15.846826 kernel: BTRFS info (device vda6): using free space tree Nov 12 18:00:15.850583 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 18:00:15.859539 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 12 18:00:15.861568 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 18:00:15.870825 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 12 18:00:15.879751 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 18:00:15.939719 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 18:00:15.948769 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 18:00:15.978561 systemd-networkd[763]: lo: Link UP Nov 12 18:00:15.978571 systemd-networkd[763]: lo: Gained carrier Nov 12 18:00:15.979279 systemd-networkd[763]: Enumeration completed Nov 12 18:00:15.979809 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 18:00:15.979812 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 18:00:15.980618 systemd-networkd[763]: eth0: Link UP Nov 12 18:00:15.980621 systemd-networkd[763]: eth0: Gained carrier Nov 12 18:00:15.980628 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 18:00:15.981610 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 18:00:15.983379 systemd[1]: Reached target network.target - Network. 
Nov 12 18:00:15.989347 ignition[675]: Ignition 2.19.0 Nov 12 18:00:15.989353 ignition[675]: Stage: fetch-offline Nov 12 18:00:15.989390 ignition[675]: no configs at "/usr/lib/ignition/base.d" Nov 12 18:00:15.989398 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 18:00:15.989573 ignition[675]: parsed url from cmdline: "" Nov 12 18:00:15.989577 ignition[675]: no config URL provided Nov 12 18:00:15.989581 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 18:00:15.989589 ignition[675]: no config at "/usr/lib/ignition/user.ign" Nov 12 18:00:15.989610 ignition[675]: op(1): [started] loading QEMU firmware config module Nov 12 18:00:15.989615 ignition[675]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 12 18:00:15.996805 ignition[675]: op(1): [finished] loading QEMU firmware config module Nov 12 18:00:16.000588 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 18:00:16.034494 ignition[675]: parsing config with SHA512: 34579ae3219d2527d5f4b2efe1ef2603345f305adbe1e35df2808df7f4ab1f860be1e755f1228783be0171c1b3f951a48446f4e443ad978054eb0d6f1490a65d Nov 12 18:00:16.038645 unknown[675]: fetched base config from "system" Nov 12 18:00:16.038655 unknown[675]: fetched user config from "qemu" Nov 12 18:00:16.039061 ignition[675]: fetch-offline: fetch-offline passed Nov 12 18:00:16.039123 ignition[675]: Ignition finished successfully Nov 12 18:00:16.042213 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 18:00:16.044293 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 12 18:00:16.050742 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 12 18:00:16.060671 ignition[771]: Ignition 2.19.0 Nov 12 18:00:16.060679 ignition[771]: Stage: kargs Nov 12 18:00:16.060837 ignition[771]: no configs at "/usr/lib/ignition/base.d" Nov 12 18:00:16.060846 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 18:00:16.061765 ignition[771]: kargs: kargs passed Nov 12 18:00:16.061810 ignition[771]: Ignition finished successfully Nov 12 18:00:16.063863 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 18:00:16.066104 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 12 18:00:16.078355 ignition[779]: Ignition 2.19.0 Nov 12 18:00:16.078364 ignition[779]: Stage: disks Nov 12 18:00:16.078520 ignition[779]: no configs at "/usr/lib/ignition/base.d" Nov 12 18:00:16.078529 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 18:00:16.079410 ignition[779]: disks: disks passed Nov 12 18:00:16.079454 ignition[779]: Ignition finished successfully Nov 12 18:00:16.081917 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 18:00:16.082920 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 18:00:16.084187 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 18:00:16.085747 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 18:00:16.087281 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 18:00:16.088689 systemd[1]: Reached target basic.target - Basic System. Nov 12 18:00:16.103693 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Nov 12 18:00:16.114636 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 12 18:00:16.119927 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 18:00:16.132668 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 18:00:16.176562 kernel: EXT4-fs (vda9): mounted filesystem b3af0fd7-3c7c-4cdc-9b88-dae3d10ea922 r/w with ordered data mode. Quota mode: none. Nov 12 18:00:16.176896 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 18:00:16.178014 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 18:00:16.188636 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 18:00:16.190264 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 18:00:16.191180 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 12 18:00:16.191221 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 18:00:16.191244 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 18:00:16.197994 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797) Nov 12 18:00:16.197390 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 18:00:16.199587 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 12 18:00:16.203144 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 18:00:16.203173 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 12 18:00:16.203183 kernel: BTRFS info (device vda6): using free space tree Nov 12 18:00:16.205568 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 18:00:16.206433 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 18:00:16.247467 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 18:00:16.252049 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory Nov 12 18:00:16.255579 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 18:00:16.259319 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 18:00:16.331609 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 18:00:16.341643 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 18:00:16.342983 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 18:00:16.348557 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 18:00:16.363372 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 18:00:16.364736 ignition[910]: INFO : Ignition 2.19.0 Nov 12 18:00:16.364736 ignition[910]: INFO : Stage: mount Nov 12 18:00:16.364736 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 18:00:16.364736 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 18:00:16.367303 ignition[910]: INFO : mount: mount passed Nov 12 18:00:16.367303 ignition[910]: INFO : Ignition finished successfully Nov 12 18:00:16.366830 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 18:00:16.377647 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 18:00:16.816278 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 12 18:00:16.825715 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 18:00:16.833190 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923) Nov 12 18:00:16.833232 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 18:00:16.833244 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 12 18:00:16.834765 kernel: BTRFS info (device vda6): using free space tree Nov 12 18:00:16.837560 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 18:00:16.838294 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 18:00:16.854505 ignition[940]: INFO : Ignition 2.19.0 Nov 12 18:00:16.854505 ignition[940]: INFO : Stage: files Nov 12 18:00:16.855701 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 18:00:16.855701 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 18:00:16.855701 ignition[940]: DEBUG : files: compiled without relabeling support, skipping Nov 12 18:00:16.858320 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 18:00:16.858320 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 18:00:16.858320 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 18:00:16.858320 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 18:00:16.862364 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 18:00:16.862364 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Nov 12 18:00:16.862364 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Nov 12 18:00:16.858604 unknown[940]: wrote ssh authorized keys file for user: core Nov 12 18:00:16.906539 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 18:00:17.098599 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Nov 12 18:00:17.098599 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 18:00:17.098599 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Nov 12 18:00:17.368368 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 12 18:00:17.425567 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 18:00:17.425567 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 12 18:00:17.431684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 18:00:17.431684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 18:00:17.431684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Nov 12 18:00:17.431684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 18:00:17.431684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 18:00:17.431684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 18:00:17.431684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 18:00:17.431684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 18:00:17.431684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 18:00:17.431684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Nov 12 18:00:17.431684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Nov 12 18:00:17.431684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Nov 12 18:00:17.431684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Nov 12 18:00:17.712214 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 12 18:00:17.955278 systemd-networkd[763]: eth0: Gained IPv6LL Nov 12 18:00:18.024289 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Nov 12 18:00:18.026342 ignition[940]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 12 18:00:18.026342 ignition[940]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 18:00:18.026342 ignition[940]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 18:00:18.026342 ignition[940]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 12 18:00:18.026342 ignition[940]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 12 18:00:18.026342 ignition[940]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 18:00:18.026342 ignition[940]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 18:00:18.026342 ignition[940]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 12 18:00:18.026342 ignition[940]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 18:00:18.049744 ignition[940]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 
18:00:18.053129 ignition[940]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 18:00:18.055438 ignition[940]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 18:00:18.055438 ignition[940]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 12 18:00:18.055438 ignition[940]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 18:00:18.055438 ignition[940]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 18:00:18.055438 ignition[940]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 18:00:18.055438 ignition[940]: INFO : files: files passed Nov 12 18:00:18.055438 ignition[940]: INFO : Ignition finished successfully Nov 12 18:00:18.056177 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 18:00:18.067710 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 18:00:18.069243 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 18:00:18.072314 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 18:00:18.072406 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 18:00:18.077091 initrd-setup-root-after-ignition[968]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 18:00:18.079258 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 18:00:18.079258 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 18:00:18.081465 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 18:00:18.082289 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 18:00:18.084624 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 18:00:18.098744 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 18:00:18.116864 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 18:00:18.117637 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 18:00:18.118705 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 18:00:18.120167 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 18:00:18.121435 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 18:00:18.122157 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 18:00:18.136427 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 18:00:18.138492 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 18:00:18.149331 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 18:00:18.150315 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 18:00:18.151257 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 18:00:18.152642 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Nov 12 18:00:18.152760 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 18:00:18.154894 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 18:00:18.156292 systemd[1]: Stopped target basic.target - Basic System. Nov 12 18:00:18.157663 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 18:00:18.159032 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 18:00:18.160449 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 18:00:18.161950 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 18:00:18.163357 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 18:00:18.164871 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 18:00:18.166238 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 18:00:18.167621 systemd[1]: Stopped target swap.target - Swaps. Nov 12 18:00:18.168789 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 18:00:18.168906 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 18:00:18.170615 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 18:00:18.171989 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 18:00:18.173337 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 18:00:18.176595 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 18:00:18.177482 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 18:00:18.177637 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 18:00:18.179771 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 18:00:18.179893 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 18:00:18.181293 systemd[1]: Stopped target paths.target - Path Units. Nov 12 18:00:18.182434 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 18:00:18.185600 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 18:00:18.186513 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 18:00:18.188070 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 18:00:18.189224 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 18:00:18.189310 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 18:00:18.190385 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 18:00:18.190464 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 18:00:18.191583 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 18:00:18.191691 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 18:00:18.193055 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 18:00:18.193149 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 18:00:18.207776 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 18:00:18.208475 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 18:00:18.208623 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 18:00:18.213745 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Nov 12 18:00:18.214394 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 18:00:18.214512 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 18:00:18.215829 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 18:00:18.215933 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 18:00:18.220165 ignition[994]: INFO : Ignition 2.19.0 Nov 12 18:00:18.220165 ignition[994]: INFO : Stage: umount Nov 12 18:00:18.221402 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 18:00:18.221402 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 18:00:18.221402 ignition[994]: INFO : umount: umount passed Nov 12 18:00:18.221402 ignition[994]: INFO : Ignition finished successfully Nov 12 18:00:18.222623 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 18:00:18.222724 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 18:00:18.223954 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 18:00:18.224032 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 18:00:18.225836 systemd[1]: Stopped target network.target - Network. Nov 12 18:00:18.227143 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 18:00:18.227199 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 18:00:18.228586 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 18:00:18.228626 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 18:00:18.229925 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 18:00:18.229972 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 18:00:18.231163 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 18:00:18.231202 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 18:00:18.232747 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 18:00:18.234056 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 18:00:18.236093 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 18:00:18.237582 systemd-networkd[763]: eth0: DHCPv6 lease lost Nov 12 18:00:18.239652 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 18:00:18.239756 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 18:00:18.243921 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 18:00:18.244016 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 18:00:18.246455 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 18:00:18.246498 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 18:00:18.261676 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 18:00:18.262338 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 18:00:18.262403 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 18:00:18.263810 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 18:00:18.263851 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 18:00:18.265119 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 18:00:18.265160 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Nov 12 18:00:18.266701 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 18:00:18.266740 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 18:00:18.268190 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 18:00:18.278501 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 18:00:18.278662 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 18:00:18.287330 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 18:00:18.287478 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 18:00:18.289370 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 18:00:18.289411 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 18:00:18.290865 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 18:00:18.290895 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 18:00:18.292231 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 18:00:18.292273 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 18:00:18.294360 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 18:00:18.294401 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 18:00:18.296440 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 18:00:18.296481 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 18:00:18.306761 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 18:00:18.307576 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 18:00:18.307629 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 18:00:18.309259 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 18:00:18.309299 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 18:00:18.311023 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 18:00:18.312567 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 18:00:18.314243 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 18:00:18.314317 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 18:00:18.316149 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 18:00:18.317562 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 18:00:18.317615 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 18:00:18.319893 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 18:00:18.328999 systemd[1]: Switching root. Nov 12 18:00:18.358522 systemd-journald[239]: Journal stopped Nov 12 18:00:19.062269 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
Nov 12 18:00:19.062324 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 18:00:19.062336 kernel: SELinux: policy capability open_perms=1 Nov 12 18:00:19.062348 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 18:00:19.062357 kernel: SELinux: policy capability always_check_network=0 Nov 12 18:00:19.062367 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 18:00:19.062376 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 18:00:19.062388 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 18:00:19.062398 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 18:00:19.062411 kernel: audit: type=1403 audit(1731434418.507:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 18:00:19.062422 systemd[1]: Successfully loaded SELinux policy in 32.525ms. Nov 12 18:00:19.062439 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.694ms. Nov 12 18:00:19.062450 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 18:00:19.062461 systemd[1]: Detected virtualization kvm. Nov 12 18:00:19.062472 systemd[1]: Detected architecture arm64. Nov 12 18:00:19.062482 systemd[1]: Detected first boot. Nov 12 18:00:19.062494 systemd[1]: Initializing machine ID from VM UUID. Nov 12 18:00:19.062505 zram_generator::config[1039]: No configuration found. Nov 12 18:00:19.062531 systemd[1]: Populated /etc with preset unit settings. Nov 12 18:00:19.062598 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 18:00:19.062613 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 12 18:00:19.062630 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 18:00:19.062642 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 18:00:19.062653 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 18:00:19.062666 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 18:00:19.062676 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 18:00:19.062687 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 18:00:19.062698 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 18:00:19.062708 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 18:00:19.062758 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 18:00:19.062772 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 18:00:19.062783 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 18:00:19.062794 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 18:00:19.062807 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 18:00:19.062818 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Nov 12 18:00:19.062829 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 18:00:19.062839 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 12 18:00:19.062850 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 18:00:19.062861 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 18:00:19.062872 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 18:00:19.062883 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 18:00:19.062895 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 18:00:19.062905 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 18:00:19.062917 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 18:00:19.062927 systemd[1]: Reached target slices.target - Slice Units. Nov 12 18:00:19.062938 systemd[1]: Reached target swap.target - Swaps. Nov 12 18:00:19.062953 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 18:00:19.062966 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 18:00:19.062976 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 18:00:19.062986 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 18:00:19.063001 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 18:00:19.063011 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 18:00:19.063022 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 18:00:19.063032 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 18:00:19.063042 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 18:00:19.063052 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 18:00:19.063063 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 18:00:19.063073 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 18:00:19.063085 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 18:00:19.063097 systemd[1]: Reached target machines.target - Containers. Nov 12 18:00:19.063107 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 18:00:19.063118 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 18:00:19.063129 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 18:00:19.063139 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 18:00:19.063150 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 18:00:19.063160 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 18:00:19.063170 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 18:00:19.063182 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 18:00:19.063193 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 12 18:00:19.063203 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 18:00:19.063213 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 18:00:19.063225 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 18:00:19.063236 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 18:00:19.063246 kernel: fuse: init (API version 7.39) Nov 12 18:00:19.063256 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 18:00:19.063268 kernel: loop: module loaded Nov 12 18:00:19.063278 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 18:00:19.063289 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 18:00:19.063299 kernel: ACPI: bus type drm_connector registered Nov 12 18:00:19.063309 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 18:00:19.063319 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 18:00:19.063330 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 18:00:19.063341 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 18:00:19.063351 systemd[1]: Stopped verity-setup.service. Nov 12 18:00:19.063361 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 18:00:19.063373 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 18:00:19.063403 systemd-journald[1106]: Collecting audit messages is disabled. Nov 12 18:00:19.063429 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 18:00:19.063441 systemd-journald[1106]: Journal started Nov 12 18:00:19.063462 systemd-journald[1106]: Runtime Journal (/run/log/journal/480369f1a6f445469252f1e2e4a336a7) is 5.9M, max 47.3M, 41.4M free. Nov 12 18:00:18.863939 systemd[1]: Queued start job for default target multi-user.target. Nov 12 18:00:18.884194 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 18:00:18.884570 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 18:00:19.065573 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 18:00:19.065918 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 18:00:19.066802 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 18:00:19.067679 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 18:00:19.069598 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 18:00:19.070704 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 18:00:19.071821 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 18:00:19.071963 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 18:00:19.073045 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 18:00:19.073176 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 18:00:19.074223 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 18:00:19.074350 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 18:00:19.075358 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 18:00:19.075486 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 12 18:00:19.076819 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 18:00:19.076957 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 18:00:19.078086 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 18:00:19.079582 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 18:00:19.080577 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 18:00:19.081652 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 18:00:19.082739 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 18:00:19.093586 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 18:00:19.105632 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 18:00:19.107372 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 18:00:19.108215 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 18:00:19.108252 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 18:00:19.109837 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 18:00:19.111649 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 18:00:19.113412 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 18:00:19.114295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 18:00:19.115602 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 18:00:19.117229 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 18:00:19.118138 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 18:00:19.121705 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 18:00:19.123636 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 18:00:19.124892 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 18:00:19.125256 systemd-journald[1106]: Time spent on flushing to /var/log/journal/480369f1a6f445469252f1e2e4a336a7 is 26.461ms for 857 entries. Nov 12 18:00:19.125256 systemd-journald[1106]: System Journal (/var/log/journal/480369f1a6f445469252f1e2e4a336a7) is 8.0M, max 195.6M, 187.6M free. Nov 12 18:00:19.166430 systemd-journald[1106]: Received client request to flush runtime journal. Nov 12 18:00:19.166477 kernel: loop0: detected capacity change from 0 to 114328 Nov 12 18:00:19.166490 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 18:00:19.130573 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 18:00:19.132635 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 18:00:19.137612 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 18:00:19.138872 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 18:00:19.139854 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Nov 12 18:00:19.141612 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 18:00:19.142811 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 18:00:19.148991 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 18:00:19.163008 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 18:00:19.165706 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 18:00:19.170004 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 18:00:19.171424 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 18:00:19.179496 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 18:00:19.181769 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 18:00:19.184484 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 18:00:19.184824 kernel: loop1: detected capacity change from 0 to 114432 Nov 12 18:00:19.193782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 18:00:19.194959 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 12 18:00:19.214400 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Nov 12 18:00:19.214749 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Nov 12 18:00:19.219470 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 18:00:19.223562 kernel: loop2: detected capacity change from 0 to 189592 Nov 12 18:00:19.254574 kernel: loop3: detected capacity change from 0 to 114328 Nov 12 18:00:19.259570 kernel: loop4: detected capacity change from 0 to 114432 Nov 12 18:00:19.264750 kernel: loop5: detected capacity change from 0 to 189592 Nov 12 18:00:19.267824 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 18:00:19.268209 (sd-merge)[1174]: Merged extensions into '/usr'. Nov 12 18:00:19.273273 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 18:00:19.273288 systemd[1]: Reloading... Nov 12 18:00:19.320627 zram_generator::config[1202]: No configuration found. Nov 12 18:00:19.395092 ldconfig[1145]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 18:00:19.415631 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 18:00:19.451566 systemd[1]: Reloading finished in 177 ms. Nov 12 18:00:19.476067 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 18:00:19.478064 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 18:00:19.494687 systemd[1]: Starting ensure-sysext.service... Nov 12 18:00:19.496377 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 18:00:19.504801 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)... Nov 12 18:00:19.504817 systemd[1]: Reloading... 
Nov 12 18:00:19.512473 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 18:00:19.512851 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 18:00:19.513502 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 18:00:19.513737 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Nov 12 18:00:19.513790 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Nov 12 18:00:19.516072 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 18:00:19.516085 systemd-tmpfiles[1237]: Skipping /boot Nov 12 18:00:19.523160 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 18:00:19.523173 systemd-tmpfiles[1237]: Skipping /boot Nov 12 18:00:19.560584 zram_generator::config[1267]: No configuration found. Nov 12 18:00:19.641080 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 18:00:19.677038 systemd[1]: Reloading finished in 171 ms. Nov 12 18:00:19.695298 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 18:00:19.710048 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 18:00:19.718498 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 18:00:19.720790 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 18:00:19.722765 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 18:00:19.727816 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 18:00:19.744808 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 18:00:19.748745 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 18:00:19.751773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 18:00:19.752877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 18:00:19.756658 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 18:00:19.762805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 18:00:19.764173 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 18:00:19.769332 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 18:00:19.771046 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 18:00:19.774100 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 18:00:19.774243 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 18:00:19.775539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 18:00:19.775696 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 18:00:19.776297 systemd-udevd[1311]: Using default interface naming scheme 'v255'. Nov 12 18:00:19.777045 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Nov 12 18:00:19.777161 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 18:00:19.790967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 18:00:19.799600 augenrules[1329]: No rules Nov 12 18:00:19.803977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 18:00:19.808814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 18:00:19.810723 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 18:00:19.812603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 18:00:19.823234 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 18:00:19.825225 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 18:00:19.826647 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 18:00:19.828218 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 18:00:19.829736 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 18:00:19.831236 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 18:00:19.832874 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 18:00:19.833013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 18:00:19.834405 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 18:00:19.834538 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 18:00:19.835946 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 18:00:19.836069 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 18:00:19.839597 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 18:00:19.854559 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1352) Nov 12 18:00:19.859308 systemd[1]: Finished ensure-sysext.service. Nov 12 18:00:19.861560 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1346) Nov 12 18:00:19.865479 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 12 18:00:19.866762 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 18:00:19.870746 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 18:00:19.874715 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 18:00:19.875573 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1352) Nov 12 18:00:19.878427 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 18:00:19.883281 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 18:00:19.884119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 18:00:19.885594 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 18:00:19.888701 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Nov 12 18:00:19.889513 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 18:00:19.890005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 18:00:19.890166 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 18:00:19.892892 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 18:00:19.893041 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 18:00:19.896968 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 18:00:19.897096 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 18:00:19.900345 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 18:00:19.900488 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 18:00:19.911874 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 18:00:19.915218 systemd-resolved[1304]: Positive Trust Anchors: Nov 12 18:00:19.915234 systemd-resolved[1304]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 18:00:19.915268 systemd-resolved[1304]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 18:00:19.918716 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 18:00:19.919757 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 18:00:19.919820 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 18:00:19.922007 systemd-resolved[1304]: Defaulting to hostname 'linux'. Nov 12 18:00:19.925602 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 18:00:19.927795 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 18:00:19.941313 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 18:00:19.951619 systemd-networkd[1376]: lo: Link UP Nov 12 18:00:19.951629 systemd-networkd[1376]: lo: Gained carrier Nov 12 18:00:19.952338 systemd-networkd[1376]: Enumeration completed Nov 12 18:00:19.952782 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 18:00:19.953825 systemd[1]: Reached target network.target - Network. Nov 12 18:00:19.955807 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 18:00:19.955816 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 12 18:00:19.956459 systemd-networkd[1376]: eth0: Link UP Nov 12 18:00:19.956468 systemd-networkd[1376]: eth0: Gained carrier Nov 12 18:00:19.956480 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 18:00:19.963746 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 18:00:19.964856 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 18:00:19.965882 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 18:00:19.982655 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 18:00:19.986842 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Nov 12 18:00:19.987781 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 18:00:19.987836 systemd-timesyncd[1377]: Initial clock synchronization to Tue 2024-11-12 18:00:19.958250 UTC. Nov 12 18:00:20.007814 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 18:00:20.017042 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 18:00:20.019398 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 18:00:20.042805 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 18:00:20.045224 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 18:00:20.075045 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 18:00:20.076178 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 18:00:20.077066 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 18:00:20.077921 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 18:00:20.078817 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 18:00:20.079866 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 18:00:20.080739 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 18:00:20.081617 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 18:00:20.082449 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 18:00:20.082479 systemd[1]: Reached target paths.target - Path Units. Nov 12 18:00:20.083143 systemd[1]: Reached target timers.target - Timer Units. Nov 12 18:00:20.084680 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 18:00:20.086755 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 18:00:20.099494 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 18:00:20.101412 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 18:00:20.102686 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 18:00:20.103515 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 18:00:20.104237 systemd[1]: Reached target basic.target - Basic System. 
Nov 12 18:00:20.104958 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 18:00:20.104988 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 18:00:20.105856 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 18:00:20.107490 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 18:00:20.109572 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 18:00:20.110487 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 18:00:20.112770 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 18:00:20.114645 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 18:00:20.116504 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 18:00:20.118775 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 18:00:20.120045 jq[1408]: false Nov 12 18:00:20.125910 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 18:00:20.127787 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 18:00:20.132874 dbus-daemon[1407]: [system] SELinux support is enabled Nov 12 18:00:20.135750 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 18:00:20.137295 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 18:00:20.137736 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 18:00:20.140734 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 18:00:20.142789 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 18:00:20.144098 extend-filesystems[1409]: Found loop3 Nov 12 18:00:20.146297 extend-filesystems[1409]: Found loop4 Nov 12 18:00:20.146297 extend-filesystems[1409]: Found loop5 Nov 12 18:00:20.146297 extend-filesystems[1409]: Found vda Nov 12 18:00:20.146297 extend-filesystems[1409]: Found vda1 Nov 12 18:00:20.146297 extend-filesystems[1409]: Found vda2 Nov 12 18:00:20.146297 extend-filesystems[1409]: Found vda3 Nov 12 18:00:20.146297 extend-filesystems[1409]: Found usr Nov 12 18:00:20.146297 extend-filesystems[1409]: Found vda4 Nov 12 18:00:20.146297 extend-filesystems[1409]: Found vda6 Nov 12 18:00:20.146297 extend-filesystems[1409]: Found vda7 Nov 12 18:00:20.146297 extend-filesystems[1409]: Found vda9 Nov 12 18:00:20.146297 extend-filesystems[1409]: Checking size of /dev/vda9 Nov 12 18:00:20.144194 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 18:00:20.149574 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 18:00:20.160712 jq[1424]: true Nov 12 18:00:20.158022 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 18:00:20.158204 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 18:00:20.158504 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 18:00:20.158723 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Nov 12 18:00:20.163207 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 18:00:20.163681 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 18:00:20.182162 update_engine[1422]: I20241112 18:00:20.179567 1422 main.cc:92] Flatcar Update Engine starting Nov 12 18:00:20.181011 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 18:00:20.181081 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 18:00:20.182639 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 18:00:20.182666 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 18:00:20.183660 update_engine[1422]: I20241112 18:00:20.182910 1422 update_check_scheduler.cc:74] Next update check in 5m46s Nov 12 18:00:20.185032 systemd[1]: Started update-engine.service - Update Engine. Nov 12 18:00:20.187460 tar[1428]: linux-arm64/helm Nov 12 18:00:20.192149 extend-filesystems[1409]: Resized partition /dev/vda9 Nov 12 18:00:20.193256 (ntainerd)[1430]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 18:00:20.196104 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 18:00:20.198164 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Nov 12 18:00:20.199080 jq[1429]: true Nov 12 18:00:20.205855 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 18:00:20.205904 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1349) Nov 12 18:00:20.213236 systemd-logind[1415]: Watching system buttons on /dev/input/event0 (Power Button) Nov 12 18:00:20.214708 systemd-logind[1415]: New seat seat0. Nov 12 18:00:20.218636 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 18:00:20.236795 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 18:00:20.251397 extend-filesystems[1444]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 18:00:20.251397 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 18:00:20.251397 extend-filesystems[1444]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 18:00:20.261008 extend-filesystems[1409]: Resized filesystem in /dev/vda9 Nov 12 18:00:20.257911 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 18:00:20.258102 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 18:00:20.285053 bash[1462]: Updated "/home/core/.ssh/authorized_keys" Nov 12 18:00:20.287528 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 18:00:20.289289 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Nov 12 18:00:20.303331 locksmithd[1443]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 18:00:20.442440 containerd[1430]: time="2024-11-12T18:00:20.442231094Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 18:00:20.471971 containerd[1430]: time="2024-11-12T18:00:20.471868579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 18:00:20.473512 containerd[1430]: time="2024-11-12T18:00:20.473446111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 18:00:20.473512 containerd[1430]: time="2024-11-12T18:00:20.473484081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 18:00:20.473512 containerd[1430]: time="2024-11-12T18:00:20.473500890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 18:00:20.473726 containerd[1430]: time="2024-11-12T18:00:20.473690780Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 18:00:20.473726 containerd[1430]: time="2024-11-12T18:00:20.473717530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 18:00:20.473789 containerd[1430]: time="2024-11-12T18:00:20.473772868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 18:00:20.473816 containerd[1430]: time="2024-11-12T18:00:20.473789478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 18:00:20.473983 containerd[1430]: time="2024-11-12T18:00:20.473948464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 18:00:20.473983 containerd[1430]: time="2024-11-12T18:00:20.473972740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 18:00:20.474027 containerd[1430]: time="2024-11-12T18:00:20.473986315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 18:00:20.474027 containerd[1430]: time="2024-11-12T18:00:20.473996137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 18:00:20.474127 containerd[1430]: time="2024-11-12T18:00:20.474109328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 18:00:20.474329 containerd[1430]: time="2024-11-12T18:00:20.474301813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 18:00:20.474424 containerd[1430]: time="2024-11-12T18:00:20.474405183Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 18:00:20.474451 containerd[1430]: time="2024-11-12T18:00:20.474423549Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 18:00:20.474516 containerd[1430]: time="2024-11-12T18:00:20.474501086Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 18:00:20.474578 containerd[1430]: time="2024-11-12T18:00:20.474564449Z" level=info msg="metadata content store policy set" policy=shared Nov 12 18:00:20.477805 containerd[1430]: time="2024-11-12T18:00:20.477766107Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 18:00:20.477850 containerd[1430]: time="2024-11-12T18:00:20.477828472Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 18:00:20.477884 containerd[1430]: time="2024-11-12T18:00:20.477848635Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 18:00:20.477884 containerd[1430]: time="2024-11-12T18:00:20.477872710Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 18:00:20.477928 containerd[1430]: time="2024-11-12T18:00:20.477886924Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 18:00:20.478084 containerd[1430]: time="2024-11-12T18:00:20.478042278Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 18:00:20.478297 containerd[1430]: time="2024-11-12T18:00:20.478269060Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 18:00:20.478393 containerd[1430]: time="2024-11-12T18:00:20.478370952Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 18:00:20.478424 containerd[1430]: time="2024-11-12T18:00:20.478392792Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 18:00:20.478424 containerd[1430]: time="2024-11-12T18:00:20.478406287Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 18:00:20.478424 containerd[1430]: time="2024-11-12T18:00:20.478419742Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 18:00:20.478474 containerd[1430]: time="2024-11-12T18:00:20.478432598Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 18:00:20.478474 containerd[1430]: time="2024-11-12T18:00:20.478446093Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 18:00:20.478474 containerd[1430]: time="2024-11-12T18:00:20.478460387Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 18:00:20.478526 containerd[1430]: time="2024-11-12T18:00:20.478474241Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Nov 12 18:00:20.478526 containerd[1430]: time="2024-11-12T18:00:20.478486738Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 18:00:20.478526 containerd[1430]: time="2024-11-12T18:00:20.478498157Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 18:00:20.478526 containerd[1430]: time="2024-11-12T18:00:20.478509776Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 18:00:20.478600 containerd[1430]: time="2024-11-12T18:00:20.478528182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478600 containerd[1430]: time="2024-11-12T18:00:20.478566671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478600 containerd[1430]: time="2024-11-12T18:00:20.478580525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478600 containerd[1430]: time="2024-11-12T18:00:20.478592064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478676 containerd[1430]: time="2024-11-12T18:00:20.478603563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478676 containerd[1430]: time="2024-11-12T18:00:20.478621450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478676 containerd[1430]: time="2024-11-12T18:00:20.478634825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478676 containerd[1430]: time="2024-11-12T18:00:20.478647602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478676 containerd[1430]: time="2024-11-12T18:00:20.478660099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478676 containerd[1430]: time="2024-11-12T18:00:20.478675910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478773 containerd[1430]: time="2024-11-12T18:00:20.478688566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478773 containerd[1430]: time="2024-11-12T18:00:20.478699985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478773 containerd[1430]: time="2024-11-12T18:00:20.478711604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478773 containerd[1430]: time="2024-11-12T18:00:20.478727894Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 18:00:20.478773 containerd[1430]: time="2024-11-12T18:00:20.478747777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.478773 containerd[1430]: time="2024-11-12T18:00:20.478758877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Nov 12 18:00:20.478773 containerd[1430]: time="2024-11-12T18:00:20.478770016Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 18:00:20.479943 containerd[1430]: time="2024-11-12T18:00:20.479903606Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 18:00:20.479974 containerd[1430]: time="2024-11-12T18:00:20.479948404Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 18:00:20.479974 containerd[1430]: time="2024-11-12T18:00:20.479961859Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 18:00:20.480032 containerd[1430]: time="2024-11-12T18:00:20.479973757Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 18:00:20.480065 containerd[1430]: time="2024-11-12T18:00:20.480031251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.480065 containerd[1430]: time="2024-11-12T18:00:20.480050256Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 18:00:20.480101 containerd[1430]: time="2024-11-12T18:00:20.480073294Z" level=info msg="NRI interface is disabled by configuration." Nov 12 18:00:20.480101 containerd[1430]: time="2024-11-12T18:00:20.480084832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 18:00:20.480789 containerd[1430]: time="2024-11-12T18:00:20.480719782Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 18:00:20.480906 containerd[1430]: time="2024-11-12T18:00:20.480840320Z" level=info msg="Connect containerd service" Nov 12 18:00:20.480975 containerd[1430]: time="2024-11-12T18:00:20.480954190Z" level=info msg="using legacy CRI server" Nov 12 18:00:20.480975 containerd[1430]: time="2024-11-12T18:00:20.480971478Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 18:00:20.481412 containerd[1430]: time="2024-11-12T18:00:20.481372419Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 18:00:20.483143 containerd[1430]: time="2024-11-12T18:00:20.483106143Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 18:00:20.483434 containerd[1430]: time="2024-11-12T18:00:20.483398444Z" level=info msg="Start subscribing containerd event" Nov 12 18:00:20.483481 containerd[1430]: time="2024-11-12T18:00:20.483449230Z" level=info msg="Start recovering state" Nov 12 18:00:20.483571 containerd[1430]: time="2024-11-12T18:00:20.483517424Z" level=info msg="Start event monitor" Nov 12 18:00:20.483571 containerd[1430]: time="2024-11-12T18:00:20.483532397Z" level=info msg="Start snapshots syncer" Nov 12 18:00:20.483623 containerd[1430]: time="2024-11-12T18:00:20.483570846Z" level=info msg="Start cni network conf syncer for default" Nov 12 18:00:20.483623 containerd[1430]: time="2024-11-12T18:00:20.483580149Z" level=info msg="Start streaming server" Nov 12 18:00:20.484008 containerd[1430]: time="2024-11-12T18:00:20.483951784Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 18:00:20.484008 containerd[1430]: time="2024-11-12T18:00:20.484006124Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 18:00:20.485284 containerd[1430]: time="2024-11-12T18:00:20.484069127Z" level=info msg="containerd successfully booted in 0.047027s" Nov 12 18:00:20.484141 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 18:00:20.575371 tar[1428]: linux-arm64/LICENSE Nov 12 18:00:20.575371 tar[1428]: linux-arm64/README.md Nov 12 18:00:20.584280 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 18:00:21.176324 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 18:00:21.196650 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 18:00:21.205783 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 18:00:21.210928 systemd[1]: issuegen.service: Deactivated successfully. 
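The "no network config found in /etc/cni/net.d" error above is expected at this point in boot: the CRI plugin config in the same entry points NetworkPluginConfDir at /etc/cni/net.d and NetworkPluginBinDir at /opt/cni/bin, and a CNI add-on normally installs a conflist there much later. As a minimal sketch of what the plugin is waiting for, the Go program below drops an illustrative bridge/host-local conflist; the file name, bridge name and subnet are assumptions, not values taken from this system.

    // write_cni_conf.go: drop a minimal CNI conflist into /etc/cni/net.d so the
    // CRI plugin's "no network config found" message clears on its next sync.
    // Illustrative only; a real cluster gets this file from its CNI add-on.
    package main

    import (
        "log"
        "os"
    )

    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "example-pod-network",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16",
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        // NetworkPluginMaxConfNum:1 in the CRI config above means only one conf
        // file (first in lexicographic order) is loaded, hence the "10-" prefix.
        if err := os.WriteFile("/etc/cni/net.d/10-example.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Println("wrote /etc/cni/net.d/10-example.conflist")
    }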
Nov 12 18:00:21.211102 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 18:00:21.213275 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 18:00:21.225964 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 18:00:21.229461 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 18:00:21.231226 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 12 18:00:21.232220 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 18:00:21.922664 systemd-networkd[1376]: eth0: Gained IPv6LL Nov 12 18:00:21.925039 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 18:00:21.926471 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 18:00:21.937782 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 18:00:21.939850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 18:00:21.941572 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 18:00:21.956587 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 18:00:21.956757 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 18:00:21.958523 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 18:00:21.964291 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 18:00:22.417141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:00:22.418596 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 18:00:22.421340 (kubelet)[1520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 18:00:22.422617 systemd[1]: Startup finished in 549ms (kernel) + 4.797s (initrd) + 3.951s (userspace) = 9.299s. Nov 12 18:00:22.836329 kubelet[1520]: E1112 18:00:22.836182 1520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 18:00:22.838557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 18:00:22.838701 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 18:00:26.989229 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 18:00:26.990315 systemd[1]: Started sshd@0-10.0.0.125:22-10.0.0.1:45090.service - OpenSSH per-connection server daemon (10.0.0.1:45090). Nov 12 18:00:27.040571 sshd[1533]: Accepted publickey for core from 10.0.0.1 port 45090 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:00:27.041672 sshd[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:00:27.051780 systemd-logind[1415]: New session 1 of user core. Nov 12 18:00:27.052760 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 18:00:27.066758 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 18:00:27.075079 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 18:00:27.077276 systemd[1]: Starting user@500.service - User Manager for UID 500... 
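The kubelet exit above is the usual pre-bootstrap state on Flatcar and repeats on every restart later in the log: /var/lib/kubelet/config.yaml does not exist until kubeadm (or another installer) writes it, so the unit fails and systemd keeps rescheduling it. The sketch below seeds a minimal KubeletConfiguration at that path; the field values are illustrative (the systemd cgroup driver matches the SystemdCgroup:true runc option in the containerd config above, and the static pod path matches the one the kubelet registers further down), and a real file would come from kubeadm init/join.

    // seed_kubelet_config.go: write a minimal KubeletConfiguration to the path
    // the failing unit is looking for. Illustrative only; kubeadm init/join
    // normally generates this file (plus the bootstrap kubeconfig) itself.
    package main

    import (
        "log"
        "os"
    )

    // YAML is indentation-sensitive, so the literal is kept flush-left.
    const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# matches the SystemdCgroup:true runc option in the CRI config logged above
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
`

    func main() {
        if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Println("wrote /var/lib/kubelet/config.yaml")
    }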
Nov 12 18:00:27.083380 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 18:00:27.164908 systemd[1537]: Queued start job for default target default.target. Nov 12 18:00:27.173532 systemd[1537]: Created slice app.slice - User Application Slice. Nov 12 18:00:27.173600 systemd[1537]: Reached target paths.target - Paths. Nov 12 18:00:27.173612 systemd[1537]: Reached target timers.target - Timers. Nov 12 18:00:27.174825 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 18:00:27.184344 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 18:00:27.184404 systemd[1537]: Reached target sockets.target - Sockets. Nov 12 18:00:27.184416 systemd[1537]: Reached target basic.target - Basic System. Nov 12 18:00:27.184451 systemd[1537]: Reached target default.target - Main User Target. Nov 12 18:00:27.184478 systemd[1537]: Startup finished in 96ms. Nov 12 18:00:27.184741 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 18:00:27.186035 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 18:00:27.245964 systemd[1]: Started sshd@1-10.0.0.125:22-10.0.0.1:45098.service - OpenSSH per-connection server daemon (10.0.0.1:45098). Nov 12 18:00:27.280779 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 45098 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:00:27.282106 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:00:27.285878 systemd-logind[1415]: New session 2 of user core. Nov 12 18:00:27.303691 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 18:00:27.356526 sshd[1548]: pam_unix(sshd:session): session closed for user core Nov 12 18:00:27.368986 systemd[1]: sshd@1-10.0.0.125:22-10.0.0.1:45098.service: Deactivated successfully. Nov 12 18:00:27.370395 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 18:00:27.372489 systemd-logind[1415]: Session 2 logged out. Waiting for processes to exit. Nov 12 18:00:27.377815 systemd[1]: Started sshd@2-10.0.0.125:22-10.0.0.1:45106.service - OpenSSH per-connection server daemon (10.0.0.1:45106). Nov 12 18:00:27.378619 systemd-logind[1415]: Removed session 2. Nov 12 18:00:27.407399 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 45106 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:00:27.408645 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:00:27.412225 systemd-logind[1415]: New session 3 of user core. Nov 12 18:00:27.426702 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 18:00:27.476505 sshd[1555]: pam_unix(sshd:session): session closed for user core Nov 12 18:00:27.485871 systemd[1]: sshd@2-10.0.0.125:22-10.0.0.1:45106.service: Deactivated successfully. Nov 12 18:00:27.487335 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 18:00:27.488647 systemd-logind[1415]: Session 3 logged out. Waiting for processes to exit. Nov 12 18:00:27.489830 systemd[1]: Started sshd@3-10.0.0.125:22-10.0.0.1:45116.service - OpenSSH per-connection server daemon (10.0.0.1:45116). Nov 12 18:00:27.490570 systemd-logind[1415]: Removed session 3. 
Nov 12 18:00:27.525270 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 45116 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:00:27.526457 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:00:27.530409 systemd-logind[1415]: New session 4 of user core. Nov 12 18:00:27.538676 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 18:00:27.589899 sshd[1562]: pam_unix(sshd:session): session closed for user core Nov 12 18:00:27.598836 systemd[1]: sshd@3-10.0.0.125:22-10.0.0.1:45116.service: Deactivated successfully. Nov 12 18:00:27.600195 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 18:00:27.601385 systemd-logind[1415]: Session 4 logged out. Waiting for processes to exit. Nov 12 18:00:27.602449 systemd[1]: Started sshd@4-10.0.0.125:22-10.0.0.1:45122.service - OpenSSH per-connection server daemon (10.0.0.1:45122). Nov 12 18:00:27.603184 systemd-logind[1415]: Removed session 4. Nov 12 18:00:27.634691 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 45122 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:00:27.635849 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:00:27.639852 systemd-logind[1415]: New session 5 of user core. Nov 12 18:00:27.646676 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 18:00:27.722022 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 18:00:27.722311 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 18:00:27.740426 sudo[1572]: pam_unix(sudo:session): session closed for user root Nov 12 18:00:27.742097 sshd[1569]: pam_unix(sshd:session): session closed for user core Nov 12 18:00:27.750935 systemd[1]: sshd@4-10.0.0.125:22-10.0.0.1:45122.service: Deactivated successfully. Nov 12 18:00:27.752333 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 18:00:27.753519 systemd-logind[1415]: Session 5 logged out. Waiting for processes to exit. Nov 12 18:00:27.754678 systemd[1]: Started sshd@5-10.0.0.125:22-10.0.0.1:45132.service - OpenSSH per-connection server daemon (10.0.0.1:45132). Nov 12 18:00:27.755427 systemd-logind[1415]: Removed session 5. Nov 12 18:00:27.787310 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 45132 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:00:27.788373 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:00:27.792073 systemd-logind[1415]: New session 6 of user core. Nov 12 18:00:27.800745 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 18:00:27.851352 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 18:00:27.851632 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 18:00:27.854406 sudo[1581]: pam_unix(sudo:session): session closed for user root Nov 12 18:00:27.858717 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 18:00:27.858960 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 18:00:27.876750 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 18:00:27.877836 auditctl[1584]: No rules Nov 12 18:00:27.878104 systemd[1]: audit-rules.service: Deactivated successfully. 
Nov 12 18:00:27.879612 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 18:00:27.881621 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 18:00:27.903229 augenrules[1602]: No rules Nov 12 18:00:27.904310 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 18:00:27.905195 sudo[1580]: pam_unix(sudo:session): session closed for user root Nov 12 18:00:27.906470 sshd[1577]: pam_unix(sshd:session): session closed for user core Nov 12 18:00:27.912708 systemd[1]: sshd@5-10.0.0.125:22-10.0.0.1:45132.service: Deactivated successfully. Nov 12 18:00:27.914195 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 18:00:27.915619 systemd-logind[1415]: Session 6 logged out. Waiting for processes to exit. Nov 12 18:00:27.924896 systemd[1]: Started sshd@6-10.0.0.125:22-10.0.0.1:45136.service - OpenSSH per-connection server daemon (10.0.0.1:45136). Nov 12 18:00:27.925621 systemd-logind[1415]: Removed session 6. Nov 12 18:00:27.953675 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 45136 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:00:27.954991 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:00:27.958063 systemd-logind[1415]: New session 7 of user core. Nov 12 18:00:27.969670 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 18:00:28.018904 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 18:00:28.019180 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 18:00:28.343792 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 18:00:28.343946 (dockerd)[1631]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 18:00:28.597144 dockerd[1631]: time="2024-11-12T18:00:28.597016009Z" level=info msg="Starting up" Nov 12 18:00:28.742856 dockerd[1631]: time="2024-11-12T18:00:28.742817183Z" level=info msg="Loading containers: start." Nov 12 18:00:28.840571 kernel: Initializing XFRM netlink socket Nov 12 18:00:28.907347 systemd-networkd[1376]: docker0: Link UP Nov 12 18:00:28.927804 dockerd[1631]: time="2024-11-12T18:00:28.927754737Z" level=info msg="Loading containers: done." Nov 12 18:00:28.940653 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1194261560-merged.mount: Deactivated successfully. Nov 12 18:00:28.942870 dockerd[1631]: time="2024-11-12T18:00:28.942817899Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 18:00:28.942938 dockerd[1631]: time="2024-11-12T18:00:28.942919987Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 18:00:28.943059 dockerd[1631]: time="2024-11-12T18:00:28.943031704Z" level=info msg="Daemon has completed initialization" Nov 12 18:00:28.968590 dockerd[1631]: time="2024-11-12T18:00:28.968456597Z" level=info msg="API listen on /run/docker.sock" Nov 12 18:00:28.968688 systemd[1]: Started docker.service - Docker Application Container Engine. 
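With dockerd reporting "API listen on /run/docker.sock", the daemon can be checked by speaking the Docker Engine HTTP API directly over that unix socket. A stdlib-only sketch; the "unix" host in the URL is a placeholder, since the custom DialContext always dials the socket:

    // docker_ping.go: hit the Docker Engine API's /_ping endpoint over
    // /run/docker.sock using only the standard library.
    package main

    import (
        "context"
        "fmt"
        "io"
        "log"
        "net"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
                },
            },
        }
        resp, err := client.Get("http://unix/_ping")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // dockerd replies "OK"; the API version header is e.g. 1.45 for the
        // 26.1.0 daemon started above.
        fmt.Printf("ping: %s (API %s)\n", body, resp.Header.Get("Api-Version"))
    }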
Nov 12 18:00:29.387826 containerd[1430]: time="2024-11-12T18:00:29.387710174Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\"" Nov 12 18:00:30.068624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708884548.mount: Deactivated successfully. Nov 12 18:00:31.132000 containerd[1430]: time="2024-11-12T18:00:31.131954790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:31.133019 containerd[1430]: time="2024-11-12T18:00:31.132988774Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.2: active requests=0, bytes read=25616007" Nov 12 18:00:31.133962 containerd[1430]: time="2024-11-12T18:00:31.133926125Z" level=info msg="ImageCreate event name:\"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:31.136918 containerd[1430]: time="2024-11-12T18:00:31.136881769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:31.138351 containerd[1430]: time="2024-11-12T18:00:31.138315830Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.2\" with image id \"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\", size \"25612805\" in 1.750556426s" Nov 12 18:00:31.138388 containerd[1430]: time="2024-11-12T18:00:31.138355154Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\" returns image reference \"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270\"" Nov 12 18:00:31.139551 containerd[1430]: time="2024-11-12T18:00:31.139512586Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\"" Nov 12 18:00:32.337442 containerd[1430]: time="2024-11-12T18:00:32.337393172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:32.338338 containerd[1430]: time="2024-11-12T18:00:32.338123991Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.2: active requests=0, bytes read=22469649" Nov 12 18:00:32.339062 containerd[1430]: time="2024-11-12T18:00:32.339019591Z" level=info msg="ImageCreate event name:\"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:32.342045 containerd[1430]: time="2024-11-12T18:00:32.342003298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:32.343582 containerd[1430]: time="2024-11-12T18:00:32.343531721Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.2\" with image id \"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\", size \"23872272\" in 1.203984885s" Nov 12 
18:00:32.345338 containerd[1430]: time="2024-11-12T18:00:32.343674240Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\" returns image reference \"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba\"" Nov 12 18:00:32.346033 containerd[1430]: time="2024-11-12T18:00:32.345749118Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\"" Nov 12 18:00:33.088987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 18:00:33.098734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 18:00:33.200915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:00:33.204437 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 18:00:33.238598 kubelet[1844]: E1112 18:00:33.238523 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 18:00:33.241733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 18:00:33.241876 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 18:00:33.769293 containerd[1430]: time="2024-11-12T18:00:33.769245196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:33.770339 containerd[1430]: time="2024-11-12T18:00:33.770275736Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.2: active requests=0, bytes read=17027038" Nov 12 18:00:33.771054 containerd[1430]: time="2024-11-12T18:00:33.770991686Z" level=info msg="ImageCreate event name:\"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:33.774199 containerd[1430]: time="2024-11-12T18:00:33.774151891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:33.775720 containerd[1430]: time="2024-11-12T18:00:33.775594703Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.2\" with image id \"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\", size \"18429679\" in 1.429614141s" Nov 12 18:00:33.775720 containerd[1430]: time="2024-11-12T18:00:33.775631034Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\" returns image reference \"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a\"" Nov 12 18:00:33.776394 containerd[1430]: time="2024-11-12T18:00:33.776181915Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\"" Nov 12 18:00:34.664670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162760398.mount: Deactivated successfully. 
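The PullImage/Pulled pairs in this stretch of the log are emitted by containerd's CRI plugin as the control-plane images are fetched. The same pull can be reproduced with containerd's Go client against the socket shown earlier (/run/containerd/containerd.sock), in the k8s.io namespace the CRI plugin uses; a sketch assuming the github.com/containerd/containerd module (v1.7.x, matching the v1.7.21 runtime reported further down):

    // pull_image.go: reproduce one of the CRI image pulls above with the
    // containerd Go client, using the image reference from the log line above.
    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // the CRI plugin keeps Kubernetes images in the "k8s.io" namespace
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.31.2", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        size, err := img.Size(ctx)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("pulled %s (%d bytes)", img.Name(), size)
    }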
Nov 12 18:00:34.873940 containerd[1430]: time="2024-11-12T18:00:34.873883871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:34.874875 containerd[1430]: time="2024-11-12T18:00:34.874847232Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.2: active requests=0, bytes read=26769666" Nov 12 18:00:34.875637 containerd[1430]: time="2024-11-12T18:00:34.875614619Z" level=info msg="ImageCreate event name:\"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:34.877635 containerd[1430]: time="2024-11-12T18:00:34.877580153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:34.878530 containerd[1430]: time="2024-11-12T18:00:34.878480961Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.2\" with image id \"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba\", repo tag \"registry.k8s.io/kube-proxy:v1.31.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\", size \"26768683\" in 1.102265712s" Nov 12 18:00:34.878530 containerd[1430]: time="2024-11-12T18:00:34.878520292Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\" returns image reference \"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba\"" Nov 12 18:00:34.879053 containerd[1430]: time="2024-11-12T18:00:34.879014603Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 18:00:35.474653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3433041769.mount: Deactivated successfully. 
Nov 12 18:00:36.071474 containerd[1430]: time="2024-11-12T18:00:36.071420132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:36.072763 containerd[1430]: time="2024-11-12T18:00:36.072709287Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Nov 12 18:00:36.073734 containerd[1430]: time="2024-11-12T18:00:36.073702436Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:36.076540 containerd[1430]: time="2024-11-12T18:00:36.076504199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:36.078873 containerd[1430]: time="2024-11-12T18:00:36.078730339Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.199678923s" Nov 12 18:00:36.078873 containerd[1430]: time="2024-11-12T18:00:36.078767275Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Nov 12 18:00:36.079338 containerd[1430]: time="2024-11-12T18:00:36.079310679Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 12 18:00:36.494035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount285521755.mount: Deactivated successfully. 
Nov 12 18:00:36.497884 containerd[1430]: time="2024-11-12T18:00:36.497842716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:36.499386 containerd[1430]: time="2024-11-12T18:00:36.499120838Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 12 18:00:36.500117 containerd[1430]: time="2024-11-12T18:00:36.500087604Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:36.502404 containerd[1430]: time="2024-11-12T18:00:36.502363512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:36.503242 containerd[1430]: time="2024-11-12T18:00:36.503217193Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 423.875255ms" Nov 12 18:00:36.503316 containerd[1430]: time="2024-11-12T18:00:36.503245454Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 12 18:00:36.503907 containerd[1430]: time="2024-11-12T18:00:36.503664179Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Nov 12 18:00:37.062560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3592867725.mount: Deactivated successfully. Nov 12 18:00:40.093636 containerd[1430]: time="2024-11-12T18:00:40.093557359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:40.094103 containerd[1430]: time="2024-11-12T18:00:40.094009210Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406104" Nov 12 18:00:40.094904 containerd[1430]: time="2024-11-12T18:00:40.094851664Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:40.098194 containerd[1430]: time="2024-11-12T18:00:40.098152192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:00:40.099576 containerd[1430]: time="2024-11-12T18:00:40.099481959Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.5957848s" Nov 12 18:00:40.099576 containerd[1430]: time="2024-11-12T18:00:40.099521179Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Nov 12 18:00:43.492220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
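Each "Pulled image ... in Ns" figure is just the elapsed time since the matching PullImage entry; for the etcd pull above, the two event timestamps are 2024-11-12T18:00:36.503664179Z and 2024-11-12T18:00:40.099481959Z, which differ by about 3.596s and agree with the logged 3.5957848s to within a fraction of a millisecond. A quick stdlib check with those timestamps copied from the log:

    // pull_duration.go: sanity-check one "Pulled image ... in Ns" figure by
    // differencing the two event timestamps copied from the log above.
    package main

    import (
        "fmt"
        "log"
        "time"
    )

    func main() {
        start, err := time.Parse(time.RFC3339Nano, "2024-11-12T18:00:36.503664179Z") // PullImage etcd:3.5.15-0
        if err != nil {
            log.Fatal(err)
        }
        done, err := time.Parse(time.RFC3339Nano, "2024-11-12T18:00:40.099481959Z") // Pulled image ... in 3.5957848s
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(done.Sub(start)) // 3.59581778s, within a fraction of a millisecond of the logged value
    }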
Nov 12 18:00:43.501809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 18:00:43.627292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:00:43.630917 (kubelet)[1996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 18:00:43.662449 kubelet[1996]: E1112 18:00:43.662361 1996 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 18:00:43.664876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 18:00:43.665020 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 18:00:44.116742 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:00:44.132848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 18:00:44.156293 systemd[1]: Reloading requested from client PID 2012 ('systemctl') (unit session-7.scope)... Nov 12 18:00:44.156309 systemd[1]: Reloading... Nov 12 18:00:44.225668 zram_generator::config[2054]: No configuration found. Nov 12 18:00:44.367891 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 18:00:44.419612 systemd[1]: Reloading finished in 262 ms. Nov 12 18:00:44.461784 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 18:00:44.462622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:00:44.464637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 18:00:44.557162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:00:44.561693 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 18:00:44.597340 kubelet[2097]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 18:00:44.597340 kubelet[2097]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 18:00:44.597340 kubelet[2097]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 18:00:44.597740 kubelet[2097]: I1112 18:00:44.597466 2097 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 18:00:45.526661 kubelet[2097]: I1112 18:00:45.526605 2097 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 18:00:45.526661 kubelet[2097]: I1112 18:00:45.526640 2097 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 18:00:45.527391 kubelet[2097]: I1112 18:00:45.527235 2097 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 18:00:45.566948 kubelet[2097]: E1112 18:00:45.566907 2097 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Nov 12 18:00:45.568425 kubelet[2097]: I1112 18:00:45.568348 2097 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 18:00:45.574476 kubelet[2097]: E1112 18:00:45.574248 2097 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 18:00:45.574476 kubelet[2097]: I1112 18:00:45.574281 2097 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 12 18:00:45.580906 kubelet[2097]: I1112 18:00:45.580881 2097 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 18:00:45.582049 kubelet[2097]: I1112 18:00:45.582027 2097 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 18:00:45.582297 kubelet[2097]: I1112 18:00:45.582264 2097 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 18:00:45.582519 kubelet[2097]: I1112 18:00:45.582357 2097 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 12 18:00:45.582910 kubelet[2097]: I1112 18:00:45.582893 2097 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 18:00:45.582969 kubelet[2097]: I1112 18:00:45.582960 2097 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 18:00:45.583256 kubelet[2097]: I1112 18:00:45.583241 2097 state_mem.go:36] "Initialized new in-memory state store" Nov 12 18:00:45.588855 kubelet[2097]: I1112 18:00:45.588569 2097 kubelet.go:408] "Attempting to sync node with API server" Nov 12 18:00:45.588855 kubelet[2097]: I1112 18:00:45.588620 2097 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 18:00:45.588855 kubelet[2097]: I1112 18:00:45.588653 2097 kubelet.go:314] "Adding apiserver pod source" Nov 12 18:00:45.588855 kubelet[2097]: I1112 18:00:45.588671 2097 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 18:00:45.591448 kubelet[2097]: W1112 18:00:45.591172 2097 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Nov 12 18:00:45.591448 kubelet[2097]: E1112 18:00:45.591233 2097 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Nov 12 18:00:45.591448 kubelet[2097]: W1112 18:00:45.591374 2097 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Nov 12 18:00:45.591448 kubelet[2097]: E1112 18:00:45.591421 2097 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Nov 12 18:00:45.592070 kubelet[2097]: I1112 18:00:45.592036 2097 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 18:00:45.594443 kubelet[2097]: I1112 18:00:45.594324 2097 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 18:00:45.596162 kubelet[2097]: W1112 18:00:45.595148 2097 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 18:00:45.596162 kubelet[2097]: I1112 18:00:45.596023 2097 server.go:1269] "Started kubelet" Nov 12 18:00:45.598304 kubelet[2097]: I1112 18:00:45.598276 2097 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 18:00:45.600076 kubelet[2097]: I1112 18:00:45.600046 2097 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 12 18:00:45.600400 kubelet[2097]: I1112 18:00:45.600367 2097 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 18:00:45.600486 kubelet[2097]: I1112 18:00:45.600468 2097 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 12 18:00:45.600569 kubelet[2097]: I1112 18:00:45.600509 2097 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 18:00:45.600749 kubelet[2097]: I1112 18:00:45.600731 2097 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 18:00:45.602870 kubelet[2097]: I1112 18:00:45.602841 2097 server.go:460] "Adding debug handlers to kubelet server" Nov 12 18:00:45.603766 kubelet[2097]: E1112 18:00:45.603737 2097 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 18:00:45.603847 kubelet[2097]: I1112 18:00:45.603821 2097 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 12 18:00:45.603964 kubelet[2097]: I1112 18:00:45.603942 2097 reconciler.go:26] "Reconciler: start to sync state" Nov 12 18:00:45.604068 kubelet[2097]: W1112 18:00:45.604027 2097 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Nov 12 18:00:45.604108 kubelet[2097]: E1112 18:00:45.604077 2097 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: 
Get \"https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Nov 12 18:00:45.604296 kubelet[2097]: E1112 18:00:45.604247 2097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="200ms" Nov 12 18:00:45.604492 kubelet[2097]: I1112 18:00:45.604407 2097 factory.go:221] Registration of the systemd container factory successfully Nov 12 18:00:45.604492 kubelet[2097]: I1112 18:00:45.604470 2097 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 18:00:45.606021 kubelet[2097]: E1112 18:00:45.603640 2097 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.125:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.125:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18074a74699fc7d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 18:00:45.595985879 +0000 UTC m=+1.031296037,LastTimestamp:2024-11-12 18:00:45.595985879 +0000 UTC m=+1.031296037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 18:00:45.606560 kubelet[2097]: I1112 18:00:45.606520 2097 factory.go:221] Registration of the containerd container factory successfully Nov 12 18:00:45.606759 kubelet[2097]: E1112 18:00:45.606737 2097 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 18:00:45.619608 kubelet[2097]: I1112 18:00:45.619499 2097 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 18:00:45.620487 kubelet[2097]: I1112 18:00:45.620451 2097 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 18:00:45.620487 kubelet[2097]: I1112 18:00:45.620481 2097 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 18:00:45.620540 kubelet[2097]: I1112 18:00:45.620500 2097 kubelet.go:2321] "Starting kubelet main sync loop" Nov 12 18:00:45.620588 kubelet[2097]: E1112 18:00:45.620554 2097 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 18:00:45.622612 kubelet[2097]: W1112 18:00:45.622568 2097 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Nov 12 18:00:45.622700 kubelet[2097]: E1112 18:00:45.622629 2097 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Nov 12 18:00:45.623493 kubelet[2097]: I1112 18:00:45.623378 2097 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 18:00:45.623493 kubelet[2097]: I1112 18:00:45.623394 2097 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 18:00:45.623493 kubelet[2097]: I1112 18:00:45.623411 2097 state_mem.go:36] "Initialized new in-memory state store" Nov 12 18:00:45.633461 kubelet[2097]: I1112 18:00:45.633429 2097 policy_none.go:49] "None policy: Start" Nov 12 18:00:45.634109 kubelet[2097]: I1112 18:00:45.634088 2097 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 18:00:45.634503 kubelet[2097]: I1112 18:00:45.634205 2097 state_mem.go:35] "Initializing new in-memory state store" Nov 12 18:00:45.640260 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 18:00:45.654019 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 18:00:45.656636 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 12 18:00:45.668858 kubelet[2097]: I1112 18:00:45.668249 2097 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 18:00:45.668858 kubelet[2097]: I1112 18:00:45.668442 2097 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 12 18:00:45.668858 kubelet[2097]: I1112 18:00:45.668454 2097 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 18:00:45.668858 kubelet[2097]: I1112 18:00:45.668721 2097 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 18:00:45.670670 kubelet[2097]: E1112 18:00:45.670648 2097 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 18:00:45.727401 systemd[1]: Created slice kubepods-burstable-pod68a7798c7a4214a7d071317f8c03f23f.slice - libcontainer container kubepods-burstable-pod68a7798c7a4214a7d071317f8c03f23f.slice. Nov 12 18:00:45.737225 systemd[1]: Created slice kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice - libcontainer container kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice. 
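All of the "dial tcp 10.0.0.125:6443: connect: connection refused" errors above come from the kubelet's client-go reflectors and its certificate bootstrap reaching for an API server that is not running yet; it is one of the static pods being set up just below. A small stdlib probe that distinguishes that state (node reachable, nothing listening on 6443) from a timeout:

    // probe_apiserver.go: classify the dial failure the kubelet keeps logging
    // while the kube-apiserver static pod is still being created.
    package main

    import (
        "errors"
        "fmt"
        "net"
        "os"
        "syscall"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.0.0.125:6443", 2*time.Second)
        switch {
        case err == nil:
            conn.Close()
            fmt.Println("apiserver port is accepting connections")
        case errors.Is(err, syscall.ECONNREFUSED):
            fmt.Println("connection refused: kube-apiserver not (yet) listening")
        case os.IsTimeout(err):
            fmt.Println("timeout: host unreachable or traffic filtered")
        default:
            fmt.Println("dial error:", err)
        }
    }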
Nov 12 18:00:45.741319 systemd[1]: Created slice kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice - libcontainer container kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice. Nov 12 18:00:45.771211 kubelet[2097]: I1112 18:00:45.770754 2097 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 18:00:45.771211 kubelet[2097]: E1112 18:00:45.771161 2097 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Nov 12 18:00:45.804791 kubelet[2097]: I1112 18:00:45.804685 2097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost" Nov 12 18:00:45.805653 kubelet[2097]: I1112 18:00:45.804766 2097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68a7798c7a4214a7d071317f8c03f23f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"68a7798c7a4214a7d071317f8c03f23f\") " pod="kube-system/kube-apiserver-localhost" Nov 12 18:00:45.805748 kubelet[2097]: E1112 18:00:45.805138 2097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="400ms" Nov 12 18:00:45.805861 kubelet[2097]: I1112 18:00:45.805826 2097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68a7798c7a4214a7d071317f8c03f23f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"68a7798c7a4214a7d071317f8c03f23f\") " pod="kube-system/kube-apiserver-localhost" Nov 12 18:00:45.805958 kubelet[2097]: I1112 18:00:45.805865 2097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:00:45.805989 kubelet[2097]: I1112 18:00:45.805972 2097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:00:45.806085 kubelet[2097]: I1112 18:00:45.805999 2097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:00:45.806181 kubelet[2097]: I1112 18:00:45.806101 2097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:00:45.806275 kubelet[2097]: I1112 18:00:45.806196 2097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68a7798c7a4214a7d071317f8c03f23f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"68a7798c7a4214a7d071317f8c03f23f\") " pod="kube-system/kube-apiserver-localhost" Nov 12 18:00:45.806367 kubelet[2097]: I1112 18:00:45.806292 2097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:00:45.972628 kubelet[2097]: I1112 18:00:45.972582 2097 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 18:00:45.972952 kubelet[2097]: E1112 18:00:45.972918 2097 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Nov 12 18:00:46.035702 kubelet[2097]: E1112 18:00:46.035578 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:46.036185 containerd[1430]: time="2024-11-12T18:00:46.036143702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:68a7798c7a4214a7d071317f8c03f23f,Namespace:kube-system,Attempt:0,}" Nov 12 18:00:46.039481 kubelet[2097]: E1112 18:00:46.039459 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:46.040156 containerd[1430]: time="2024-11-12T18:00:46.039887256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,}" Nov 12 18:00:46.043442 kubelet[2097]: E1112 18:00:46.043418 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:46.043890 containerd[1430]: time="2024-11-12T18:00:46.043859490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,}" Nov 12 18:00:46.206716 kubelet[2097]: E1112 18:00:46.206607 2097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="800ms" Nov 12 18:00:46.374573 kubelet[2097]: I1112 18:00:46.374524 2097 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 18:00:46.374932 kubelet[2097]: E1112 18:00:46.374871 2097 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Nov 12 18:00:46.584482 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount217591049.mount: Deactivated successfully. Nov 12 18:00:46.588975 containerd[1430]: time="2024-11-12T18:00:46.588939362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 18:00:46.589616 containerd[1430]: time="2024-11-12T18:00:46.589586019Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Nov 12 18:00:46.591039 containerd[1430]: time="2024-11-12T18:00:46.590264906Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 18:00:46.591106 containerd[1430]: time="2024-11-12T18:00:46.591069469Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 18:00:46.591306 containerd[1430]: time="2024-11-12T18:00:46.591277838Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 18:00:46.592310 containerd[1430]: time="2024-11-12T18:00:46.592276934Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 18:00:46.592921 containerd[1430]: time="2024-11-12T18:00:46.592857975Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 18:00:46.597149 containerd[1430]: time="2024-11-12T18:00:46.597078364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 18:00:46.598411 containerd[1430]: time="2024-11-12T18:00:46.597998847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.776132ms" Nov 12 18:00:46.598734 containerd[1430]: time="2024-11-12T18:00:46.598706604Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 558.75837ms" Nov 12 18:00:46.602035 containerd[1430]: time="2024-11-12T18:00:46.601669506Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.61992ms" Nov 12 18:00:46.717284 containerd[1430]: time="2024-11-12T18:00:46.717176960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:00:46.717284 containerd[1430]: time="2024-11-12T18:00:46.717235900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:00:46.717284 containerd[1430]: time="2024-11-12T18:00:46.717255374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:00:46.717650 containerd[1430]: time="2024-11-12T18:00:46.717353740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:00:46.718236 containerd[1430]: time="2024-11-12T18:00:46.717309035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:00:46.718236 containerd[1430]: time="2024-11-12T18:00:46.717375932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:00:46.718236 containerd[1430]: time="2024-11-12T18:00:46.717391447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:00:46.718236 containerd[1430]: time="2024-11-12T18:00:46.717470220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:00:46.718418 containerd[1430]: time="2024-11-12T18:00:46.717900032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:00:46.718418 containerd[1430]: time="2024-11-12T18:00:46.717944737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:00:46.718418 containerd[1430]: time="2024-11-12T18:00:46.717959971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:00:46.718418 containerd[1430]: time="2024-11-12T18:00:46.718032306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:00:46.740777 systemd[1]: Started cri-containerd-2dbb1cddbf2f90d85c980fbfed095f58b36c1f56bf6cb1d2d8a916cd2facc692.scope - libcontainer container 2dbb1cddbf2f90d85c980fbfed095f58b36c1f56bf6cb1d2d8a916cd2facc692. Nov 12 18:00:46.741795 systemd[1]: Started cri-containerd-8de088554bd749113469ab2c32c7c1712116ab47bd04e2f4ec09391e4a0c8314.scope - libcontainer container 8de088554bd749113469ab2c32c7c1712116ab47bd04e2f4ec09391e4a0c8314. Nov 12 18:00:46.745177 systemd[1]: Started cri-containerd-91c5ec3d5809ec518a42d052d877f9c762f3bf871a1749e2cb1115cdc7482851.scope - libcontainer container 91c5ec3d5809ec518a42d052d877f9c762f3bf871a1749e2cb1115cdc7482851. 
Nov 12 18:00:46.750636 kubelet[2097]: W1112 18:00:46.750509 2097 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Nov 12 18:00:46.750636 kubelet[2097]: E1112 18:00:46.750600 2097 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Nov 12 18:00:46.757422 kubelet[2097]: W1112 18:00:46.757335 2097 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Nov 12 18:00:46.757422 kubelet[2097]: E1112 18:00:46.757387 2097 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Nov 12 18:00:46.774795 containerd[1430]: time="2024-11-12T18:00:46.774743732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8de088554bd749113469ab2c32c7c1712116ab47bd04e2f4ec09391e4a0c8314\"" Nov 12 18:00:46.776994 containerd[1430]: time="2024-11-12T18:00:46.776954212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dbb1cddbf2f90d85c980fbfed095f58b36c1f56bf6cb1d2d8a916cd2facc692\"" Nov 12 18:00:46.777269 kubelet[2097]: E1112 18:00:46.777244 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:46.777427 kubelet[2097]: E1112 18:00:46.777410 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:46.779220 containerd[1430]: time="2024-11-12T18:00:46.779189924Z" level=info msg="CreateContainer within sandbox \"2dbb1cddbf2f90d85c980fbfed095f58b36c1f56bf6cb1d2d8a916cd2facc692\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 18:00:46.779339 containerd[1430]: time="2024-11-12T18:00:46.779320879Z" level=info msg="CreateContainer within sandbox \"8de088554bd749113469ab2c32c7c1712116ab47bd04e2f4ec09391e4a0c8314\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 18:00:46.781787 containerd[1430]: time="2024-11-12T18:00:46.781754242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:68a7798c7a4214a7d071317f8c03f23f,Namespace:kube-system,Attempt:0,} returns sandbox id \"91c5ec3d5809ec518a42d052d877f9c762f3bf871a1749e2cb1115cdc7482851\"" Nov 12 18:00:46.782462 kubelet[2097]: E1112 18:00:46.782438 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:46.783959 containerd[1430]: time="2024-11-12T18:00:46.783933973Z" level=info msg="CreateContainer within sandbox \"91c5ec3d5809ec518a42d052d877f9c762f3bf871a1749e2cb1115cdc7482851\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 18:00:46.797643 containerd[1430]: time="2024-11-12T18:00:46.797535618Z" level=info msg="CreateContainer within sandbox \"2dbb1cddbf2f90d85c980fbfed095f58b36c1f56bf6cb1d2d8a916cd2facc692\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0ef3e5bc596a8da7da17cfa8d95cf558a0df6f5d398400b50b0f1c4bfc6d7d37\"" Nov 12 18:00:46.798355 containerd[1430]: time="2024-11-12T18:00:46.798314830Z" level=info msg="CreateContainer within sandbox \"8de088554bd749113469ab2c32c7c1712116ab47bd04e2f4ec09391e4a0c8314\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8d9b2438f076c736b21066739ca6733311665d6f0f8960381d79ecb29b6cc18f\"" Nov 12 18:00:46.798478 containerd[1430]: time="2024-11-12T18:00:46.798326826Z" level=info msg="StartContainer for \"0ef3e5bc596a8da7da17cfa8d95cf558a0df6f5d398400b50b0f1c4bfc6d7d37\"" Nov 12 18:00:46.798686 containerd[1430]: time="2024-11-12T18:00:46.798659071Z" level=info msg="CreateContainer within sandbox \"91c5ec3d5809ec518a42d052d877f9c762f3bf871a1749e2cb1115cdc7482851\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8d1657be2ef683ff7b625499aa36486284dd3eed3c41c03c5bf69760a181f047\"" Nov 12 18:00:46.800725 containerd[1430]: time="2024-11-12T18:00:46.800698730Z" level=info msg="StartContainer for \"8d9b2438f076c736b21066739ca6733311665d6f0f8960381d79ecb29b6cc18f\"" Nov 12 18:00:46.801304 containerd[1430]: time="2024-11-12T18:00:46.800729960Z" level=info msg="StartContainer for \"8d1657be2ef683ff7b625499aa36486284dd3eed3c41c03c5bf69760a181f047\"" Nov 12 18:00:46.809469 kubelet[2097]: W1112 18:00:46.809405 2097 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Nov 12 18:00:46.809540 kubelet[2097]: E1112 18:00:46.809474 2097 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Nov 12 18:00:46.824712 systemd[1]: Started cri-containerd-0ef3e5bc596a8da7da17cfa8d95cf558a0df6f5d398400b50b0f1c4bfc6d7d37.scope - libcontainer container 0ef3e5bc596a8da7da17cfa8d95cf558a0df6f5d398400b50b0f1c4bfc6d7d37. Nov 12 18:00:46.829118 systemd[1]: Started cri-containerd-8d1657be2ef683ff7b625499aa36486284dd3eed3c41c03c5bf69760a181f047.scope - libcontainer container 8d1657be2ef683ff7b625499aa36486284dd3eed3c41c03c5bf69760a181f047. Nov 12 18:00:46.830015 systemd[1]: Started cri-containerd-8d9b2438f076c736b21066739ca6733311665d6f0f8960381d79ecb29b6cc18f.scope - libcontainer container 8d9b2438f076c736b21066739ca6733311665d6f0f8960381d79ecb29b6cc18f. 
Nov 12 18:00:46.894674 containerd[1430]: time="2024-11-12T18:00:46.894140810Z" level=info msg="StartContainer for \"0ef3e5bc596a8da7da17cfa8d95cf558a0df6f5d398400b50b0f1c4bfc6d7d37\" returns successfully" Nov 12 18:00:46.894674 containerd[1430]: time="2024-11-12T18:00:46.894401121Z" level=info msg="StartContainer for \"8d9b2438f076c736b21066739ca6733311665d6f0f8960381d79ecb29b6cc18f\" returns successfully" Nov 12 18:00:46.894674 containerd[1430]: time="2024-11-12T18:00:46.894505285Z" level=info msg="StartContainer for \"8d1657be2ef683ff7b625499aa36486284dd3eed3c41c03c5bf69760a181f047\" returns successfully" Nov 12 18:00:47.007469 kubelet[2097]: E1112 18:00:47.007404 2097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="1.6s" Nov 12 18:00:47.012468 kubelet[2097]: W1112 18:00:47.012332 2097 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Nov 12 18:00:47.012468 kubelet[2097]: E1112 18:00:47.012391 2097 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Nov 12 18:00:47.176834 kubelet[2097]: I1112 18:00:47.176735 2097 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 18:00:47.632958 kubelet[2097]: E1112 18:00:47.632927 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:47.636592 kubelet[2097]: E1112 18:00:47.635794 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:47.637375 kubelet[2097]: E1112 18:00:47.637349 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:48.272953 kubelet[2097]: I1112 18:00:48.272912 2097 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Nov 12 18:00:48.272953 kubelet[2097]: E1112 18:00:48.272954 2097 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 12 18:00:48.283820 kubelet[2097]: E1112 18:00:48.283792 2097 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 18:00:48.384925 kubelet[2097]: E1112 18:00:48.384874 2097 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 18:00:48.485653 kubelet[2097]: E1112 18:00:48.485604 2097 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 18:00:48.586723 kubelet[2097]: E1112 18:00:48.586299 2097 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 18:00:48.638524 kubelet[2097]: E1112 
18:00:48.638501 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:48.686798 kubelet[2097]: E1112 18:00:48.686764 2097 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 18:00:48.787605 kubelet[2097]: E1112 18:00:48.787564 2097 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 18:00:48.888653 kubelet[2097]: E1112 18:00:48.888619 2097 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 18:00:48.989011 kubelet[2097]: E1112 18:00:48.988963 2097 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 18:00:49.089710 kubelet[2097]: E1112 18:00:49.089665 2097 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 18:00:49.592295 kubelet[2097]: I1112 18:00:49.592248 2097 apiserver.go:52] "Watching apiserver" Nov 12 18:00:49.600705 kubelet[2097]: I1112 18:00:49.600661 2097 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 12 18:00:49.650086 kubelet[2097]: E1112 18:00:49.649979 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:50.025762 systemd[1]: Reloading requested from client PID 2376 ('systemctl') (unit session-7.scope)... Nov 12 18:00:50.025776 systemd[1]: Reloading... Nov 12 18:00:50.099576 zram_generator::config[2415]: No configuration found. Nov 12 18:00:50.179438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 18:00:50.243663 systemd[1]: Reloading finished in 217 ms. Nov 12 18:00:50.275134 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 18:00:50.287042 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 18:00:50.287236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:00:50.287281 systemd[1]: kubelet.service: Consumed 1.315s CPU time, 117.6M memory peak, 0B memory swap peak. Nov 12 18:00:50.294788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 18:00:50.386264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:00:50.390641 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 18:00:50.430747 kubelet[2457]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 18:00:50.430747 kubelet[2457]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 18:00:50.430747 kubelet[2457]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 18:00:50.431173 kubelet[2457]: I1112 18:00:50.430862 2457 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 18:00:50.436933 kubelet[2457]: I1112 18:00:50.436888 2457 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 18:00:50.436933 kubelet[2457]: I1112 18:00:50.436918 2457 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 18:00:50.437130 kubelet[2457]: I1112 18:00:50.437113 2457 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 18:00:50.438465 kubelet[2457]: I1112 18:00:50.438443 2457 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 18:00:50.440623 kubelet[2457]: I1112 18:00:50.440599 2457 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 18:00:50.443413 kubelet[2457]: E1112 18:00:50.443383 2457 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 18:00:50.443477 kubelet[2457]: I1112 18:00:50.443412 2457 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 12 18:00:50.445948 kubelet[2457]: I1112 18:00:50.445925 2457 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 18:00:50.446066 kubelet[2457]: I1112 18:00:50.446040 2457 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 18:00:50.446185 kubelet[2457]: I1112 18:00:50.446148 2457 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 18:00:50.446323 kubelet[2457]: I1112 18:00:50.446176 2457 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 12 18:00:50.446398 kubelet[2457]: I1112 18:00:50.446327 2457 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 18:00:50.446398 kubelet[2457]: I1112 18:00:50.446335 2457 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 18:00:50.446398 kubelet[2457]: I1112 18:00:50.446362 2457 state_mem.go:36] "Initialized new in-memory state store" Nov 12 18:00:50.446469 kubelet[2457]: I1112 18:00:50.446450 2457 kubelet.go:408] "Attempting to sync node with API server" Nov 12 18:00:50.446497 kubelet[2457]: I1112 18:00:50.446471 2457 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 18:00:50.446497 kubelet[2457]: I1112 18:00:50.446493 2457 kubelet.go:314] "Adding apiserver pod source" Nov 12 18:00:50.446557 kubelet[2457]: I1112 18:00:50.446501 2457 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 18:00:50.447976 kubelet[2457]: I1112 18:00:50.446940 2457 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 18:00:50.447976 kubelet[2457]: I1112 18:00:50.447347 2457 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 18:00:50.448219 kubelet[2457]: I1112 18:00:50.448191 2457 server.go:1269] "Started kubelet" Nov 12 18:00:50.448349 kubelet[2457]: I1112 18:00:50.448319 2457 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 18:00:50.448509 kubelet[2457]: I1112 18:00:50.448450 2457 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 18:00:50.448782 kubelet[2457]: I1112 18:00:50.448760 2457 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 18:00:50.451549 kubelet[2457]: I1112 18:00:50.449834 2457 server.go:460] "Adding debug handlers to kubelet server" Nov 12 18:00:50.454051 kubelet[2457]: I1112 18:00:50.454024 2457 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 18:00:50.454489 kubelet[2457]: I1112 18:00:50.454450 2457 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 12 18:00:50.455039 kubelet[2457]: I1112 18:00:50.454843 2457 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 12 18:00:50.455039 kubelet[2457]: E1112 18:00:50.454937 2457 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 18:00:50.455039 kubelet[2457]: I1112 18:00:50.455006 2457 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 12 18:00:50.455138 kubelet[2457]: I1112 18:00:50.455120 2457 reconciler.go:26] "Reconciler: start to sync state" Nov 12 18:00:50.458601 kubelet[2457]: I1112 18:00:50.457852 2457 factory.go:221] Registration of the systemd container factory successfully Nov 12 18:00:50.458601 kubelet[2457]: I1112 18:00:50.457952 2457 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 18:00:50.458601 kubelet[2457]: E1112 18:00:50.458418 2457 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 18:00:50.459013 kubelet[2457]: I1112 18:00:50.458969 2457 factory.go:221] Registration of the containerd container factory successfully Nov 12 18:00:50.470055 kubelet[2457]: I1112 18:00:50.470009 2457 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 18:00:50.474641 kubelet[2457]: I1112 18:00:50.474595 2457 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 18:00:50.474641 kubelet[2457]: I1112 18:00:50.474627 2457 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 18:00:50.474641 kubelet[2457]: I1112 18:00:50.474646 2457 kubelet.go:2321] "Starting kubelet main sync loop" Nov 12 18:00:50.474781 kubelet[2457]: E1112 18:00:50.474698 2457 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 18:00:50.506195 kubelet[2457]: I1112 18:00:50.506158 2457 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 18:00:50.506195 kubelet[2457]: I1112 18:00:50.506177 2457 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 18:00:50.506195 kubelet[2457]: I1112 18:00:50.506196 2457 state_mem.go:36] "Initialized new in-memory state store" Nov 12 18:00:50.506353 kubelet[2457]: I1112 18:00:50.506336 2457 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 18:00:50.506378 kubelet[2457]: I1112 18:00:50.506346 2457 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 18:00:50.506378 kubelet[2457]: I1112 18:00:50.506364 2457 policy_none.go:49] "None policy: Start" Nov 12 18:00:50.507019 kubelet[2457]: I1112 18:00:50.506999 2457 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 18:00:50.507067 kubelet[2457]: I1112 18:00:50.507027 2457 state_mem.go:35] "Initializing new in-memory state store" Nov 12 18:00:50.507216 kubelet[2457]: I1112 18:00:50.507178 2457 state_mem.go:75] "Updated machine memory state" Nov 12 18:00:50.511066 kubelet[2457]: I1112 18:00:50.511035 2457 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 18:00:50.511215 kubelet[2457]: I1112 18:00:50.511201 2457 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 12 18:00:50.511262 kubelet[2457]: I1112 18:00:50.511217 2457 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 18:00:50.511397 kubelet[2457]: I1112 18:00:50.511381 2457 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 18:00:50.581454 kubelet[2457]: E1112 18:00:50.581256 2457 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 18:00:50.617138 kubelet[2457]: I1112 18:00:50.617091 2457 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 18:00:50.623186 kubelet[2457]: I1112 18:00:50.623161 2457 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Nov 12 18:00:50.623291 kubelet[2457]: I1112 18:00:50.623236 2457 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Nov 12 18:00:50.656256 kubelet[2457]: I1112 18:00:50.656137 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:00:50.656256 kubelet[2457]: I1112 18:00:50.656172 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost" Nov 12 18:00:50.656256 kubelet[2457]: I1112 18:00:50.656195 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68a7798c7a4214a7d071317f8c03f23f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"68a7798c7a4214a7d071317f8c03f23f\") " pod="kube-system/kube-apiserver-localhost" Nov 12 18:00:50.656256 kubelet[2457]: I1112 18:00:50.656213 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68a7798c7a4214a7d071317f8c03f23f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"68a7798c7a4214a7d071317f8c03f23f\") " pod="kube-system/kube-apiserver-localhost" Nov 12 18:00:50.656256 kubelet[2457]: I1112 18:00:50.656231 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68a7798c7a4214a7d071317f8c03f23f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"68a7798c7a4214a7d071317f8c03f23f\") " pod="kube-system/kube-apiserver-localhost" Nov 12 18:00:50.656520 kubelet[2457]: I1112 18:00:50.656249 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:00:50.656520 kubelet[2457]: I1112 18:00:50.656264 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:00:50.656520 kubelet[2457]: I1112 18:00:50.656294 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:00:50.656520 kubelet[2457]: I1112 18:00:50.656312 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:00:50.882129 kubelet[2457]: E1112 18:00:50.882091 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:50.882300 kubelet[2457]: E1112 18:00:50.882274 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:50.882548 kubelet[2457]: E1112 18:00:50.882521 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:51.020787 sudo[2495]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 12 18:00:51.021099 sudo[2495]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 12 18:00:51.445467 sudo[2495]: pam_unix(sudo:session): session closed for user root Nov 12 18:00:51.447593 kubelet[2457]: I1112 18:00:51.447566 2457 apiserver.go:52] "Watching apiserver" Nov 12 18:00:51.456112 kubelet[2457]: I1112 18:00:51.456091 2457 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 12 18:00:51.490903 kubelet[2457]: E1112 18:00:51.490834 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:51.495914 kubelet[2457]: E1112 18:00:51.495885 2457 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 12 18:00:51.496125 kubelet[2457]: E1112 18:00:51.496038 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:51.499526 kubelet[2457]: E1112 18:00:51.499504 2457 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 18:00:51.499846 kubelet[2457]: E1112 18:00:51.499788 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:51.522661 kubelet[2457]: I1112 18:00:51.522454 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.522433721 podStartE2EDuration="1.522433721s" podCreationTimestamp="2024-11-12 18:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:00:51.513944834 +0000 UTC m=+1.120058971" watchObservedRunningTime="2024-11-12 18:00:51.522433721 +0000 UTC m=+1.128547818" Nov 12 18:00:51.531199 kubelet[2457]: I1112 18:00:51.531070 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5310535349999999 podStartE2EDuration="1.531053535s" podCreationTimestamp="2024-11-12 18:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:00:51.531051736 +0000 UTC m=+1.137165873" watchObservedRunningTime="2024-11-12 18:00:51.531053535 +0000 UTC m=+1.137167672" Nov 12 18:00:51.531199 kubelet[2457]: I1112 18:00:51.531188 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.531182583 podStartE2EDuration="2.531182583s" podCreationTimestamp="2024-11-12 18:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:00:51.522905803 +0000 UTC m=+1.129019900" watchObservedRunningTime="2024-11-12 18:00:51.531182583 +0000 UTC m=+1.137296680" Nov 12 18:00:52.492575 kubelet[2457]: E1112 18:00:52.492333 2457 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:52.492575 kubelet[2457]: E1112 18:00:52.492483 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:52.854618 kubelet[2457]: E1112 18:00:52.854478 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:53.341541 sudo[1613]: pam_unix(sudo:session): session closed for user root Nov 12 18:00:53.343239 sshd[1610]: pam_unix(sshd:session): session closed for user core Nov 12 18:00:53.346165 systemd[1]: sshd@6-10.0.0.125:22-10.0.0.1:45136.service: Deactivated successfully. Nov 12 18:00:53.348422 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 18:00:53.349706 systemd[1]: session-7.scope: Consumed 6.688s CPU time, 150.8M memory peak, 0B memory swap peak. Nov 12 18:00:53.351049 systemd-logind[1415]: Session 7 logged out. Waiting for processes to exit. Nov 12 18:00:53.352318 systemd-logind[1415]: Removed session 7. Nov 12 18:00:55.337360 kubelet[2457]: I1112 18:00:55.337265 2457 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 18:00:55.337916 kubelet[2457]: I1112 18:00:55.337791 2457 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 18:00:55.337946 containerd[1430]: time="2024-11-12T18:00:55.337595554Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 12 18:00:56.139166 kubelet[2457]: E1112 18:00:56.139087 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:56.300247 kubelet[2457]: I1112 18:00:56.299695 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-cilium-run\") pod \"cilium-pqwl6\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300247 kubelet[2457]: I1112 18:00:56.299741 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-lib-modules\") pod \"cilium-pqwl6\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300247 kubelet[2457]: I1112 18:00:56.299760 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-hostproc\") pod \"cilium-pqwl6\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300247 kubelet[2457]: I1112 18:00:56.299780 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-host-proc-sys-kernel\") pod \"cilium-pqwl6\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300247 kubelet[2457]: I1112 18:00:56.299799 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-cilium-cgroup\") pod \"cilium-pqwl6\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300247 kubelet[2457]: I1112 18:00:56.299818 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-cni-path\") pod \"cilium-pqwl6\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300567 kubelet[2457]: I1112 18:00:56.299839 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/26b7f331-5976-4c89-b82e-7aa2d01af351-clustermesh-secrets\") pod \"cilium-pqwl6\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300567 kubelet[2457]: I1112 18:00:56.299865 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl4bp\" (UniqueName: \"kubernetes.io/projected/26b7f331-5976-4c89-b82e-7aa2d01af351-kube-api-access-tl4bp\") pod \"cilium-pqwl6\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300567 kubelet[2457]: I1112 18:00:56.299887 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-etc-cni-netd\") pod \"cilium-pqwl6\" (UID: 
\"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300567 kubelet[2457]: I1112 18:00:56.299907 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-host-proc-sys-net\") pod \"cilium-pqwl6\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300567 kubelet[2457]: I1112 18:00:56.299925 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/26b7f331-5976-4c89-b82e-7aa2d01af351-hubble-tls\") pod \"cilium-pqwl6\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300567 kubelet[2457]: I1112 18:00:56.299941 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e3828dc2-c5b2-4b9f-821b-f2065f6dbe9c-kube-proxy\") pod \"kube-proxy-vk5zt\" (UID: \"e3828dc2-c5b2-4b9f-821b-f2065f6dbe9c\") " pod="kube-system/kube-proxy-vk5zt" Nov 12 18:00:56.300722 kubelet[2457]: I1112 18:00:56.299963 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3828dc2-c5b2-4b9f-821b-f2065f6dbe9c-xtables-lock\") pod \"kube-proxy-vk5zt\" (UID: \"e3828dc2-c5b2-4b9f-821b-f2065f6dbe9c\") " pod="kube-system/kube-proxy-vk5zt" Nov 12 18:00:56.300722 kubelet[2457]: I1112 18:00:56.299985 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3828dc2-c5b2-4b9f-821b-f2065f6dbe9c-lib-modules\") pod \"kube-proxy-vk5zt\" (UID: \"e3828dc2-c5b2-4b9f-821b-f2065f6dbe9c\") " pod="kube-system/kube-proxy-vk5zt" Nov 12 18:00:56.300722 kubelet[2457]: I1112 18:00:56.300005 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26b7f331-5976-4c89-b82e-7aa2d01af351-cilium-config-path\") pod \"cilium-pqwl6\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300722 kubelet[2457]: I1112 18:00:56.300026 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-bpf-maps\") pod \"cilium-pqwl6\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300722 kubelet[2457]: I1112 18:00:56.300043 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-xtables-lock\") pod \"cilium-pqwl6\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " pod="kube-system/cilium-pqwl6" Nov 12 18:00:56.300835 kubelet[2457]: I1112 18:00:56.300062 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gfrl\" (UniqueName: \"kubernetes.io/projected/e3828dc2-c5b2-4b9f-821b-f2065f6dbe9c-kube-api-access-5gfrl\") pod \"kube-proxy-vk5zt\" (UID: \"e3828dc2-c5b2-4b9f-821b-f2065f6dbe9c\") " pod="kube-system/kube-proxy-vk5zt" Nov 12 18:00:56.304435 systemd[1]: Created slice 
kubepods-besteffort-pode3828dc2_c5b2_4b9f_821b_f2065f6dbe9c.slice - libcontainer container kubepods-besteffort-pode3828dc2_c5b2_4b9f_821b_f2065f6dbe9c.slice. Nov 12 18:00:56.320358 systemd[1]: Created slice kubepods-burstable-pod26b7f331_5976_4c89_b82e_7aa2d01af351.slice - libcontainer container kubepods-burstable-pod26b7f331_5976_4c89_b82e_7aa2d01af351.slice. Nov 12 18:00:56.498941 kubelet[2457]: E1112 18:00:56.498846 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:56.500846 kubelet[2457]: I1112 18:00:56.500811 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86kt7\" (UniqueName: \"kubernetes.io/projected/ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d-kube-api-access-86kt7\") pod \"cilium-operator-5d85765b45-k7lxs\" (UID: \"ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d\") " pod="kube-system/cilium-operator-5d85765b45-k7lxs" Nov 12 18:00:56.500846 kubelet[2457]: I1112 18:00:56.500846 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d-cilium-config-path\") pod \"cilium-operator-5d85765b45-k7lxs\" (UID: \"ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d\") " pod="kube-system/cilium-operator-5d85765b45-k7lxs" Nov 12 18:00:56.504436 systemd[1]: Created slice kubepods-besteffort-podab1027f2_88bb_47bc_a6cd_cb5acd71fb8d.slice - libcontainer container kubepods-besteffort-podab1027f2_88bb_47bc_a6cd_cb5acd71fb8d.slice. Nov 12 18:00:56.613843 kubelet[2457]: E1112 18:00:56.613797 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:56.614374 containerd[1430]: time="2024-11-12T18:00:56.614340123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vk5zt,Uid:e3828dc2-c5b2-4b9f-821b-f2065f6dbe9c,Namespace:kube-system,Attempt:0,}" Nov 12 18:00:56.623695 kubelet[2457]: E1112 18:00:56.623649 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:56.624153 containerd[1430]: time="2024-11-12T18:00:56.624111242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pqwl6,Uid:26b7f331-5976-4c89-b82e-7aa2d01af351,Namespace:kube-system,Attempt:0,}" Nov 12 18:00:56.674634 containerd[1430]: time="2024-11-12T18:00:56.674524355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:00:56.674634 containerd[1430]: time="2024-11-12T18:00:56.674618258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:00:56.674634 containerd[1430]: time="2024-11-12T18:00:56.674636215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:00:56.674823 containerd[1430]: time="2024-11-12T18:00:56.674729038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:00:56.674969 containerd[1430]: time="2024-11-12T18:00:56.674838098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:00:56.674969 containerd[1430]: time="2024-11-12T18:00:56.674888809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:00:56.674969 containerd[1430]: time="2024-11-12T18:00:56.674903966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:00:56.675092 containerd[1430]: time="2024-11-12T18:00:56.674963676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:00:56.696745 systemd[1]: Started cri-containerd-6ddac783fdb36130c35052e8c50493ef2102f1a73338ad6924574ae984ffb16c.scope - libcontainer container 6ddac783fdb36130c35052e8c50493ef2102f1a73338ad6924574ae984ffb16c. Nov 12 18:00:56.698027 systemd[1]: Started cri-containerd-a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5.scope - libcontainer container a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5. Nov 12 18:00:56.724091 containerd[1430]: time="2024-11-12T18:00:56.724032831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vk5zt,Uid:e3828dc2-c5b2-4b9f-821b-f2065f6dbe9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ddac783fdb36130c35052e8c50493ef2102f1a73338ad6924574ae984ffb16c\"" Nov 12 18:00:56.724683 containerd[1430]: time="2024-11-12T18:00:56.724627204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pqwl6,Uid:26b7f331-5976-4c89-b82e-7aa2d01af351,Namespace:kube-system,Attempt:0,} returns sandbox id \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\"" Nov 12 18:00:56.725120 kubelet[2457]: E1112 18:00:56.725096 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:56.725449 kubelet[2457]: E1112 18:00:56.725304 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:56.727001 containerd[1430]: time="2024-11-12T18:00:56.726970061Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 12 18:00:56.728077 containerd[1430]: time="2024-11-12T18:00:56.728034989Z" level=info msg="CreateContainer within sandbox \"6ddac783fdb36130c35052e8c50493ef2102f1a73338ad6924574ae984ffb16c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 18:00:56.747790 containerd[1430]: time="2024-11-12T18:00:56.747730119Z" level=info msg="CreateContainer within sandbox \"6ddac783fdb36130c35052e8c50493ef2102f1a73338ad6924574ae984ffb16c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5d97eb7f6693d4294c284cf5740e113635bbf729713ff02a00c4a6bdf5b4f7f7\"" Nov 12 18:00:56.748479 containerd[1430]: time="2024-11-12T18:00:56.748449590Z" level=info msg="StartContainer for \"5d97eb7f6693d4294c284cf5740e113635bbf729713ff02a00c4a6bdf5b4f7f7\"" Nov 12 18:00:56.771698 systemd[1]: Started 
cri-containerd-5d97eb7f6693d4294c284cf5740e113635bbf729713ff02a00c4a6bdf5b4f7f7.scope - libcontainer container 5d97eb7f6693d4294c284cf5740e113635bbf729713ff02a00c4a6bdf5b4f7f7. Nov 12 18:00:56.797034 containerd[1430]: time="2024-11-12T18:00:56.796983681Z" level=info msg="StartContainer for \"5d97eb7f6693d4294c284cf5740e113635bbf729713ff02a00c4a6bdf5b4f7f7\" returns successfully" Nov 12 18:00:56.807976 kubelet[2457]: E1112 18:00:56.807930 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:56.808942 containerd[1430]: time="2024-11-12T18:00:56.808897014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-k7lxs,Uid:ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d,Namespace:kube-system,Attempt:0,}" Nov 12 18:00:56.850367 containerd[1430]: time="2024-11-12T18:00:56.850015002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:00:56.850367 containerd[1430]: time="2024-11-12T18:00:56.850087389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:00:56.850367 containerd[1430]: time="2024-11-12T18:00:56.850103226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:00:56.850367 containerd[1430]: time="2024-11-12T18:00:56.850196290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:00:56.872200 systemd[1]: Started cri-containerd-103cc33ddbb9d03b93a2f76b1db5085eaddaefe7d0aded3ffac9162160e5469c.scope - libcontainer container 103cc33ddbb9d03b93a2f76b1db5085eaddaefe7d0aded3ffac9162160e5469c. 
Nov 12 18:00:56.906427 containerd[1430]: time="2024-11-12T18:00:56.906388001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-k7lxs,Uid:ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"103cc33ddbb9d03b93a2f76b1db5085eaddaefe7d0aded3ffac9162160e5469c\"" Nov 12 18:00:56.907370 kubelet[2457]: E1112 18:00:56.907329 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:00:57.507702 kubelet[2457]: E1112 18:00:57.505600 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:01.276995 kubelet[2457]: E1112 18:01:01.276942 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:01.386622 kubelet[2457]: I1112 18:01:01.386492 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vk5zt" podStartSLOduration=5.386473277 podStartE2EDuration="5.386473277s" podCreationTimestamp="2024-11-12 18:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:00:57.514890473 +0000 UTC m=+7.121004610" watchObservedRunningTime="2024-11-12 18:01:01.386473277 +0000 UTC m=+10.992587414" Nov 12 18:01:02.861787 kubelet[2457]: E1112 18:01:02.861646 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:03.858772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount841168453.mount: Deactivated successfully. 
Nov 12 18:01:05.064848 containerd[1430]: time="2024-11-12T18:01:05.064797323Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:01:05.065505 containerd[1430]: time="2024-11-12T18:01:05.065346308Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651582" Nov 12 18:01:05.069508 containerd[1430]: time="2024-11-12T18:01:05.069471092Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:01:05.071459 containerd[1430]: time="2024-11-12T18:01:05.071429574Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.343795592s" Nov 12 18:01:05.071532 containerd[1430]: time="2024-11-12T18:01:05.071463091Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 12 18:01:05.080704 containerd[1430]: time="2024-11-12T18:01:05.080580172Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 12 18:01:05.086067 containerd[1430]: time="2024-11-12T18:01:05.085855440Z" level=info msg="CreateContainer within sandbox \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 18:01:05.114818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3404586247.mount: Deactivated successfully. Nov 12 18:01:05.115161 containerd[1430]: time="2024-11-12T18:01:05.115030978Z" level=info msg="CreateContainer within sandbox \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1\"" Nov 12 18:01:05.116191 containerd[1430]: time="2024-11-12T18:01:05.116121868Z" level=info msg="StartContainer for \"d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1\"" Nov 12 18:01:05.130712 update_engine[1422]: I20241112 18:01:05.129050 1422 update_attempter.cc:509] Updating boot flags... Nov 12 18:01:05.143696 systemd[1]: Started cri-containerd-d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1.scope - libcontainer container d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1. 
Nov 12 18:01:05.156562 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2892) Nov 12 18:01:05.184729 containerd[1430]: time="2024-11-12T18:01:05.184683515Z" level=info msg="StartContainer for \"d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1\" returns successfully" Nov 12 18:01:05.214414 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2894) Nov 12 18:01:05.243279 systemd[1]: cri-containerd-d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1.scope: Deactivated successfully. Nov 12 18:01:05.347385 containerd[1430]: time="2024-11-12T18:01:05.342380455Z" level=info msg="shim disconnected" id=d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1 namespace=k8s.io Nov 12 18:01:05.347385 containerd[1430]: time="2024-11-12T18:01:05.347300238Z" level=warning msg="cleaning up after shim disconnected" id=d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1 namespace=k8s.io Nov 12 18:01:05.347385 containerd[1430]: time="2024-11-12T18:01:05.347315997Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:01:05.569787 kubelet[2457]: E1112 18:01:05.569744 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:05.572468 containerd[1430]: time="2024-11-12T18:01:05.572406821Z" level=info msg="CreateContainer within sandbox \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 18:01:05.597937 containerd[1430]: time="2024-11-12T18:01:05.597739627Z" level=info msg="CreateContainer within sandbox \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d\"" Nov 12 18:01:05.600973 containerd[1430]: time="2024-11-12T18:01:05.600884510Z" level=info msg="StartContainer for \"bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d\"" Nov 12 18:01:05.626695 systemd[1]: Started cri-containerd-bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d.scope - libcontainer container bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d. Nov 12 18:01:05.648709 containerd[1430]: time="2024-11-12T18:01:05.648668132Z" level=info msg="StartContainer for \"bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d\" returns successfully" Nov 12 18:01:05.671354 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 18:01:05.671642 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 18:01:05.671717 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 12 18:01:05.676851 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 18:01:05.677034 systemd[1]: cri-containerd-bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d.scope: Deactivated successfully. Nov 12 18:01:05.689671 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 12 18:01:05.714179 containerd[1430]: time="2024-11-12T18:01:05.714114613Z" level=info msg="shim disconnected" id=bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d namespace=k8s.io Nov 12 18:01:05.714796 containerd[1430]: time="2024-11-12T18:01:05.714513933Z" level=warning msg="cleaning up after shim disconnected" id=bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d namespace=k8s.io Nov 12 18:01:05.714796 containerd[1430]: time="2024-11-12T18:01:05.714538810Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:01:06.112801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1-rootfs.mount: Deactivated successfully. Nov 12 18:01:06.568238 kubelet[2457]: E1112 18:01:06.568191 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:06.570712 containerd[1430]: time="2024-11-12T18:01:06.570570130Z" level=info msg="CreateContainer within sandbox \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 18:01:06.593149 containerd[1430]: time="2024-11-12T18:01:06.593062004Z" level=info msg="CreateContainer within sandbox \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a\"" Nov 12 18:01:06.594650 containerd[1430]: time="2024-11-12T18:01:06.593584874Z" level=info msg="StartContainer for \"d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a\"" Nov 12 18:01:06.625822 systemd[1]: Started cri-containerd-d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a.scope - libcontainer container d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a. Nov 12 18:01:06.654193 containerd[1430]: time="2024-11-12T18:01:06.652281166Z" level=info msg="StartContainer for \"d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a\" returns successfully" Nov 12 18:01:06.658461 systemd[1]: cri-containerd-d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a.scope: Deactivated successfully. Nov 12 18:01:06.674995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a-rootfs.mount: Deactivated successfully. 
Nov 12 18:01:06.679549 containerd[1430]: time="2024-11-12T18:01:06.679487634Z" level=info msg="shim disconnected" id=d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a namespace=k8s.io Nov 12 18:01:06.679665 containerd[1430]: time="2024-11-12T18:01:06.679540109Z" level=warning msg="cleaning up after shim disconnected" id=d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a namespace=k8s.io Nov 12 18:01:06.679665 containerd[1430]: time="2024-11-12T18:01:06.679564867Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:01:07.572834 kubelet[2457]: E1112 18:01:07.572763 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:07.575960 containerd[1430]: time="2024-11-12T18:01:07.575721785Z" level=info msg="CreateContainer within sandbox \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 18:01:07.590097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2154417673.mount: Deactivated successfully. Nov 12 18:01:07.596877 containerd[1430]: time="2024-11-12T18:01:07.596820395Z" level=info msg="CreateContainer within sandbox \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba\"" Nov 12 18:01:07.597685 containerd[1430]: time="2024-11-12T18:01:07.597490655Z" level=info msg="StartContainer for \"2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba\"" Nov 12 18:01:07.620734 systemd[1]: Started cri-containerd-2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba.scope - libcontainer container 2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba. Nov 12 18:01:07.640389 systemd[1]: cri-containerd-2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba.scope: Deactivated successfully. Nov 12 18:01:07.642038 containerd[1430]: time="2024-11-12T18:01:07.642003231Z" level=info msg="StartContainer for \"2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba\" returns successfully" Nov 12 18:01:07.657474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba-rootfs.mount: Deactivated successfully. 
Nov 12 18:01:07.664267 containerd[1430]: time="2024-11-12T18:01:07.664191985Z" level=info msg="shim disconnected" id=2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba namespace=k8s.io Nov 12 18:01:07.664267 containerd[1430]: time="2024-11-12T18:01:07.664251699Z" level=warning msg="cleaning up after shim disconnected" id=2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba namespace=k8s.io Nov 12 18:01:07.664267 containerd[1430]: time="2024-11-12T18:01:07.664260019Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:01:08.576166 kubelet[2457]: E1112 18:01:08.575855 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:08.578976 containerd[1430]: time="2024-11-12T18:01:08.578822323Z" level=info msg="CreateContainer within sandbox \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 18:01:08.594895 containerd[1430]: time="2024-11-12T18:01:08.594839152Z" level=info msg="CreateContainer within sandbox \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\"" Nov 12 18:01:08.595416 containerd[1430]: time="2024-11-12T18:01:08.595384587Z" level=info msg="StartContainer for \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\"" Nov 12 18:01:08.621718 systemd[1]: Started cri-containerd-74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f.scope - libcontainer container 74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f. Nov 12 18:01:08.657099 containerd[1430]: time="2024-11-12T18:01:08.656940393Z" level=info msg="StartContainer for \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\" returns successfully" Nov 12 18:01:08.766599 kubelet[2457]: I1112 18:01:08.766525 2457 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Nov 12 18:01:08.806832 systemd[1]: Created slice kubepods-burstable-pod1a32b383_ae25_48a3_9997_afeeca673358.slice - libcontainer container kubepods-burstable-pod1a32b383_ae25_48a3_9997_afeeca673358.slice. Nov 12 18:01:08.813420 systemd[1]: Created slice kubepods-burstable-pod0897c61e_9bd1_42f8_bfee_7c6cefebb9b4.slice - libcontainer container kubepods-burstable-pod0897c61e_9bd1_42f8_bfee_7c6cefebb9b4.slice. 
Nov 12 18:01:08.987962 kubelet[2457]: I1112 18:01:08.987908 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0897c61e-9bd1-42f8-bfee-7c6cefebb9b4-config-volume\") pod \"coredns-6f6b679f8f-6gdvl\" (UID: \"0897c61e-9bd1-42f8-bfee-7c6cefebb9b4\") " pod="kube-system/coredns-6f6b679f8f-6gdvl" Nov 12 18:01:08.987962 kubelet[2457]: I1112 18:01:08.987956 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a32b383-ae25-48a3-9997-afeeca673358-config-volume\") pod \"coredns-6f6b679f8f-mk4zc\" (UID: \"1a32b383-ae25-48a3-9997-afeeca673358\") " pod="kube-system/coredns-6f6b679f8f-mk4zc" Nov 12 18:01:08.988133 kubelet[2457]: I1112 18:01:08.987978 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z2kn\" (UniqueName: \"kubernetes.io/projected/0897c61e-9bd1-42f8-bfee-7c6cefebb9b4-kube-api-access-2z2kn\") pod \"coredns-6f6b679f8f-6gdvl\" (UID: \"0897c61e-9bd1-42f8-bfee-7c6cefebb9b4\") " pod="kube-system/coredns-6f6b679f8f-6gdvl" Nov 12 18:01:08.988133 kubelet[2457]: I1112 18:01:08.987998 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85pp7\" (UniqueName: \"kubernetes.io/projected/1a32b383-ae25-48a3-9997-afeeca673358-kube-api-access-85pp7\") pod \"coredns-6f6b679f8f-mk4zc\" (UID: \"1a32b383-ae25-48a3-9997-afeeca673358\") " pod="kube-system/coredns-6f6b679f8f-mk4zc" Nov 12 18:01:09.111988 kubelet[2457]: E1112 18:01:09.111941 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:09.114063 containerd[1430]: time="2024-11-12T18:01:09.113677510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mk4zc,Uid:1a32b383-ae25-48a3-9997-afeeca673358,Namespace:kube-system,Attempt:0,}" Nov 12 18:01:09.116243 kubelet[2457]: E1112 18:01:09.115997 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:09.117634 containerd[1430]: time="2024-11-12T18:01:09.117590445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6gdvl,Uid:0897c61e-9bd1-42f8-bfee-7c6cefebb9b4,Namespace:kube-system,Attempt:0,}" Nov 12 18:01:09.580031 kubelet[2457]: E1112 18:01:09.579999 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:10.582214 kubelet[2457]: E1112 18:01:10.582177 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:10.749893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2600327402.mount: Deactivated successfully. 
Nov 12 18:01:11.584367 kubelet[2457]: E1112 18:01:11.584275 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:11.918877 containerd[1430]: time="2024-11-12T18:01:11.918827572Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:01:11.919361 containerd[1430]: time="2024-11-12T18:01:11.919327778Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138298" Nov 12 18:01:11.920105 containerd[1430]: time="2024-11-12T18:01:11.920072927Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:01:11.921725 containerd[1430]: time="2024-11-12T18:01:11.921689097Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.841071488s" Nov 12 18:01:11.921770 containerd[1430]: time="2024-11-12T18:01:11.921727854Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 12 18:01:11.924617 containerd[1430]: time="2024-11-12T18:01:11.924587498Z" level=info msg="CreateContainer within sandbox \"103cc33ddbb9d03b93a2f76b1db5085eaddaefe7d0aded3ffac9162160e5469c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 12 18:01:11.935013 containerd[1430]: time="2024-11-12T18:01:11.934965388Z" level=info msg="CreateContainer within sandbox \"103cc33ddbb9d03b93a2f76b1db5085eaddaefe7d0aded3ffac9162160e5469c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\"" Nov 12 18:01:11.935348 containerd[1430]: time="2024-11-12T18:01:11.935314324Z" level=info msg="StartContainer for \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\"" Nov 12 18:01:11.962768 systemd[1]: Started cri-containerd-58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2.scope - libcontainer container 58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2. 
Nov 12 18:01:11.985688 containerd[1430]: time="2024-11-12T18:01:11.985646838Z" level=info msg="StartContainer for \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\" returns successfully" Nov 12 18:01:12.587667 kubelet[2457]: E1112 18:01:12.587632 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:12.587983 kubelet[2457]: E1112 18:01:12.587748 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:12.598675 kubelet[2457]: I1112 18:01:12.598612 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pqwl6" podStartSLOduration=8.244699237 podStartE2EDuration="16.598595354s" podCreationTimestamp="2024-11-12 18:00:56 +0000 UTC" firstStartedPulling="2024-11-12 18:00:56.726411202 +0000 UTC m=+6.332525299" lastFinishedPulling="2024-11-12 18:01:05.080307319 +0000 UTC m=+14.686421416" observedRunningTime="2024-11-12 18:01:09.601592308 +0000 UTC m=+19.207706485" watchObservedRunningTime="2024-11-12 18:01:12.598595354 +0000 UTC m=+22.204709491" Nov 12 18:01:13.591095 kubelet[2457]: E1112 18:01:13.589247 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:15.788903 systemd-networkd[1376]: cilium_host: Link UP Nov 12 18:01:15.789079 systemd-networkd[1376]: cilium_net: Link UP Nov 12 18:01:15.789234 systemd-networkd[1376]: cilium_net: Gained carrier Nov 12 18:01:15.789348 systemd-networkd[1376]: cilium_host: Gained carrier Nov 12 18:01:15.789437 systemd-networkd[1376]: cilium_net: Gained IPv6LL Nov 12 18:01:15.789568 systemd-networkd[1376]: cilium_host: Gained IPv6LL Nov 12 18:01:15.877070 systemd-networkd[1376]: cilium_vxlan: Link UP Nov 12 18:01:15.877076 systemd-networkd[1376]: cilium_vxlan: Gained carrier Nov 12 18:01:16.186584 kernel: NET: Registered PF_ALG protocol family Nov 12 18:01:16.753300 systemd-networkd[1376]: lxc_health: Link UP Nov 12 18:01:16.764574 systemd-networkd[1376]: lxc_health: Gained carrier Nov 12 18:01:17.237568 kernel: eth0: renamed from tmpaaea7 Nov 12 18:01:17.249683 systemd-networkd[1376]: lxc002af6bfa398: Link UP Nov 12 18:01:17.251507 systemd-networkd[1376]: lxc762dfe8bc6c3: Link UP Nov 12 18:01:17.257658 kernel: eth0: renamed from tmp0481b Nov 12 18:01:17.264023 systemd-networkd[1376]: lxc002af6bfa398: Gained carrier Nov 12 18:01:17.265696 systemd-networkd[1376]: lxc762dfe8bc6c3: Gained carrier Nov 12 18:01:17.346685 systemd-networkd[1376]: cilium_vxlan: Gained IPv6LL Nov 12 18:01:18.050675 systemd-networkd[1376]: lxc_health: Gained IPv6LL Nov 12 18:01:18.637665 kubelet[2457]: E1112 18:01:18.637576 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:18.654583 kubelet[2457]: I1112 18:01:18.654198 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-k7lxs" podStartSLOduration=7.6405268920000005 podStartE2EDuration="22.65418184s" podCreationTimestamp="2024-11-12 18:00:56 +0000 UTC" firstStartedPulling="2024-11-12 18:00:56.909066478 +0000 UTC m=+6.515180575" lastFinishedPulling="2024-11-12 18:01:11.922721386 +0000 
UTC m=+21.528835523" observedRunningTime="2024-11-12 18:01:12.599642727 +0000 UTC m=+22.205756824" watchObservedRunningTime="2024-11-12 18:01:18.65418184 +0000 UTC m=+28.260295937" Nov 12 18:01:18.691816 systemd-networkd[1376]: lxc762dfe8bc6c3: Gained IPv6LL Nov 12 18:01:18.754755 systemd-networkd[1376]: lxc002af6bfa398: Gained IPv6LL Nov 12 18:01:19.599661 kubelet[2457]: E1112 18:01:19.599618 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:20.812670 containerd[1430]: time="2024-11-12T18:01:20.812532328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:01:20.812670 containerd[1430]: time="2024-11-12T18:01:20.812607285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:01:20.812670 containerd[1430]: time="2024-11-12T18:01:20.812619004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:01:20.812670 containerd[1430]: time="2024-11-12T18:01:20.812349255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:01:20.812670 containerd[1430]: time="2024-11-12T18:01:20.812428132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:01:20.812670 containerd[1430]: time="2024-11-12T18:01:20.812438971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:01:20.812670 containerd[1430]: time="2024-11-12T18:01:20.812521648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:01:20.813338 containerd[1430]: time="2024-11-12T18:01:20.813279779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:01:20.836721 systemd[1]: Started cri-containerd-0481b9233e70474a7b9ba26062170c67684957e88d2378514708715d0edf0e51.scope - libcontainer container 0481b9233e70474a7b9ba26062170c67684957e88d2378514708715d0edf0e51. Nov 12 18:01:20.839215 systemd[1]: Started cri-containerd-aaea70586cdb1248700183d1bf4324decfd13255f7af558055929bb41c15e216.scope - libcontainer container aaea70586cdb1248700183d1bf4324decfd13255f7af558055929bb41c15e216. 
Nov 12 18:01:20.848391 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 18:01:20.850842 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 18:01:20.867411 containerd[1430]: time="2024-11-12T18:01:20.867291951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6gdvl,Uid:0897c61e-9bd1-42f8-bfee-7c6cefebb9b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0481b9233e70474a7b9ba26062170c67684957e88d2378514708715d0edf0e51\"" Nov 12 18:01:20.868204 kubelet[2457]: E1112 18:01:20.868176 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:20.871540 containerd[1430]: time="2024-11-12T18:01:20.871504950Z" level=info msg="CreateContainer within sandbox \"0481b9233e70474a7b9ba26062170c67684957e88d2378514708715d0edf0e51\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 18:01:20.876074 containerd[1430]: time="2024-11-12T18:01:20.876041136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mk4zc,Uid:1a32b383-ae25-48a3-9997-afeeca673358,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaea70586cdb1248700183d1bf4324decfd13255f7af558055929bb41c15e216\"" Nov 12 18:01:20.876854 kubelet[2457]: E1112 18:01:20.876830 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:20.878478 containerd[1430]: time="2024-11-12T18:01:20.878376246Z" level=info msg="CreateContainer within sandbox \"aaea70586cdb1248700183d1bf4324decfd13255f7af558055929bb41c15e216\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 18:01:20.889698 containerd[1430]: time="2024-11-12T18:01:20.889659654Z" level=info msg="CreateContainer within sandbox \"0481b9233e70474a7b9ba26062170c67684957e88d2378514708715d0edf0e51\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fad6d8bf1881f867fec90969799ae7f9a7aa0a971e63e67ae9a488e6218fc974\"" Nov 12 18:01:20.890356 containerd[1430]: time="2024-11-12T18:01:20.890194914Z" level=info msg="StartContainer for \"fad6d8bf1881f867fec90969799ae7f9a7aa0a971e63e67ae9a488e6218fc974\"" Nov 12 18:01:20.916731 systemd[1]: Started cri-containerd-fad6d8bf1881f867fec90969799ae7f9a7aa0a971e63e67ae9a488e6218fc974.scope - libcontainer container fad6d8bf1881f867fec90969799ae7f9a7aa0a971e63e67ae9a488e6218fc974. 
Nov 12 18:01:20.937055 containerd[1430]: time="2024-11-12T18:01:20.936951923Z" level=info msg="StartContainer for \"fad6d8bf1881f867fec90969799ae7f9a7aa0a971e63e67ae9a488e6218fc974\" returns successfully" Nov 12 18:01:20.950481 containerd[1430]: time="2024-11-12T18:01:20.950365010Z" level=info msg="CreateContainer within sandbox \"aaea70586cdb1248700183d1bf4324decfd13255f7af558055929bb41c15e216\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f6bb8853286da474ca2adbd332349b747fc300942ceef7ac5ec0e4e6dfb87ad\"" Nov 12 18:01:20.950929 containerd[1430]: time="2024-11-12T18:01:20.950893989Z" level=info msg="StartContainer for \"4f6bb8853286da474ca2adbd332349b747fc300942ceef7ac5ec0e4e6dfb87ad\"" Nov 12 18:01:20.978730 systemd[1]: Started cri-containerd-4f6bb8853286da474ca2adbd332349b747fc300942ceef7ac5ec0e4e6dfb87ad.scope - libcontainer container 4f6bb8853286da474ca2adbd332349b747fc300942ceef7ac5ec0e4e6dfb87ad. Nov 12 18:01:21.006446 containerd[1430]: time="2024-11-12T18:01:21.006392358Z" level=info msg="StartContainer for \"4f6bb8853286da474ca2adbd332349b747fc300942ceef7ac5ec0e4e6dfb87ad\" returns successfully" Nov 12 18:01:21.624173 kubelet[2457]: E1112 18:01:21.623037 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:21.624864 kubelet[2457]: E1112 18:01:21.624838 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:21.642311 kubelet[2457]: I1112 18:01:21.642243 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-mk4zc" podStartSLOduration=25.642228132 podStartE2EDuration="25.642228132s" podCreationTimestamp="2024-11-12 18:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:01:21.641911823 +0000 UTC m=+31.248025960" watchObservedRunningTime="2024-11-12 18:01:21.642228132 +0000 UTC m=+31.248342269" Nov 12 18:01:21.642447 kubelet[2457]: I1112 18:01:21.642328 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6gdvl" podStartSLOduration=25.642323808 podStartE2EDuration="25.642323808s" podCreationTimestamp="2024-11-12 18:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:01:21.632963304 +0000 UTC m=+31.239077481" watchObservedRunningTime="2024-11-12 18:01:21.642323808 +0000 UTC m=+31.248437945" Nov 12 18:01:21.817850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258244803.mount: Deactivated successfully. Nov 12 18:01:22.615262 systemd[1]: Started sshd@7-10.0.0.125:22-10.0.0.1:58516.service - OpenSSH per-connection server daemon (10.0.0.1:58516). 
Nov 12 18:01:22.626246 kubelet[2457]: E1112 18:01:22.626107 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:22.626535 kubelet[2457]: E1112 18:01:22.626472 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:22.658283 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 58516 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:01:22.659845 sshd[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:01:22.668171 systemd-logind[1415]: New session 8 of user core. Nov 12 18:01:22.691424 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 18:01:22.758163 kernel: hrtimer: interrupt took 3300409 ns Nov 12 18:01:22.840753 sshd[3873]: pam_unix(sshd:session): session closed for user core Nov 12 18:01:22.844371 systemd[1]: sshd@7-10.0.0.125:22-10.0.0.1:58516.service: Deactivated successfully. Nov 12 18:01:22.845994 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 18:01:22.847967 systemd-logind[1415]: Session 8 logged out. Waiting for processes to exit. Nov 12 18:01:22.851337 systemd-logind[1415]: Removed session 8. Nov 12 18:01:23.630000 kubelet[2457]: E1112 18:01:23.628428 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:01:27.853539 systemd[1]: Started sshd@8-10.0.0.125:22-10.0.0.1:58524.service - OpenSSH per-connection server daemon (10.0.0.1:58524). Nov 12 18:01:27.894629 sshd[3891]: Accepted publickey for core from 10.0.0.1 port 58524 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:01:27.895987 sshd[3891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:01:27.902648 systemd-logind[1415]: New session 9 of user core. Nov 12 18:01:27.915149 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 18:01:28.048742 sshd[3891]: pam_unix(sshd:session): session closed for user core Nov 12 18:01:28.052108 systemd[1]: sshd@8-10.0.0.125:22-10.0.0.1:58524.service: Deactivated successfully. Nov 12 18:01:28.053754 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 18:01:28.055363 systemd-logind[1415]: Session 9 logged out. Waiting for processes to exit. Nov 12 18:01:28.056491 systemd-logind[1415]: Removed session 9. Nov 12 18:01:33.074868 systemd[1]: Started sshd@9-10.0.0.125:22-10.0.0.1:49962.service - OpenSSH per-connection server daemon (10.0.0.1:49962). Nov 12 18:01:33.116186 sshd[3910]: Accepted publickey for core from 10.0.0.1 port 49962 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:01:33.118047 sshd[3910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:01:33.122614 systemd-logind[1415]: New session 10 of user core. Nov 12 18:01:33.133745 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 18:01:33.255788 sshd[3910]: pam_unix(sshd:session): session closed for user core Nov 12 18:01:33.259680 systemd[1]: sshd@9-10.0.0.125:22-10.0.0.1:49962.service: Deactivated successfully. Nov 12 18:01:33.261222 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 18:01:33.261944 systemd-logind[1415]: Session 10 logged out. 
Waiting for processes to exit. Nov 12 18:01:33.262777 systemd-logind[1415]: Removed session 10. Nov 12 18:01:38.276824 systemd[1]: Started sshd@10-10.0.0.125:22-10.0.0.1:49976.service - OpenSSH per-connection server daemon (10.0.0.1:49976). Nov 12 18:01:38.311957 sshd[3926]: Accepted publickey for core from 10.0.0.1 port 49976 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:01:38.312382 sshd[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:01:38.316529 systemd-logind[1415]: New session 11 of user core. Nov 12 18:01:38.328761 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 18:01:38.449057 sshd[3926]: pam_unix(sshd:session): session closed for user core Nov 12 18:01:38.464924 systemd[1]: sshd@10-10.0.0.125:22-10.0.0.1:49976.service: Deactivated successfully. Nov 12 18:01:38.466512 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 18:01:38.469759 systemd-logind[1415]: Session 11 logged out. Waiting for processes to exit. Nov 12 18:01:38.478816 systemd[1]: Started sshd@11-10.0.0.125:22-10.0.0.1:49992.service - OpenSSH per-connection server daemon (10.0.0.1:49992). Nov 12 18:01:38.479991 systemd-logind[1415]: Removed session 11. Nov 12 18:01:38.510686 sshd[3941]: Accepted publickey for core from 10.0.0.1 port 49992 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:01:38.511974 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:01:38.518122 systemd-logind[1415]: New session 12 of user core. Nov 12 18:01:38.527724 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 18:01:38.696091 sshd[3941]: pam_unix(sshd:session): session closed for user core Nov 12 18:01:38.708110 systemd[1]: sshd@11-10.0.0.125:22-10.0.0.1:49992.service: Deactivated successfully. Nov 12 18:01:38.711453 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 18:01:38.715653 systemd-logind[1415]: Session 12 logged out. Waiting for processes to exit. Nov 12 18:01:38.722019 systemd[1]: Started sshd@12-10.0.0.125:22-10.0.0.1:49996.service - OpenSSH per-connection server daemon (10.0.0.1:49996). Nov 12 18:01:38.725132 systemd-logind[1415]: Removed session 12. Nov 12 18:01:38.754839 sshd[3955]: Accepted publickey for core from 10.0.0.1 port 49996 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:01:38.756030 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:01:38.759390 systemd-logind[1415]: New session 13 of user core. Nov 12 18:01:38.770805 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 18:01:38.887326 sshd[3955]: pam_unix(sshd:session): session closed for user core Nov 12 18:01:38.890418 systemd[1]: sshd@12-10.0.0.125:22-10.0.0.1:49996.service: Deactivated successfully. Nov 12 18:01:38.892055 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 18:01:38.892671 systemd-logind[1415]: Session 13 logged out. Waiting for processes to exit. Nov 12 18:01:38.893399 systemd-logind[1415]: Removed session 13. Nov 12 18:01:43.899464 systemd[1]: Started sshd@13-10.0.0.125:22-10.0.0.1:37942.service - OpenSSH per-connection server daemon (10.0.0.1:37942). 
Nov 12 18:01:43.934372 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 37942 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:01:43.935710 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:01:43.939477 systemd-logind[1415]: New session 14 of user core. Nov 12 18:01:43.950712 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 18:01:44.063157 sshd[3970]: pam_unix(sshd:session): session closed for user core Nov 12 18:01:44.065797 systemd[1]: sshd@13-10.0.0.125:22-10.0.0.1:37942.service: Deactivated successfully. Nov 12 18:01:44.067459 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 18:01:44.070181 systemd-logind[1415]: Session 14 logged out. Waiting for processes to exit. Nov 12 18:01:44.071026 systemd-logind[1415]: Removed session 14. Nov 12 18:01:49.073324 systemd[1]: Started sshd@14-10.0.0.125:22-10.0.0.1:37948.service - OpenSSH per-connection server daemon (10.0.0.1:37948). Nov 12 18:01:49.112113 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 37948 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:01:49.113749 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:01:49.117600 systemd-logind[1415]: New session 15 of user core. Nov 12 18:01:49.124703 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 18:01:49.244961 sshd[3984]: pam_unix(sshd:session): session closed for user core Nov 12 18:01:49.253127 systemd[1]: sshd@14-10.0.0.125:22-10.0.0.1:37948.service: Deactivated successfully. Nov 12 18:01:49.255055 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 18:01:49.256800 systemd-logind[1415]: Session 15 logged out. Waiting for processes to exit. Nov 12 18:01:49.264793 systemd[1]: Started sshd@15-10.0.0.125:22-10.0.0.1:37956.service - OpenSSH per-connection server daemon (10.0.0.1:37956). Nov 12 18:01:49.265232 systemd-logind[1415]: Removed session 15. Nov 12 18:01:49.296307 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 37956 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:01:49.297668 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:01:49.301693 systemd-logind[1415]: New session 16 of user core. Nov 12 18:01:49.311688 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 18:01:49.579181 sshd[3998]: pam_unix(sshd:session): session closed for user core Nov 12 18:01:49.587522 systemd[1]: sshd@15-10.0.0.125:22-10.0.0.1:37956.service: Deactivated successfully. Nov 12 18:01:49.590131 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 18:01:49.591414 systemd-logind[1415]: Session 16 logged out. Waiting for processes to exit. Nov 12 18:01:49.597856 systemd[1]: Started sshd@16-10.0.0.125:22-10.0.0.1:37964.service - OpenSSH per-connection server daemon (10.0.0.1:37964). Nov 12 18:01:49.599578 systemd-logind[1415]: Removed session 16. Nov 12 18:01:49.629810 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 37964 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:01:49.631001 sshd[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:01:49.634684 systemd-logind[1415]: New session 17 of user core. Nov 12 18:01:49.645693 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 12 18:01:50.896231 sshd[4011]: pam_unix(sshd:session): session closed for user core Nov 12 18:01:50.910007 systemd[1]: sshd@16-10.0.0.125:22-10.0.0.1:37964.service: Deactivated successfully. Nov 12 18:01:50.912679 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 18:01:50.917003 systemd-logind[1415]: Session 17 logged out. Waiting for processes to exit. Nov 12 18:01:50.924431 systemd[1]: Started sshd@17-10.0.0.125:22-10.0.0.1:37966.service - OpenSSH per-connection server daemon (10.0.0.1:37966). Nov 12 18:01:50.926692 systemd-logind[1415]: Removed session 17. Nov 12 18:01:50.959460 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 37966 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:01:50.960865 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:01:50.964450 systemd-logind[1415]: New session 18 of user core. Nov 12 18:01:50.972700 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 18:01:51.198313 sshd[4034]: pam_unix(sshd:session): session closed for user core Nov 12 18:01:51.211493 systemd[1]: sshd@17-10.0.0.125:22-10.0.0.1:37966.service: Deactivated successfully. Nov 12 18:01:51.214907 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 18:01:51.217788 systemd-logind[1415]: Session 18 logged out. Waiting for processes to exit. Nov 12 18:01:51.228865 systemd[1]: Started sshd@18-10.0.0.125:22-10.0.0.1:37968.service - OpenSSH per-connection server daemon (10.0.0.1:37968). Nov 12 18:01:51.229738 systemd-logind[1415]: Removed session 18. Nov 12 18:01:51.259536 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 37968 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:01:51.260858 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:01:51.265007 systemd-logind[1415]: New session 19 of user core. Nov 12 18:01:51.276709 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 18:01:51.385847 sshd[4046]: pam_unix(sshd:session): session closed for user core Nov 12 18:01:51.389135 systemd[1]: sshd@18-10.0.0.125:22-10.0.0.1:37968.service: Deactivated successfully. Nov 12 18:01:51.390778 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 18:01:51.391337 systemd-logind[1415]: Session 19 logged out. Waiting for processes to exit. Nov 12 18:01:51.392142 systemd-logind[1415]: Removed session 19. Nov 12 18:01:56.396054 systemd[1]: Started sshd@19-10.0.0.125:22-10.0.0.1:49526.service - OpenSSH per-connection server daemon (10.0.0.1:49526). Nov 12 18:01:56.429403 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 49526 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:01:56.430613 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:01:56.434155 systemd-logind[1415]: New session 20 of user core. Nov 12 18:01:56.443676 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 18:01:56.546433 sshd[4064]: pam_unix(sshd:session): session closed for user core Nov 12 18:01:56.549159 systemd-logind[1415]: Session 20 logged out. Waiting for processes to exit. Nov 12 18:01:56.549331 systemd[1]: sshd@19-10.0.0.125:22-10.0.0.1:49526.service: Deactivated successfully. Nov 12 18:01:56.551876 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 18:01:56.553279 systemd-logind[1415]: Removed session 20. 
Nov 12 18:02:01.561408 systemd[1]: Started sshd@20-10.0.0.125:22-10.0.0.1:49540.service - OpenSSH per-connection server daemon (10.0.0.1:49540). Nov 12 18:02:01.597187 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 49540 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:02:01.598485 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:02:01.602169 systemd-logind[1415]: New session 21 of user core. Nov 12 18:02:01.612698 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 18:02:01.717888 sshd[4080]: pam_unix(sshd:session): session closed for user core Nov 12 18:02:01.721135 systemd[1]: sshd@20-10.0.0.125:22-10.0.0.1:49540.service: Deactivated successfully. Nov 12 18:02:01.722777 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 18:02:01.723427 systemd-logind[1415]: Session 21 logged out. Waiting for processes to exit. Nov 12 18:02:01.724310 systemd-logind[1415]: Removed session 21. Nov 12 18:02:05.475782 kubelet[2457]: E1112 18:02:05.475750 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:06.729089 systemd[1]: Started sshd@21-10.0.0.125:22-10.0.0.1:47932.service - OpenSSH per-connection server daemon (10.0.0.1:47932). Nov 12 18:02:06.764585 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 47932 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:02:06.765512 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:02:06.769442 systemd-logind[1415]: New session 22 of user core. Nov 12 18:02:06.778686 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 18:02:06.883060 sshd[4094]: pam_unix(sshd:session): session closed for user core Nov 12 18:02:06.894967 systemd[1]: sshd@21-10.0.0.125:22-10.0.0.1:47932.service: Deactivated successfully. Nov 12 18:02:06.896409 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 18:02:06.899705 systemd-logind[1415]: Session 22 logged out. Waiting for processes to exit. Nov 12 18:02:06.915870 systemd[1]: Started sshd@22-10.0.0.125:22-10.0.0.1:47938.service - OpenSSH per-connection server daemon (10.0.0.1:47938). Nov 12 18:02:06.916886 systemd-logind[1415]: Removed session 22. Nov 12 18:02:06.945569 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 47938 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:02:06.946515 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:02:06.950287 systemd-logind[1415]: New session 23 of user core. Nov 12 18:02:06.964741 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 12 18:02:08.476156 kubelet[2457]: E1112 18:02:08.476093 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:08.818649 containerd[1430]: time="2024-11-12T18:02:08.818474525Z" level=info msg="StopContainer for \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\" with timeout 30 (s)" Nov 12 18:02:08.819244 containerd[1430]: time="2024-11-12T18:02:08.819213524Z" level=info msg="Stop container \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\" with signal terminated" Nov 12 18:02:08.838857 systemd[1]: cri-containerd-58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2.scope: Deactivated successfully. Nov 12 18:02:08.858348 containerd[1430]: time="2024-11-12T18:02:08.858304958Z" level=info msg="StopContainer for \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\" with timeout 2 (s)" Nov 12 18:02:08.859037 containerd[1430]: time="2024-11-12T18:02:08.858524398Z" level=info msg="Stop container \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\" with signal terminated" Nov 12 18:02:08.863688 containerd[1430]: time="2024-11-12T18:02:08.863511547Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 18:02:08.865962 systemd-networkd[1376]: lxc_health: Link DOWN Nov 12 18:02:08.865967 systemd-networkd[1376]: lxc_health: Lost carrier Nov 12 18:02:08.873667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2-rootfs.mount: Deactivated successfully. Nov 12 18:02:08.881975 containerd[1430]: time="2024-11-12T18:02:08.881917626Z" level=info msg="shim disconnected" id=58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2 namespace=k8s.io Nov 12 18:02:08.881975 containerd[1430]: time="2024-11-12T18:02:08.881973026Z" level=warning msg="cleaning up after shim disconnected" id=58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2 namespace=k8s.io Nov 12 18:02:08.882131 containerd[1430]: time="2024-11-12T18:02:08.881984386Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:02:08.886395 systemd[1]: cri-containerd-74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f.scope: Deactivated successfully. Nov 12 18:02:08.886657 systemd[1]: cri-containerd-74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f.scope: Consumed 6.397s CPU time. Nov 12 18:02:08.903038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f-rootfs.mount: Deactivated successfully. 
Nov 12 18:02:08.909271 containerd[1430]: time="2024-11-12T18:02:08.909208047Z" level=info msg="shim disconnected" id=74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f namespace=k8s.io Nov 12 18:02:08.909271 containerd[1430]: time="2024-11-12T18:02:08.909260926Z" level=warning msg="cleaning up after shim disconnected" id=74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f namespace=k8s.io Nov 12 18:02:08.909271 containerd[1430]: time="2024-11-12T18:02:08.909277446Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:02:08.921342 containerd[1430]: time="2024-11-12T18:02:08.921298620Z" level=info msg="StopContainer for \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\" returns successfully" Nov 12 18:02:08.922065 containerd[1430]: time="2024-11-12T18:02:08.921984059Z" level=info msg="StopContainer for \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\" returns successfully" Nov 12 18:02:08.930355 containerd[1430]: time="2024-11-12T18:02:08.930324120Z" level=info msg="StopPodSandbox for \"103cc33ddbb9d03b93a2f76b1db5085eaddaefe7d0aded3ffac9162160e5469c\"" Nov 12 18:02:08.930437 containerd[1430]: time="2024-11-12T18:02:08.930369920Z" level=info msg="Container to stop \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 18:02:08.930437 containerd[1430]: time="2024-11-12T18:02:08.930396280Z" level=info msg="StopPodSandbox for \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\"" Nov 12 18:02:08.930506 containerd[1430]: time="2024-11-12T18:02:08.930431160Z" level=info msg="Container to stop \"d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 18:02:08.930506 containerd[1430]: time="2024-11-12T18:02:08.930456520Z" level=info msg="Container to stop \"2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 18:02:08.930506 containerd[1430]: time="2024-11-12T18:02:08.930474600Z" level=info msg="Container to stop \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 18:02:08.930506 containerd[1430]: time="2024-11-12T18:02:08.930485440Z" level=info msg="Container to stop \"bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 18:02:08.930506 containerd[1430]: time="2024-11-12T18:02:08.930494920Z" level=info msg="Container to stop \"d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 18:02:08.931908 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-103cc33ddbb9d03b93a2f76b1db5085eaddaefe7d0aded3ffac9162160e5469c-shm.mount: Deactivated successfully. Nov 12 18:02:08.932017 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5-shm.mount: Deactivated successfully. Nov 12 18:02:08.935917 systemd[1]: cri-containerd-a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5.scope: Deactivated successfully. Nov 12 18:02:08.939886 systemd[1]: cri-containerd-103cc33ddbb9d03b93a2f76b1db5085eaddaefe7d0aded3ffac9162160e5469c.scope: Deactivated successfully. 
Nov 12 18:02:08.958455 containerd[1430]: time="2024-11-12T18:02:08.958390099Z" level=info msg="shim disconnected" id=a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5 namespace=k8s.io Nov 12 18:02:08.958455 containerd[1430]: time="2024-11-12T18:02:08.958445099Z" level=warning msg="cleaning up after shim disconnected" id=a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5 namespace=k8s.io Nov 12 18:02:08.958455 containerd[1430]: time="2024-11-12T18:02:08.958453619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:02:08.958779 containerd[1430]: time="2024-11-12T18:02:08.958623498Z" level=info msg="shim disconnected" id=103cc33ddbb9d03b93a2f76b1db5085eaddaefe7d0aded3ffac9162160e5469c namespace=k8s.io Nov 12 18:02:08.958779 containerd[1430]: time="2024-11-12T18:02:08.958773978Z" level=warning msg="cleaning up after shim disconnected" id=103cc33ddbb9d03b93a2f76b1db5085eaddaefe7d0aded3ffac9162160e5469c namespace=k8s.io Nov 12 18:02:08.958902 containerd[1430]: time="2024-11-12T18:02:08.958875058Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:02:08.974957 containerd[1430]: time="2024-11-12T18:02:08.974910543Z" level=info msg="TearDown network for sandbox \"103cc33ddbb9d03b93a2f76b1db5085eaddaefe7d0aded3ffac9162160e5469c\" successfully" Nov 12 18:02:08.974957 containerd[1430]: time="2024-11-12T18:02:08.974944423Z" level=info msg="StopPodSandbox for \"103cc33ddbb9d03b93a2f76b1db5085eaddaefe7d0aded3ffac9162160e5469c\" returns successfully" Nov 12 18:02:08.982944 containerd[1430]: time="2024-11-12T18:02:08.982905965Z" level=info msg="TearDown network for sandbox \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\" successfully" Nov 12 18:02:08.982944 containerd[1430]: time="2024-11-12T18:02:08.982938605Z" level=info msg="StopPodSandbox for \"a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5\" returns successfully" Nov 12 18:02:09.065887 kubelet[2457]: I1112 18:02:09.065838 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-cilium-run\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.065887 kubelet[2457]: I1112 18:02:09.065879 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-hostproc\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.066116 kubelet[2457]: I1112 18:02:09.065904 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/26b7f331-5976-4c89-b82e-7aa2d01af351-clustermesh-secrets\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.066116 kubelet[2457]: I1112 18:02:09.065925 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26b7f331-5976-4c89-b82e-7aa2d01af351-cilium-config-path\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.066116 kubelet[2457]: I1112 18:02:09.065942 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86kt7\" (UniqueName: 
\"kubernetes.io/projected/ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d-kube-api-access-86kt7\") pod \"ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d\" (UID: \"ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d\") " Nov 12 18:02:09.066116 kubelet[2457]: I1112 18:02:09.065957 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-cilium-cgroup\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.066116 kubelet[2457]: I1112 18:02:09.065970 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-cni-path\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.066116 kubelet[2457]: I1112 18:02:09.065985 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-host-proc-sys-kernel\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.066252 kubelet[2457]: I1112 18:02:09.066002 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-lib-modules\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.066252 kubelet[2457]: I1112 18:02:09.066015 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-host-proc-sys-net\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.066252 kubelet[2457]: I1112 18:02:09.066029 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-xtables-lock\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.066252 kubelet[2457]: I1112 18:02:09.066045 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d-cilium-config-path\") pod \"ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d\" (UID: \"ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d\") " Nov 12 18:02:09.066252 kubelet[2457]: I1112 18:02:09.066062 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl4bp\" (UniqueName: \"kubernetes.io/projected/26b7f331-5976-4c89-b82e-7aa2d01af351-kube-api-access-tl4bp\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.066252 kubelet[2457]: I1112 18:02:09.066077 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-etc-cni-netd\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.066382 kubelet[2457]: I1112 18:02:09.066094 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/26b7f331-5976-4c89-b82e-7aa2d01af351-hubble-tls\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.066382 kubelet[2457]: I1112 18:02:09.066108 2457 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-bpf-maps\") pod \"26b7f331-5976-4c89-b82e-7aa2d01af351\" (UID: \"26b7f331-5976-4c89-b82e-7aa2d01af351\") " Nov 12 18:02:09.070603 kubelet[2457]: I1112 18:02:09.070035 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-hostproc" (OuterVolumeSpecName: "hostproc") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 18:02:09.070603 kubelet[2457]: I1112 18:02:09.070099 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 18:02:09.070603 kubelet[2457]: I1112 18:02:09.070524 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 18:02:09.071569 kubelet[2457]: I1112 18:02:09.070788 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 18:02:09.071569 kubelet[2457]: I1112 18:02:09.070833 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 18:02:09.071569 kubelet[2457]: I1112 18:02:09.070850 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 18:02:09.071664 kubelet[2457]: I1112 18:02:09.071459 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 18:02:09.072276 kubelet[2457]: I1112 18:02:09.072234 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b7f331-5976-4c89-b82e-7aa2d01af351-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 18:02:09.072849 kubelet[2457]: I1112 18:02:09.072812 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d" (UID: "ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 18:02:09.075739 kubelet[2457]: I1112 18:02:09.075575 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b7f331-5976-4c89-b82e-7aa2d01af351-kube-api-access-tl4bp" (OuterVolumeSpecName: "kube-api-access-tl4bp") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). InnerVolumeSpecName "kube-api-access-tl4bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 18:02:09.075739 kubelet[2457]: I1112 18:02:09.075631 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-cni-path" (OuterVolumeSpecName: "cni-path") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 18:02:09.075739 kubelet[2457]: I1112 18:02:09.075648 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 18:02:09.075739 kubelet[2457]: I1112 18:02:09.075663 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 18:02:09.075875 kubelet[2457]: I1112 18:02:09.075686 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d-kube-api-access-86kt7" (OuterVolumeSpecName: "kube-api-access-86kt7") pod "ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d" (UID: "ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d"). InnerVolumeSpecName "kube-api-access-86kt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 18:02:09.076163 kubelet[2457]: I1112 18:02:09.076141 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b7f331-5976-4c89-b82e-7aa2d01af351-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). 
InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 18:02:09.078296 kubelet[2457]: I1112 18:02:09.078257 2457 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b7f331-5976-4c89-b82e-7aa2d01af351-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "26b7f331-5976-4c89-b82e-7aa2d01af351" (UID: "26b7f331-5976-4c89-b82e-7aa2d01af351"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 18:02:09.167084 kubelet[2457]: I1112 18:02:09.167034 2457 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167084 kubelet[2457]: I1112 18:02:09.167072 2457 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167084 kubelet[2457]: I1112 18:02:09.167081 2457 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167084 kubelet[2457]: I1112 18:02:09.167089 2457 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167084 kubelet[2457]: I1112 18:02:09.167098 2457 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167319 kubelet[2457]: I1112 18:02:09.167107 2457 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tl4bp\" (UniqueName: \"kubernetes.io/projected/26b7f331-5976-4c89-b82e-7aa2d01af351-kube-api-access-tl4bp\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167319 kubelet[2457]: I1112 18:02:09.167117 2457 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167319 kubelet[2457]: I1112 18:02:09.167124 2457 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/26b7f331-5976-4c89-b82e-7aa2d01af351-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167319 kubelet[2457]: I1112 18:02:09.167132 2457 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167319 kubelet[2457]: I1112 18:02:09.167139 2457 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167319 kubelet[2457]: I1112 18:02:09.167146 2457 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167319 kubelet[2457]: I1112 
18:02:09.167153 2457 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/26b7f331-5976-4c89-b82e-7aa2d01af351-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167319 kubelet[2457]: I1112 18:02:09.167161 2457 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26b7f331-5976-4c89-b82e-7aa2d01af351-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167486 kubelet[2457]: I1112 18:02:09.167169 2457 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-86kt7\" (UniqueName: \"kubernetes.io/projected/ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d-kube-api-access-86kt7\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167486 kubelet[2457]: I1112 18:02:09.167176 2457 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.167486 kubelet[2457]: I1112 18:02:09.167183 2457 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/26b7f331-5976-4c89-b82e-7aa2d01af351-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 12 18:02:09.711301 kubelet[2457]: I1112 18:02:09.711228 2457 scope.go:117] "RemoveContainer" containerID="58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2" Nov 12 18:02:09.713202 containerd[1430]: time="2024-11-12T18:02:09.713168078Z" level=info msg="RemoveContainer for \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\"" Nov 12 18:02:09.718384 systemd[1]: Removed slice kubepods-besteffort-podab1027f2_88bb_47bc_a6cd_cb5acd71fb8d.slice - libcontainer container kubepods-besteffort-podab1027f2_88bb_47bc_a6cd_cb5acd71fb8d.slice. Nov 12 18:02:09.720861 containerd[1430]: time="2024-11-12T18:02:09.720825182Z" level=info msg="RemoveContainer for \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\" returns successfully" Nov 12 18:02:09.721592 kubelet[2457]: I1112 18:02:09.721142 2457 scope.go:117] "RemoveContainer" containerID="58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2" Nov 12 18:02:09.721671 containerd[1430]: time="2024-11-12T18:02:09.721335501Z" level=error msg="ContainerStatus for \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\": not found" Nov 12 18:02:09.722448 systemd[1]: Removed slice kubepods-burstable-pod26b7f331_5976_4c89_b82e_7aa2d01af351.slice - libcontainer container kubepods-burstable-pod26b7f331_5976_4c89_b82e_7aa2d01af351.slice. Nov 12 18:02:09.722860 systemd[1]: kubepods-burstable-pod26b7f331_5976_4c89_b82e_7aa2d01af351.slice: Consumed 6.525s CPU time. 
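Each "Volume detached ... DevicePath \"\"" entry above marks one of the removed pod's volumes as fully unmounted; on disk this corresponds to the pod's directory under /var/lib/kubelet emptying out, the same path the later "Cleaned up orphaned pod volumes dir" message refers to. Below is a minimal, standalone sketch (a hypothetical helper, not kubelet code) that lists whatever is still present there for the removed Cilium pod UID from the log.

```go
// Sketch only: list the remaining volume plugin directories for one pod
// under the kubelet state directory.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	podUID := "26b7f331-5976-4c89-b82e-7aa2d01af351" // UID from the log above
	dir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")

	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		fmt.Println("volumes dir already removed:", dir)
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	for _, plugin := range entries {
		// Each entry is a volume plugin dir, e.g. kubernetes.io~projected.
		vols, err := os.ReadDir(filepath.Join(dir, plugin.Name()))
		if err != nil {
			log.Fatal(err)
		}
		for _, v := range vols {
			fmt.Printf("%s/%s\n", plugin.Name(), v.Name())
		}
	}
}
```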
Nov 12 18:02:09.728464 kubelet[2457]: E1112 18:02:09.728425 2457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\": not found" containerID="58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2" Nov 12 18:02:09.728538 kubelet[2457]: I1112 18:02:09.728456 2457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2"} err="failed to get container status \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"58a07278e6d31db45789ba6da37839e74a1e1ff5368d0dd2d45e3b7ef90935b2\": not found" Nov 12 18:02:09.728538 kubelet[2457]: I1112 18:02:09.728528 2457 scope.go:117] "RemoveContainer" containerID="74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f" Nov 12 18:02:09.729788 containerd[1430]: time="2024-11-12T18:02:09.729752043Z" level=info msg="RemoveContainer for \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\"" Nov 12 18:02:09.732672 containerd[1430]: time="2024-11-12T18:02:09.732632316Z" level=info msg="RemoveContainer for \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\" returns successfully" Nov 12 18:02:09.732845 kubelet[2457]: I1112 18:02:09.732814 2457 scope.go:117] "RemoveContainer" containerID="2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba" Nov 12 18:02:09.734455 containerd[1430]: time="2024-11-12T18:02:09.734372913Z" level=info msg="RemoveContainer for \"2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba\"" Nov 12 18:02:09.736783 containerd[1430]: time="2024-11-12T18:02:09.736748428Z" level=info msg="RemoveContainer for \"2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba\" returns successfully" Nov 12 18:02:09.736912 kubelet[2457]: I1112 18:02:09.736891 2457 scope.go:117] "RemoveContainer" containerID="d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a" Nov 12 18:02:09.738141 containerd[1430]: time="2024-11-12T18:02:09.738090745Z" level=info msg="RemoveContainer for \"d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a\"" Nov 12 18:02:09.740575 containerd[1430]: time="2024-11-12T18:02:09.740415860Z" level=info msg="RemoveContainer for \"d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a\" returns successfully" Nov 12 18:02:09.741642 kubelet[2457]: I1112 18:02:09.741570 2457 scope.go:117] "RemoveContainer" containerID="bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d" Nov 12 18:02:09.742710 containerd[1430]: time="2024-11-12T18:02:09.742684335Z" level=info msg="RemoveContainer for \"bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d\"" Nov 12 18:02:09.745930 containerd[1430]: time="2024-11-12T18:02:09.745892768Z" level=info msg="RemoveContainer for \"bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d\" returns successfully" Nov 12 18:02:09.746189 kubelet[2457]: I1112 18:02:09.746088 2457 scope.go:117] "RemoveContainer" containerID="d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1" Nov 12 18:02:09.747393 containerd[1430]: time="2024-11-12T18:02:09.747371245Z" level=info msg="RemoveContainer for \"d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1\"" Nov 12 18:02:09.749431 containerd[1430]: 
time="2024-11-12T18:02:09.749400241Z" level=info msg="RemoveContainer for \"d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1\" returns successfully" Nov 12 18:02:09.749606 kubelet[2457]: I1112 18:02:09.749577 2457 scope.go:117] "RemoveContainer" containerID="74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f" Nov 12 18:02:09.749794 containerd[1430]: time="2024-11-12T18:02:09.749760640Z" level=error msg="ContainerStatus for \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\": not found" Nov 12 18:02:09.749953 kubelet[2457]: E1112 18:02:09.749911 2457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\": not found" containerID="74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f" Nov 12 18:02:09.750098 kubelet[2457]: I1112 18:02:09.750007 2457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f"} err="failed to get container status \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\": rpc error: code = NotFound desc = an error occurred when try to find container \"74b5194e0e78a5eecdc30328ae72e386d668f8ae7e891ccae1986e37afa9583f\": not found" Nov 12 18:02:09.750098 kubelet[2457]: I1112 18:02:09.750032 2457 scope.go:117] "RemoveContainer" containerID="2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba" Nov 12 18:02:09.750236 containerd[1430]: time="2024-11-12T18:02:09.750191279Z" level=error msg="ContainerStatus for \"2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba\": not found" Nov 12 18:02:09.750449 kubelet[2457]: E1112 18:02:09.750343 2457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba\": not found" containerID="2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba" Nov 12 18:02:09.750449 kubelet[2457]: I1112 18:02:09.750365 2457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba"} err="failed to get container status \"2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e8d02b40e1513cf8900643cc89de3df9ffa15f45ddd7566ca6465f1e9d1d7ba\": not found" Nov 12 18:02:09.750449 kubelet[2457]: I1112 18:02:09.750387 2457 scope.go:117] "RemoveContainer" containerID="d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a" Nov 12 18:02:09.750557 containerd[1430]: time="2024-11-12T18:02:09.750525078Z" level=error msg="ContainerStatus for \"d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a\": not found" Nov 12 18:02:09.750793 kubelet[2457]: E1112 
18:02:09.750657 2457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a\": not found" containerID="d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a" Nov 12 18:02:09.750793 kubelet[2457]: I1112 18:02:09.750679 2457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a"} err="failed to get container status \"d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d217e1450c8c4a7e8a001b2730363c34e240ac5591ce0a8e4ce09aed9c83a94a\": not found" Nov 12 18:02:09.750793 kubelet[2457]: I1112 18:02:09.750731 2457 scope.go:117] "RemoveContainer" containerID="bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d" Nov 12 18:02:09.750899 containerd[1430]: time="2024-11-12T18:02:09.750870597Z" level=error msg="ContainerStatus for \"bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d\": not found" Nov 12 18:02:09.751096 kubelet[2457]: E1112 18:02:09.750985 2457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d\": not found" containerID="bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d" Nov 12 18:02:09.751096 kubelet[2457]: I1112 18:02:09.751005 2457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d"} err="failed to get container status \"bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdefa8e79bf773020c6b47aa8d9d2a4d7e50406420df28f7b70b32723248b32d\": not found" Nov 12 18:02:09.751096 kubelet[2457]: I1112 18:02:09.751019 2457 scope.go:117] "RemoveContainer" containerID="d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1" Nov 12 18:02:09.751183 containerd[1430]: time="2024-11-12T18:02:09.751148717Z" level=error msg="ContainerStatus for \"d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1\": not found" Nov 12 18:02:09.751378 kubelet[2457]: E1112 18:02:09.751290 2457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1\": not found" containerID="d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1" Nov 12 18:02:09.751378 kubelet[2457]: I1112 18:02:09.751337 2457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1"} err="failed to get container status \"d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"d89a28d2b46cd43604b73d050c816bc55a290d913829a2396aa72340946b5de1\": not found" Nov 12 18:02:09.832028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-103cc33ddbb9d03b93a2f76b1db5085eaddaefe7d0aded3ffac9162160e5469c-rootfs.mount: Deactivated successfully. Nov 12 18:02:09.832135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a649e177c5cb01295bfccd10cfc569170cdd2ad1ed67d34f09142101c16d0ac5-rootfs.mount: Deactivated successfully. Nov 12 18:02:09.832188 systemd[1]: var-lib-kubelet-pods-ab1027f2\x2d88bb\x2d47bc\x2da6cd\x2dcb5acd71fb8d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d86kt7.mount: Deactivated successfully. Nov 12 18:02:09.832239 systemd[1]: var-lib-kubelet-pods-26b7f331\x2d5976\x2d4c89\x2db82e\x2d7aa2d01af351-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtl4bp.mount: Deactivated successfully. Nov 12 18:02:09.832308 systemd[1]: var-lib-kubelet-pods-26b7f331\x2d5976\x2d4c89\x2db82e\x2d7aa2d01af351-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 12 18:02:09.832366 systemd[1]: var-lib-kubelet-pods-26b7f331\x2d5976\x2d4c89\x2db82e\x2d7aa2d01af351-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 12 18:02:10.478431 kubelet[2457]: I1112 18:02:10.477660 2457 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b7f331-5976-4c89-b82e-7aa2d01af351" path="/var/lib/kubelet/pods/26b7f331-5976-4c89-b82e-7aa2d01af351/volumes" Nov 12 18:02:10.478431 kubelet[2457]: I1112 18:02:10.478177 2457 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d" path="/var/lib/kubelet/pods/ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d/volumes" Nov 12 18:02:10.534033 kubelet[2457]: E1112 18:02:10.533988 2457 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 18:02:10.785675 sshd[4109]: pam_unix(sshd:session): session closed for user core Nov 12 18:02:10.791967 systemd[1]: sshd@22-10.0.0.125:22-10.0.0.1:47938.service: Deactivated successfully. Nov 12 18:02:10.794373 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 18:02:10.794596 systemd[1]: session-23.scope: Consumed 1.192s CPU time. Nov 12 18:02:10.795966 systemd-logind[1415]: Session 23 logged out. Waiting for processes to exit. Nov 12 18:02:10.797408 systemd-logind[1415]: Removed session 23. Nov 12 18:02:10.806880 systemd[1]: Started sshd@23-10.0.0.125:22-10.0.0.1:47948.service - OpenSSH per-connection server daemon (10.0.0.1:47948). Nov 12 18:02:10.842371 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 47948 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:02:10.843728 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:02:10.847904 systemd-logind[1415]: New session 24 of user core. Nov 12 18:02:10.857731 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 18:02:12.067007 sshd[4273]: pam_unix(sshd:session): session closed for user core Nov 12 18:02:12.079534 systemd[1]: sshd@23-10.0.0.125:22-10.0.0.1:47948.service: Deactivated successfully. Nov 12 18:02:12.083432 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 18:02:12.084611 systemd[1]: session-24.scope: Consumed 1.131s CPU time. Nov 12 18:02:12.087614 systemd-logind[1415]: Session 24 logged out. Waiting for processes to exit. 
Nov 12 18:02:12.095852 systemd[1]: Started sshd@24-10.0.0.125:22-10.0.0.1:47952.service - OpenSSH per-connection server daemon (10.0.0.1:47952). Nov 12 18:02:12.096735 kubelet[2457]: E1112 18:02:12.096043 2457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26b7f331-5976-4c89-b82e-7aa2d01af351" containerName="mount-bpf-fs" Nov 12 18:02:12.096735 kubelet[2457]: E1112 18:02:12.096064 2457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26b7f331-5976-4c89-b82e-7aa2d01af351" containerName="cilium-agent" Nov 12 18:02:12.096735 kubelet[2457]: E1112 18:02:12.096072 2457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26b7f331-5976-4c89-b82e-7aa2d01af351" containerName="mount-cgroup" Nov 12 18:02:12.096735 kubelet[2457]: E1112 18:02:12.096078 2457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26b7f331-5976-4c89-b82e-7aa2d01af351" containerName="apply-sysctl-overwrites" Nov 12 18:02:12.096735 kubelet[2457]: E1112 18:02:12.096084 2457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26b7f331-5976-4c89-b82e-7aa2d01af351" containerName="clean-cilium-state" Nov 12 18:02:12.096735 kubelet[2457]: E1112 18:02:12.096090 2457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d" containerName="cilium-operator" Nov 12 18:02:12.096735 kubelet[2457]: I1112 18:02:12.096112 2457 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1027f2-88bb-47bc-a6cd-cb5acd71fb8d" containerName="cilium-operator" Nov 12 18:02:12.096735 kubelet[2457]: I1112 18:02:12.096120 2457 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b7f331-5976-4c89-b82e-7aa2d01af351" containerName="cilium-agent" Nov 12 18:02:12.098026 systemd-logind[1415]: Removed session 24. Nov 12 18:02:12.108403 systemd[1]: Created slice kubepods-burstable-podff8fd9a5_9179_410c_8555_83ab31a5a665.slice - libcontainer container kubepods-burstable-podff8fd9a5_9179_410c_8555_83ab31a5a665.slice. Nov 12 18:02:12.140381 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 47952 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:02:12.141853 sshd[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:02:12.147357 systemd-logind[1415]: New session 25 of user core. Nov 12 18:02:12.152718 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 12 18:02:12.184346 kubelet[2457]: I1112 18:02:12.184310 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff8fd9a5-9179-410c-8555-83ab31a5a665-cilium-run\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184346 kubelet[2457]: I1112 18:02:12.184350 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff8fd9a5-9179-410c-8555-83ab31a5a665-etc-cni-netd\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184463 kubelet[2457]: I1112 18:02:12.184371 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff8fd9a5-9179-410c-8555-83ab31a5a665-host-proc-sys-kernel\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184463 kubelet[2457]: I1112 18:02:12.184388 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff8fd9a5-9179-410c-8555-83ab31a5a665-hostproc\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184463 kubelet[2457]: I1112 18:02:12.184403 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff8fd9a5-9179-410c-8555-83ab31a5a665-clustermesh-secrets\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184463 kubelet[2457]: I1112 18:02:12.184418 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff8fd9a5-9179-410c-8555-83ab31a5a665-cilium-config-path\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184463 kubelet[2457]: I1112 18:02:12.184433 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff8fd9a5-9179-410c-8555-83ab31a5a665-host-proc-sys-net\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184609 kubelet[2457]: I1112 18:02:12.184448 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ff8fd9a5-9179-410c-8555-83ab31a5a665-cilium-ipsec-secrets\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184609 kubelet[2457]: I1112 18:02:12.184463 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff8fd9a5-9179-410c-8555-83ab31a5a665-cilium-cgroup\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184609 kubelet[2457]: I1112 18:02:12.184478 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff8fd9a5-9179-410c-8555-83ab31a5a665-cni-path\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184609 kubelet[2457]: I1112 18:02:12.184491 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff8fd9a5-9179-410c-8555-83ab31a5a665-lib-modules\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184609 kubelet[2457]: I1112 18:02:12.184505 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff8fd9a5-9179-410c-8555-83ab31a5a665-hubble-tls\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184609 kubelet[2457]: I1112 18:02:12.184573 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff8fd9a5-9179-410c-8555-83ab31a5a665-xtables-lock\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184734 kubelet[2457]: I1112 18:02:12.184606 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff8fd9a5-9179-410c-8555-83ab31a5a665-bpf-maps\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.184734 kubelet[2457]: I1112 18:02:12.184623 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r255z\" (UniqueName: \"kubernetes.io/projected/ff8fd9a5-9179-410c-8555-83ab31a5a665-kube-api-access-r255z\") pod \"cilium-b4df8\" (UID: \"ff8fd9a5-9179-410c-8555-83ab31a5a665\") " pod="kube-system/cilium-b4df8" Nov 12 18:02:12.203757 sshd[4286]: pam_unix(sshd:session): session closed for user core Nov 12 18:02:12.211323 systemd[1]: sshd@24-10.0.0.125:22-10.0.0.1:47952.service: Deactivated successfully. Nov 12 18:02:12.213400 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 18:02:12.215066 systemd-logind[1415]: Session 25 logged out. Waiting for processes to exit. Nov 12 18:02:12.222797 systemd[1]: Started sshd@25-10.0.0.125:22-10.0.0.1:47956.service - OpenSSH per-connection server daemon (10.0.0.1:47956). Nov 12 18:02:12.224358 systemd-logind[1415]: Removed session 25. Nov 12 18:02:12.252768 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 47956 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:02:12.253959 sshd[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:02:12.257670 systemd-logind[1415]: New session 26 of user core. Nov 12 18:02:12.269756 systemd[1]: Started session-26.scope - Session 26 of User core. 
Nov 12 18:02:12.410982 kubelet[2457]: E1112 18:02:12.410925 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:12.411483 containerd[1430]: time="2024-11-12T18:02:12.411438597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b4df8,Uid:ff8fd9a5-9179-410c-8555-83ab31a5a665,Namespace:kube-system,Attempt:0,}" Nov 12 18:02:12.431509 containerd[1430]: time="2024-11-12T18:02:12.431283517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:02:12.431509 containerd[1430]: time="2024-11-12T18:02:12.431339837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:02:12.431509 containerd[1430]: time="2024-11-12T18:02:12.431363677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:02:12.431509 containerd[1430]: time="2024-11-12T18:02:12.431455316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:02:12.450736 systemd[1]: Started cri-containerd-33baed846b3b1c6ea77fe48b2ab1d2002645735fd5b1065d2bc175018df539a5.scope - libcontainer container 33baed846b3b1c6ea77fe48b2ab1d2002645735fd5b1065d2bc175018df539a5. Nov 12 18:02:12.470590 containerd[1430]: time="2024-11-12T18:02:12.470481158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b4df8,Uid:ff8fd9a5-9179-410c-8555-83ab31a5a665,Namespace:kube-system,Attempt:0,} returns sandbox id \"33baed846b3b1c6ea77fe48b2ab1d2002645735fd5b1065d2bc175018df539a5\"" Nov 12 18:02:12.471215 kubelet[2457]: E1112 18:02:12.471196 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:12.480336 containerd[1430]: time="2024-11-12T18:02:12.480266578Z" level=info msg="CreateContainer within sandbox \"33baed846b3b1c6ea77fe48b2ab1d2002645735fd5b1065d2bc175018df539a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 18:02:12.489312 containerd[1430]: time="2024-11-12T18:02:12.489257320Z" level=info msg="CreateContainer within sandbox \"33baed846b3b1c6ea77fe48b2ab1d2002645735fd5b1065d2bc175018df539a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"63d4e844abf2b67582328b17a5157b0c4838b812b0b38b31dad7acc3dbd7543c\"" Nov 12 18:02:12.490568 containerd[1430]: time="2024-11-12T18:02:12.490520237Z" level=info msg="StartContainer for \"63d4e844abf2b67582328b17a5157b0c4838b812b0b38b31dad7acc3dbd7543c\"" Nov 12 18:02:12.517728 systemd[1]: Started cri-containerd-63d4e844abf2b67582328b17a5157b0c4838b812b0b38b31dad7acc3dbd7543c.scope - libcontainer container 63d4e844abf2b67582328b17a5157b0c4838b812b0b38b31dad7acc3dbd7543c. Nov 12 18:02:12.538664 containerd[1430]: time="2024-11-12T18:02:12.538623180Z" level=info msg="StartContainer for \"63d4e844abf2b67582328b17a5157b0c4838b812b0b38b31dad7acc3dbd7543c\" returns successfully" Nov 12 18:02:12.555408 systemd[1]: cri-containerd-63d4e844abf2b67582328b17a5157b0c4838b812b0b38b31dad7acc3dbd7543c.scope: Deactivated successfully. 
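The RunPodSandbox, CreateContainer and StartContainer entries above are the new sandbox and its first init container (mount-cgroup) coming up. For orientation, the sketch below shows the generic containerd client calls behind a create/start pair; the CRI plugin's real path additionally joins the pod sandbox, mounts and cgroups seen in the log. The image reference and namespace here are hypothetical placeholders, since the log does not name them.

```go
// Sketch only: create and start a container with the containerd Go client,
// then wait for it to exit. Image and namespace are illustrative placeholders.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "example")

	image, err := client.Pull(ctx, "docker.io/library/alpine:3.20", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	cont, err := client.NewContainer(ctx, "example-init",
		containerd.WithNewSnapshot("example-init-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image),
			oci.WithProcessArgs("echo", "hello")),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cont.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := cont.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	status := <-exitCh
	log.Println("exit code:", status.ExitCode())
}
```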
Nov 12 18:02:12.581025 kubelet[2457]: I1112 18:02:12.580963 2457 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-12T18:02:12Z","lastTransitionTime":"2024-11-12T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 12 18:02:12.589099 containerd[1430]: time="2024-11-12T18:02:12.589045199Z" level=info msg="shim disconnected" id=63d4e844abf2b67582328b17a5157b0c4838b812b0b38b31dad7acc3dbd7543c namespace=k8s.io Nov 12 18:02:12.589099 containerd[1430]: time="2024-11-12T18:02:12.589095039Z" level=warning msg="cleaning up after shim disconnected" id=63d4e844abf2b67582328b17a5157b0c4838b812b0b38b31dad7acc3dbd7543c namespace=k8s.io Nov 12 18:02:12.589099 containerd[1430]: time="2024-11-12T18:02:12.589104479Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:02:12.723706 kubelet[2457]: E1112 18:02:12.723410 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:12.727142 containerd[1430]: time="2024-11-12T18:02:12.727085121Z" level=info msg="CreateContainer within sandbox \"33baed846b3b1c6ea77fe48b2ab1d2002645735fd5b1065d2bc175018df539a5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 18:02:12.739045 containerd[1430]: time="2024-11-12T18:02:12.739000097Z" level=info msg="CreateContainer within sandbox \"33baed846b3b1c6ea77fe48b2ab1d2002645735fd5b1065d2bc175018df539a5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8fb62963ece46138eccd8a71ca3742fdba2155bb08de4a35a7bfc655e560a45c\"" Nov 12 18:02:12.741791 containerd[1430]: time="2024-11-12T18:02:12.740261014Z" level=info msg="StartContainer for \"8fb62963ece46138eccd8a71ca3742fdba2155bb08de4a35a7bfc655e560a45c\"" Nov 12 18:02:12.764727 systemd[1]: Started cri-containerd-8fb62963ece46138eccd8a71ca3742fdba2155bb08de4a35a7bfc655e560a45c.scope - libcontainer container 8fb62963ece46138eccd8a71ca3742fdba2155bb08de4a35a7bfc655e560a45c. Nov 12 18:02:12.792627 containerd[1430]: time="2024-11-12T18:02:12.792578069Z" level=info msg="StartContainer for \"8fb62963ece46138eccd8a71ca3742fdba2155bb08de4a35a7bfc655e560a45c\" returns successfully" Nov 12 18:02:12.796393 systemd[1]: cri-containerd-8fb62963ece46138eccd8a71ca3742fdba2155bb08de4a35a7bfc655e560a45c.scope: Deactivated successfully. 
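The apply-sysctl-overwrites init container started above adjusts kernel parameters before the agent runs. The log does not show which keys it sets, so the sketch below only illustrates the mechanism such a step reduces to, writing a value under /proc/sys, with a purely hypothetical key/value pair.

```go
// Sketch only, with a hypothetical sysctl key: applying an override means
// writing the value to the matching file under /proc/sys.
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

// setSysctl writes value to the /proc/sys file backing the dotted key.
func setSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Illustrative example only; not a key taken from this log.
	if err := setSysctl("net.ipv4.ip_forward", "1"); err != nil {
		log.Fatal(err)
	}
	log.Println("sysctl applied")
}
```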
Nov 12 18:02:12.814891 containerd[1430]: time="2024-11-12T18:02:12.814829264Z" level=info msg="shim disconnected" id=8fb62963ece46138eccd8a71ca3742fdba2155bb08de4a35a7bfc655e560a45c namespace=k8s.io Nov 12 18:02:12.814891 containerd[1430]: time="2024-11-12T18:02:12.814883704Z" level=warning msg="cleaning up after shim disconnected" id=8fb62963ece46138eccd8a71ca3742fdba2155bb08de4a35a7bfc655e560a45c namespace=k8s.io Nov 12 18:02:12.814891 containerd[1430]: time="2024-11-12T18:02:12.814893744Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:02:13.726964 kubelet[2457]: E1112 18:02:13.726798 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:13.729618 containerd[1430]: time="2024-11-12T18:02:13.729511530Z" level=info msg="CreateContainer within sandbox \"33baed846b3b1c6ea77fe48b2ab1d2002645735fd5b1065d2bc175018df539a5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 18:02:13.743217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1111045801.mount: Deactivated successfully. Nov 12 18:02:13.744093 containerd[1430]: time="2024-11-12T18:02:13.743948302Z" level=info msg="CreateContainer within sandbox \"33baed846b3b1c6ea77fe48b2ab1d2002645735fd5b1065d2bc175018df539a5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1c60dc8917eeab10f84de28638381abc2594402554e07c319fdffaa4d70aeef4\"" Nov 12 18:02:13.744690 containerd[1430]: time="2024-11-12T18:02:13.744660220Z" level=info msg="StartContainer for \"1c60dc8917eeab10f84de28638381abc2594402554e07c319fdffaa4d70aeef4\"" Nov 12 18:02:13.775718 systemd[1]: Started cri-containerd-1c60dc8917eeab10f84de28638381abc2594402554e07c319fdffaa4d70aeef4.scope - libcontainer container 1c60dc8917eeab10f84de28638381abc2594402554e07c319fdffaa4d70aeef4. Nov 12 18:02:13.799002 containerd[1430]: time="2024-11-12T18:02:13.798829233Z" level=info msg="StartContainer for \"1c60dc8917eeab10f84de28638381abc2594402554e07c319fdffaa4d70aeef4\" returns successfully" Nov 12 18:02:13.802063 systemd[1]: cri-containerd-1c60dc8917eeab10f84de28638381abc2594402554e07c319fdffaa4d70aeef4.scope: Deactivated successfully. Nov 12 18:02:13.823259 containerd[1430]: time="2024-11-12T18:02:13.823184865Z" level=info msg="shim disconnected" id=1c60dc8917eeab10f84de28638381abc2594402554e07c319fdffaa4d70aeef4 namespace=k8s.io Nov 12 18:02:13.823259 containerd[1430]: time="2024-11-12T18:02:13.823248545Z" level=warning msg="cleaning up after shim disconnected" id=1c60dc8917eeab10f84de28638381abc2594402554e07c319fdffaa4d70aeef4 namespace=k8s.io Nov 12 18:02:13.823259 containerd[1430]: time="2024-11-12T18:02:13.823258665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:02:14.289569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c60dc8917eeab10f84de28638381abc2594402554e07c319fdffaa4d70aeef4-rootfs.mount: Deactivated successfully. 
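The mount-bpf-fs init container created above is conventionally responsible for making sure a BPF filesystem is available at /sys/fs/bpf. As an illustration of that step (an assumption about the container's role, not Cilium's actual implementation), the sketch below checks for an existing bpffs mount and creates one if needed; it requires CAP_SYS_ADMIN, as the init container has.

```go
// Sketch only: ensure a BPF filesystem is mounted at /sys/fs/bpf.
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	target := "/sys/fs/bpf"
	if err := os.MkdirAll(target, 0o755); err != nil {
		log.Fatal(err)
	}

	var st unix.Statfs_t
	if err := unix.Statfs(target, &st); err != nil {
		log.Fatal(err)
	}
	if st.Type == unix.BPF_FS_MAGIC {
		log.Println("bpffs already mounted at", target)
		return
	}

	// Mount a fresh bpffs instance; needs CAP_SYS_ADMIN.
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		log.Fatal(err)
	}
	log.Println("mounted bpffs at", target)
}
```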
Nov 12 18:02:14.730027 kubelet[2457]: E1112 18:02:14.729967 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:14.732082 containerd[1430]: time="2024-11-12T18:02:14.732022139Z" level=info msg="CreateContainer within sandbox \"33baed846b3b1c6ea77fe48b2ab1d2002645735fd5b1065d2bc175018df539a5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 18:02:14.748344 containerd[1430]: time="2024-11-12T18:02:14.748282868Z" level=info msg="CreateContainer within sandbox \"33baed846b3b1c6ea77fe48b2ab1d2002645735fd5b1065d2bc175018df539a5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52bdb710b6ec0e428b5e243fd5e2eb632145b780e8f2f887cb3cc9b9ba6a570f\"" Nov 12 18:02:14.750168 containerd[1430]: time="2024-11-12T18:02:14.749474546Z" level=info msg="StartContainer for \"52bdb710b6ec0e428b5e243fd5e2eb632145b780e8f2f887cb3cc9b9ba6a570f\"" Nov 12 18:02:14.781737 systemd[1]: Started cri-containerd-52bdb710b6ec0e428b5e243fd5e2eb632145b780e8f2f887cb3cc9b9ba6a570f.scope - libcontainer container 52bdb710b6ec0e428b5e243fd5e2eb632145b780e8f2f887cb3cc9b9ba6a570f. Nov 12 18:02:14.800440 systemd[1]: cri-containerd-52bdb710b6ec0e428b5e243fd5e2eb632145b780e8f2f887cb3cc9b9ba6a570f.scope: Deactivated successfully. Nov 12 18:02:14.817084 containerd[1430]: time="2024-11-12T18:02:14.817042175Z" level=info msg="StartContainer for \"52bdb710b6ec0e428b5e243fd5e2eb632145b780e8f2f887cb3cc9b9ba6a570f\" returns successfully" Nov 12 18:02:14.825389 containerd[1430]: time="2024-11-12T18:02:14.821791526Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff8fd9a5_9179_410c_8555_83ab31a5a665.slice/cri-containerd-52bdb710b6ec0e428b5e243fd5e2eb632145b780e8f2f887cb3cc9b9ba6a570f.scope/memory.events\": no such file or directory" Nov 12 18:02:14.833104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52bdb710b6ec0e428b5e243fd5e2eb632145b780e8f2f887cb3cc9b9ba6a570f-rootfs.mount: Deactivated successfully. 
Nov 12 18:02:14.837210 containerd[1430]: time="2024-11-12T18:02:14.837148096Z" level=info msg="shim disconnected" id=52bdb710b6ec0e428b5e243fd5e2eb632145b780e8f2f887cb3cc9b9ba6a570f namespace=k8s.io Nov 12 18:02:14.837210 containerd[1430]: time="2024-11-12T18:02:14.837207136Z" level=warning msg="cleaning up after shim disconnected" id=52bdb710b6ec0e428b5e243fd5e2eb632145b780e8f2f887cb3cc9b9ba6a570f namespace=k8s.io Nov 12 18:02:14.837385 containerd[1430]: time="2024-11-12T18:02:14.837217656Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:02:15.535044 kubelet[2457]: E1112 18:02:15.534983 2457 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 18:02:15.736588 kubelet[2457]: E1112 18:02:15.736486 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:15.739717 containerd[1430]: time="2024-11-12T18:02:15.739645377Z" level=info msg="CreateContainer within sandbox \"33baed846b3b1c6ea77fe48b2ab1d2002645735fd5b1065d2bc175018df539a5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 18:02:15.756848 containerd[1430]: time="2024-11-12T18:02:15.756792625Z" level=info msg="CreateContainer within sandbox \"33baed846b3b1c6ea77fe48b2ab1d2002645735fd5b1065d2bc175018df539a5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2481ab1a68c1f84b2f77b212e09bcdf1c36b51ecfdea7a7f2335bac8c207907c\"" Nov 12 18:02:15.757363 containerd[1430]: time="2024-11-12T18:02:15.757328104Z" level=info msg="StartContainer for \"2481ab1a68c1f84b2f77b212e09bcdf1c36b51ecfdea7a7f2335bac8c207907c\"" Nov 12 18:02:15.788755 systemd[1]: Started cri-containerd-2481ab1a68c1f84b2f77b212e09bcdf1c36b51ecfdea7a7f2335bac8c207907c.scope - libcontainer container 2481ab1a68c1f84b2f77b212e09bcdf1c36b51ecfdea7a7f2335bac8c207907c. 
Nov 12 18:02:15.811638 containerd[1430]: time="2024-11-12T18:02:15.811539681Z" level=info msg="StartContainer for \"2481ab1a68c1f84b2f77b212e09bcdf1c36b51ecfdea7a7f2335bac8c207907c\" returns successfully" Nov 12 18:02:16.086649 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Nov 12 18:02:16.746453 kubelet[2457]: E1112 18:02:16.746424 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:16.761312 kubelet[2457]: I1112 18:02:16.761251 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b4df8" podStartSLOduration=4.761233707 podStartE2EDuration="4.761233707s" podCreationTimestamp="2024-11-12 18:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:02:16.760905787 +0000 UTC m=+86.367019924" watchObservedRunningTime="2024-11-12 18:02:16.761233707 +0000 UTC m=+86.367347804" Nov 12 18:02:17.475627 kubelet[2457]: E1112 18:02:17.475593 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:18.412383 kubelet[2457]: E1112 18:02:18.412354 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:18.478283 kubelet[2457]: E1112 18:02:18.478243 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:18.820433 systemd-networkd[1376]: lxc_health: Link UP Nov 12 18:02:18.831297 systemd-networkd[1376]: lxc_health: Gained carrier Nov 12 18:02:20.414341 kubelet[2457]: E1112 18:02:20.413166 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:20.514729 systemd-networkd[1376]: lxc_health: Gained IPv6LL Nov 12 18:02:20.755004 kubelet[2457]: E1112 18:02:20.754602 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:21.755839 kubelet[2457]: E1112 18:02:21.755635 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:02:22.824754 systemd[1]: run-containerd-runc-k8s.io-2481ab1a68c1f84b2f77b212e09bcdf1c36b51ecfdea7a7f2335bac8c207907c-runc.Db3bhL.mount: Deactivated successfully. Nov 12 18:02:24.982788 sshd[4294]: pam_unix(sshd:session): session closed for user core Nov 12 18:02:24.985685 systemd[1]: sshd@25-10.0.0.125:22-10.0.0.1:47956.service: Deactivated successfully. Nov 12 18:02:24.987355 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 18:02:24.988823 systemd-logind[1415]: Session 26 logged out. Waiting for processes to exit. Nov 12 18:02:24.989894 systemd-logind[1415]: Removed session 26.
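Once the cilium-agent container starts, the lxc_health link comes up, gains carrier and an IPv6 link-local address, and node readiness recovers. As a small follow-up check of that interface state (using the third-party vishvananda/netlink library as an assumption, not tooling from this log):

```go
// Sketch only: report whether the lxc_health interface exists and is up.
package main

import (
	"fmt"
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	link, err := netlink.LinkByName("lxc_health")
	if err != nil {
		log.Fatal(err) // not found until the Cilium agent creates it
	}
	attrs := link.Attrs()
	fmt.Printf("%s: index=%d up=%v operstate=%s\n",
		attrs.Name, attrs.Index, attrs.Flags&net.FlagUp != 0, attrs.OperState)
}
```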