Nov 12 22:52:43.906487 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 21:10:03 -00 2024 Nov 12 22:52:43.906509 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1 Nov 12 22:52:43.906519 kernel: BIOS-provided physical RAM map: Nov 12 22:52:43.906526 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 12 22:52:43.906532 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Nov 12 22:52:43.906538 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Nov 12 22:52:43.906545 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Nov 12 22:52:43.906551 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Nov 12 22:52:43.906557 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Nov 12 22:52:43.906563 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Nov 12 22:52:43.906571 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Nov 12 22:52:43.906578 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Nov 12 22:52:43.906584 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Nov 12 22:52:43.906590 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Nov 12 22:52:43.906597 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Nov 12 22:52:43.906604 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Nov 12 22:52:43.906613 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Nov 12 22:52:43.906620 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Nov 12 22:52:43.906626 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Nov 12 22:52:43.906633 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Nov 12 22:52:43.906639 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Nov 12 22:52:43.906646 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Nov 12 22:52:43.906652 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 12 22:52:43.906659 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 12 22:52:43.906665 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Nov 12 22:52:43.906672 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 12 22:52:43.906678 kernel: NX (Execute Disable) protection: active Nov 12 22:52:43.906687 kernel: APIC: Static calls initialized Nov 12 22:52:43.906693 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Nov 12 22:52:43.906700 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Nov 12 22:52:43.906707 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Nov 12 22:52:43.906713 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Nov 12 22:52:43.906719 kernel: extended physical RAM map: Nov 12 22:52:43.906726 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 12 22:52:43.906733 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Nov 12 22:52:43.906739 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Nov 12 22:52:43.906746 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Nov 12 22:52:43.906752 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Nov 12 22:52:43.906761 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Nov 12 22:52:43.906768 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Nov 12 22:52:43.906778 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Nov 12 22:52:43.906785 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Nov 12 22:52:43.906792 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Nov 12 22:52:43.906799 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Nov 12 22:52:43.906806 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Nov 12 22:52:43.906815 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Nov 12 22:52:43.906822 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Nov 12 22:52:43.906829 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Nov 12 22:52:43.906836 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Nov 12 22:52:43.906843 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Nov 12 22:52:43.906850 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Nov 12 22:52:43.906857 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Nov 12 22:52:43.906864 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Nov 12 22:52:43.906871 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Nov 12 22:52:43.906880 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Nov 12 22:52:43.906887 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Nov 12 22:52:43.906894 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 12 22:52:43.906900 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 12 22:52:43.906907 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Nov 12 22:52:43.906914 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 12 22:52:43.906921 kernel: efi: EFI v2.7 by EDK II Nov 12 22:52:43.906928 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Nov 12 22:52:43.906935 kernel: random: crng init done Nov 12 22:52:43.906942 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Nov 12 22:52:43.906949 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Nov 12 22:52:43.906959 kernel: secureboot: Secure boot disabled Nov 12 22:52:43.906966 kernel: SMBIOS 2.8 present. 
Nov 12 22:52:43.906973 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Nov 12 22:52:43.906980 kernel: Hypervisor detected: KVM Nov 12 22:52:43.906987 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 12 22:52:43.906994 kernel: kvm-clock: using sched offset of 2659369142 cycles Nov 12 22:52:43.907001 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 12 22:52:43.907009 kernel: tsc: Detected 2794.746 MHz processor Nov 12 22:52:43.907016 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 12 22:52:43.907023 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 12 22:52:43.907030 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Nov 12 22:52:43.907047 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 12 22:52:43.907055 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 12 22:52:43.907062 kernel: Using GB pages for direct mapping Nov 12 22:52:43.907069 kernel: ACPI: Early table checksum verification disabled Nov 12 22:52:43.907076 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Nov 12 22:52:43.907083 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Nov 12 22:52:43.907091 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:52:43.907098 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:52:43.907105 kernel: ACPI: FACS 0x000000009CBDD000 000040 Nov 12 22:52:43.907114 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:52:43.907121 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:52:43.907128 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:52:43.907135 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:52:43.907143 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Nov 12 22:52:43.907150 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Nov 12 22:52:43.907157 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Nov 12 22:52:43.907164 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Nov 12 22:52:43.907173 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Nov 12 22:52:43.907180 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Nov 12 22:52:43.907187 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Nov 12 22:52:43.907194 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Nov 12 22:52:43.907201 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Nov 12 22:52:43.907208 kernel: No NUMA configuration found Nov 12 22:52:43.907215 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Nov 12 22:52:43.907222 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Nov 12 22:52:43.907229 kernel: Zone ranges: Nov 12 22:52:43.907237 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 22:52:43.907246 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Nov 12 22:52:43.907253 kernel: Normal empty Nov 12 22:52:43.907260 kernel: Movable zone start for each node Nov 12 22:52:43.907267 kernel: Early memory node ranges Nov 12 22:52:43.907274 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Nov 12 22:52:43.907281 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Nov 12 22:52:43.907288 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Nov 12 22:52:43.907295 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Nov 12 22:52:43.907302 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Nov 12 22:52:43.907311 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Nov 12 22:52:43.907318 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Nov 12 22:52:43.907325 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Nov 12 22:52:43.907332 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Nov 12 22:52:43.907339 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 22:52:43.907348 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 12 22:52:43.907367 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Nov 12 22:52:43.907381 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 22:52:43.907440 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Nov 12 22:52:43.907448 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Nov 12 22:52:43.907456 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Nov 12 22:52:43.907463 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Nov 12 22:52:43.907474 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Nov 12 22:52:43.907482 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 12 22:52:43.907489 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 12 22:52:43.907497 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 12 22:52:43.907504 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 12 22:52:43.907514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 12 22:52:43.907522 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 22:52:43.907530 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 12 22:52:43.907537 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 12 22:52:43.907545 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 22:52:43.907552 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 12 22:52:43.907560 kernel: TSC deadline timer available Nov 12 22:52:43.907567 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Nov 12 22:52:43.907575 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 12 22:52:43.907584 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 12 22:52:43.907592 kernel: kvm-guest: setup PV sched yield Nov 12 22:52:43.907599 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Nov 12 22:52:43.907607 kernel: Booting paravirtualized kernel on KVM Nov 12 22:52:43.907615 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 22:52:43.907622 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 12 22:52:43.907630 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Nov 12 22:52:43.907637 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Nov 12 22:52:43.907645 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 12 22:52:43.907654 kernel: kvm-guest: PV spinlocks enabled Nov 12 22:52:43.907662 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 12 22:52:43.907671 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1 Nov 12 22:52:43.907679 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 22:52:43.907686 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 22:52:43.907694 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 22:52:43.907702 kernel: Fallback order for Node 0: 0 Nov 12 22:52:43.907709 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Nov 12 22:52:43.907717 kernel: Policy zone: DMA32 Nov 12 22:52:43.907727 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 22:52:43.907734 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2305K rwdata, 22736K rodata, 42968K init, 2220K bss, 175776K reserved, 0K cma-reserved) Nov 12 22:52:43.907742 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 12 22:52:43.907750 kernel: ftrace: allocating 37801 entries in 148 pages Nov 12 22:52:43.907757 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 22:52:43.907765 kernel: Dynamic Preempt: voluntary Nov 12 22:52:43.907773 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 22:52:43.907781 kernel: rcu: RCU event tracing is enabled. Nov 12 22:52:43.907789 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 12 22:52:43.907799 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 22:52:43.907807 kernel: Rude variant of Tasks RCU enabled. Nov 12 22:52:43.907815 kernel: Tracing variant of Tasks RCU enabled. Nov 12 22:52:43.907822 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 22:52:43.907830 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 12 22:52:43.907837 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 12 22:52:43.907845 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 12 22:52:43.907852 kernel: Console: colour dummy device 80x25 Nov 12 22:52:43.907860 kernel: printk: console [ttyS0] enabled Nov 12 22:52:43.907870 kernel: ACPI: Core revision 20230628 Nov 12 22:52:43.907892 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 12 22:52:43.907901 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 22:52:43.907908 kernel: x2apic enabled Nov 12 22:52:43.907916 kernel: APIC: Switched APIC routing to: physical x2apic Nov 12 22:52:43.907924 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 12 22:52:43.907931 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 12 22:52:43.907939 kernel: kvm-guest: setup PV IPIs Nov 12 22:52:43.907946 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 12 22:52:43.907956 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 12 22:52:43.907964 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794746) Nov 12 22:52:43.907972 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 12 22:52:43.909084 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 12 22:52:43.909093 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 12 22:52:43.909101 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 22:52:43.909109 kernel: Spectre V2 : Mitigation: Retpolines Nov 12 22:52:43.909117 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 22:52:43.909124 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 12 22:52:43.909136 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 12 22:52:43.909143 kernel: RETBleed: Mitigation: untrained return thunk Nov 12 22:52:43.909151 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 12 22:52:43.909159 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 12 22:52:43.909166 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 12 22:52:43.909175 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 12 22:52:43.909183 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 12 22:52:43.909190 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 22:52:43.909200 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 22:52:43.909208 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 22:52:43.909215 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 22:52:43.909223 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 12 22:52:43.909231 kernel: Freeing SMP alternatives memory: 32K Nov 12 22:52:43.909238 kernel: pid_max: default: 32768 minimum: 301 Nov 12 22:52:43.909246 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 22:52:43.909253 kernel: landlock: Up and running. Nov 12 22:52:43.909261 kernel: SELinux: Initializing. Nov 12 22:52:43.909271 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 22:52:43.909279 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 22:52:43.909287 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 12 22:52:43.909295 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 22:52:43.909302 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 22:52:43.909310 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 22:52:43.909318 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 12 22:52:43.909325 kernel: ... version: 0 Nov 12 22:52:43.909333 kernel: ... bit width: 48 Nov 12 22:52:43.909343 kernel: ... generic registers: 6 Nov 12 22:52:43.909350 kernel: ... value mask: 0000ffffffffffff Nov 12 22:52:43.909358 kernel: ... max period: 00007fffffffffff Nov 12 22:52:43.909365 kernel: ... fixed-purpose events: 0 Nov 12 22:52:43.909373 kernel: ... 
event mask: 000000000000003f Nov 12 22:52:43.909380 kernel: signal: max sigframe size: 1776 Nov 12 22:52:43.909398 kernel: rcu: Hierarchical SRCU implementation. Nov 12 22:52:43.909406 kernel: rcu: Max phase no-delay instances is 400. Nov 12 22:52:43.909414 kernel: smp: Bringing up secondary CPUs ... Nov 12 22:52:43.909424 kernel: smpboot: x86: Booting SMP configuration: Nov 12 22:52:43.909431 kernel: .... node #0, CPUs: #1 #2 #3 Nov 12 22:52:43.909439 kernel: smp: Brought up 1 node, 4 CPUs Nov 12 22:52:43.909446 kernel: smpboot: Max logical packages: 1 Nov 12 22:52:43.909454 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Nov 12 22:52:43.909462 kernel: devtmpfs: initialized Nov 12 22:52:43.909469 kernel: x86/mm: Memory block size: 128MB Nov 12 22:52:43.909477 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Nov 12 22:52:43.909485 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Nov 12 22:52:43.909495 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Nov 12 22:52:43.909503 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Nov 12 22:52:43.909510 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Nov 12 22:52:43.909518 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Nov 12 22:52:43.909526 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 22:52:43.909534 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 12 22:52:43.909541 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 22:52:43.909549 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 22:52:43.909557 kernel: audit: initializing netlink subsys (disabled) Nov 12 22:52:43.909567 kernel: audit: type=2000 audit(1731451964.446:1): state=initialized audit_enabled=0 res=1 Nov 12 22:52:43.909574 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 22:52:43.909582 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 22:52:43.909590 kernel: cpuidle: using governor menu Nov 12 22:52:43.909597 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 22:52:43.909605 kernel: dca service started, version 1.12.1 Nov 12 22:52:43.909612 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 12 22:52:43.909620 kernel: PCI: Using configuration type 1 for base access Nov 12 22:52:43.909627 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 12 22:52:43.909638 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 22:52:43.909645 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 22:52:43.909653 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 22:52:43.909660 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 22:52:43.909668 kernel: ACPI: Added _OSI(Module Device) Nov 12 22:52:43.909676 kernel: ACPI: Added _OSI(Processor Device) Nov 12 22:52:43.909683 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 22:52:43.909691 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 22:52:43.909698 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 22:52:43.909708 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 12 22:52:43.909716 kernel: ACPI: Interpreter enabled Nov 12 22:52:43.909723 kernel: ACPI: PM: (supports S0 S3 S5) Nov 12 22:52:43.909731 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 22:52:43.909739 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 22:52:43.909747 kernel: PCI: Using E820 reservations for host bridge windows Nov 12 22:52:43.909754 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 12 22:52:43.909762 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 12 22:52:43.909942 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 22:52:43.910084 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 12 22:52:43.910208 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 12 22:52:43.910218 kernel: PCI host bridge to bus 0000:00 Nov 12 22:52:43.910342 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 12 22:52:43.910470 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 12 22:52:43.910581 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 12 22:52:43.910695 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Nov 12 22:52:43.910805 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Nov 12 22:52:43.910913 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Nov 12 22:52:43.911021 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 12 22:52:43.911169 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 12 22:52:43.911299 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 12 22:52:43.911451 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Nov 12 22:52:43.911579 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Nov 12 22:52:43.911698 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Nov 12 22:52:43.911817 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Nov 12 22:52:43.911936 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 12 22:52:43.912084 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Nov 12 22:52:43.912205 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Nov 12 22:52:43.912330 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Nov 12 22:52:43.913564 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Nov 12 22:52:43.913709 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Nov 12 22:52:43.913830 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Nov 
12 22:52:43.913949 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Nov 12 22:52:43.914078 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Nov 12 22:52:43.914207 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 12 22:52:43.914333 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Nov 12 22:52:43.914478 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Nov 12 22:52:43.914598 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Nov 12 22:52:43.914716 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Nov 12 22:52:43.914841 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 12 22:52:43.914960 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 12 22:52:43.915095 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 12 22:52:43.915220 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Nov 12 22:52:43.915338 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Nov 12 22:52:43.915495 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 12 22:52:43.915615 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Nov 12 22:52:43.915625 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 12 22:52:43.915633 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 12 22:52:43.915641 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 12 22:52:43.915653 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 12 22:52:43.915661 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 12 22:52:43.915668 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 12 22:52:43.915675 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 12 22:52:43.915683 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 12 22:52:43.915690 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 12 22:52:43.915698 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 12 22:52:43.915705 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 12 22:52:43.915713 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 12 22:52:43.915723 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 12 22:52:43.915730 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 12 22:52:43.915738 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 12 22:52:43.915745 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 12 22:52:43.915752 kernel: iommu: Default domain type: Translated Nov 12 22:52:43.915760 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 12 22:52:43.915767 kernel: efivars: Registered efivars operations Nov 12 22:52:43.915774 kernel: PCI: Using ACPI for IRQ routing Nov 12 22:52:43.915782 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 12 22:52:43.915790 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Nov 12 22:52:43.915799 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Nov 12 22:52:43.915806 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Nov 12 22:52:43.915814 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Nov 12 22:52:43.915821 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Nov 12 22:52:43.915828 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Nov 12 22:52:43.915836 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Nov 12 22:52:43.915843 
kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Nov 12 22:52:43.915963 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 12 22:52:43.916093 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 12 22:52:43.916217 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 12 22:52:43.916226 kernel: vgaarb: loaded Nov 12 22:52:43.916234 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 12 22:52:43.916241 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 12 22:52:43.916249 kernel: clocksource: Switched to clocksource kvm-clock Nov 12 22:52:43.916256 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 22:52:43.916264 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 22:52:43.916272 kernel: pnp: PnP ACPI init Nov 12 22:52:43.916416 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Nov 12 22:52:43.916428 kernel: pnp: PnP ACPI: found 6 devices Nov 12 22:52:43.916436 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 12 22:52:43.916443 kernel: NET: Registered PF_INET protocol family Nov 12 22:52:43.916468 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 22:52:43.916478 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 12 22:52:43.916489 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 22:52:43.916497 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 12 22:52:43.916507 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 12 22:52:43.916515 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 12 22:52:43.916523 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 22:52:43.916530 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 22:52:43.916538 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 22:52:43.916546 kernel: NET: Registered PF_XDP protocol family Nov 12 22:52:43.916669 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Nov 12 22:52:43.916788 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Nov 12 22:52:43.916906 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 12 22:52:43.917019 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 12 22:52:43.917150 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 12 22:52:43.917269 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Nov 12 22:52:43.917379 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Nov 12 22:52:43.917565 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Nov 12 22:52:43.917577 kernel: PCI: CLS 0 bytes, default 64 Nov 12 22:52:43.917585 kernel: Initialise system trusted keyrings Nov 12 22:52:43.917597 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 12 22:52:43.917605 kernel: Key type asymmetric registered Nov 12 22:52:43.917612 kernel: Asymmetric key parser 'x509' registered Nov 12 22:52:43.917620 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 12 22:52:43.917627 kernel: io scheduler mq-deadline registered Nov 12 22:52:43.917639 kernel: io scheduler kyber registered Nov 12 22:52:43.917647 kernel: io scheduler bfq registered Nov 12 
22:52:43.917655 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 12 22:52:43.917663 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 12 22:52:43.917674 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 12 22:52:43.917684 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 12 22:52:43.917692 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 22:52:43.917700 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 22:52:43.917708 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 12 22:52:43.917715 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 12 22:52:43.917726 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 12 22:52:43.917856 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 12 22:52:43.917970 kernel: rtc_cmos 00:04: registered as rtc0 Nov 12 22:52:43.917981 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 12 22:52:43.918103 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T22:52:43 UTC (1731451963) Nov 12 22:52:43.918216 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Nov 12 22:52:43.918227 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 12 22:52:43.918234 kernel: efifb: probing for efifb Nov 12 22:52:43.918246 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Nov 12 22:52:43.918254 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Nov 12 22:52:43.918261 kernel: efifb: scrolling: redraw Nov 12 22:52:43.918269 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 12 22:52:43.918277 kernel: Console: switching to colour frame buffer device 160x50 Nov 12 22:52:43.918286 kernel: fb0: EFI VGA frame buffer device Nov 12 22:52:43.918294 kernel: pstore: Using crash dump compression: deflate Nov 12 22:52:43.918302 kernel: pstore: Registered efi_pstore as persistent store backend Nov 12 22:52:43.918309 kernel: NET: Registered PF_INET6 protocol family Nov 12 22:52:43.918320 kernel: Segment Routing with IPv6 Nov 12 22:52:43.918328 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 22:52:43.918335 kernel: NET: Registered PF_PACKET protocol family Nov 12 22:52:43.918343 kernel: Key type dns_resolver registered Nov 12 22:52:43.918351 kernel: IPI shorthand broadcast: enabled Nov 12 22:52:43.918359 kernel: sched_clock: Marking stable (632003278, 157838292)->(806731245, -16889675) Nov 12 22:52:43.918367 kernel: registered taskstats version 1 Nov 12 22:52:43.918375 kernel: Loading compiled-in X.509 certificates Nov 12 22:52:43.918383 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: d04cb2ddbd5c3ca82936c51f5645ef0dcbdcd3b4' Nov 12 22:52:43.918486 kernel: Key type .fscrypt registered Nov 12 22:52:43.918497 kernel: Key type fscrypt-provisioning registered Nov 12 22:52:43.918508 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 12 22:52:43.918519 kernel: ima: Allocated hash algorithm: sha1 Nov 12 22:52:43.918529 kernel: ima: No architecture policies found Nov 12 22:52:43.918538 kernel: clk: Disabling unused clocks Nov 12 22:52:43.918545 kernel: Freeing unused kernel image (initmem) memory: 42968K Nov 12 22:52:43.918553 kernel: Write protecting the kernel read-only data: 36864k Nov 12 22:52:43.918561 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Nov 12 22:52:43.918571 kernel: Run /init as init process Nov 12 22:52:43.918581 kernel: with arguments: Nov 12 22:52:43.918592 kernel: /init Nov 12 22:52:43.918602 kernel: with environment: Nov 12 22:52:43.918612 kernel: HOME=/ Nov 12 22:52:43.918623 kernel: TERM=linux Nov 12 22:52:43.918631 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 22:52:43.918642 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 22:52:43.918654 systemd[1]: Detected virtualization kvm. Nov 12 22:52:43.918664 systemd[1]: Detected architecture x86-64. Nov 12 22:52:43.918675 systemd[1]: Running in initrd. Nov 12 22:52:43.918687 systemd[1]: No hostname configured, using default hostname. Nov 12 22:52:43.918697 systemd[1]: Hostname set to . Nov 12 22:52:43.918709 systemd[1]: Initializing machine ID from VM UUID. Nov 12 22:52:43.918720 systemd[1]: Queued start job for default target initrd.target. Nov 12 22:52:43.918731 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:52:43.918743 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:52:43.918754 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 22:52:43.918766 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 22:52:43.918777 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 22:52:43.918789 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 22:52:43.918802 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 22:52:43.918817 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 22:52:43.918828 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:52:43.918840 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:52:43.918851 systemd[1]: Reached target paths.target - Path Units. Nov 12 22:52:43.918860 systemd[1]: Reached target slices.target - Slice Units. Nov 12 22:52:43.918869 systemd[1]: Reached target swap.target - Swaps. Nov 12 22:52:43.918877 systemd[1]: Reached target timers.target - Timer Units. Nov 12 22:52:43.918885 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 22:52:43.918894 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 22:52:43.918904 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 22:52:43.918912 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Nov 12 22:52:43.918921 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:52:43.918929 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 22:52:43.918937 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:52:43.918945 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 22:52:43.918954 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 22:52:43.918962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 22:52:43.918971 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 22:52:43.918981 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 22:52:43.918990 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 22:52:43.918998 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 22:52:43.919006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:52:43.919015 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 22:52:43.919023 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:52:43.919031 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 22:52:43.919050 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 22:52:43.919080 systemd-journald[194]: Collecting audit messages is disabled. Nov 12 22:52:43.919102 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:52:43.919111 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:52:43.919120 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:52:43.919129 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 22:52:43.919138 systemd-journald[194]: Journal started Nov 12 22:52:43.919159 systemd-journald[194]: Runtime Journal (/run/log/journal/17af5222392143ee8ebe1d944287119f) is 6.0M, max 48.3M, 42.2M free. Nov 12 22:52:43.898167 systemd-modules-load[195]: Inserted module 'overlay' Nov 12 22:52:43.922000 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 22:52:43.925799 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 22:52:43.923618 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 22:52:43.928598 systemd-modules-load[195]: Inserted module 'br_netfilter' Nov 12 22:52:43.930331 kernel: Bridge firewalling registered Nov 12 22:52:43.930190 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 22:52:43.933146 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:52:43.936050 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:52:43.939515 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 22:52:43.941196 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:52:43.942726 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:52:43.954916 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 12 22:52:43.957229 dracut-cmdline[222]: dracut-dracut-053 Nov 12 22:52:43.958625 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1 Nov 12 22:52:43.972756 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 22:52:44.013137 systemd-resolved[245]: Positive Trust Anchors: Nov 12 22:52:44.013162 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 22:52:44.013203 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 22:52:44.017859 systemd-resolved[245]: Defaulting to hostname 'linux'. Nov 12 22:52:44.019164 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 22:52:44.025184 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:52:44.050453 kernel: SCSI subsystem initialized Nov 12 22:52:44.060437 kernel: Loading iSCSI transport class v2.0-870. Nov 12 22:52:44.072460 kernel: iscsi: registered transport (tcp) Nov 12 22:52:44.095455 kernel: iscsi: registered transport (qla4xxx) Nov 12 22:52:44.095544 kernel: QLogic iSCSI HBA Driver Nov 12 22:52:44.150716 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 22:52:44.157740 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 22:52:44.185119 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 12 22:52:44.185196 kernel: device-mapper: uevent: version 1.0.3 Nov 12 22:52:44.185208 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 22:52:44.229435 kernel: raid6: avx2x4 gen() 28854 MB/s Nov 12 22:52:44.246412 kernel: raid6: avx2x2 gen() 30089 MB/s Nov 12 22:52:44.263543 kernel: raid6: avx2x1 gen() 24947 MB/s Nov 12 22:52:44.263583 kernel: raid6: using algorithm avx2x2 gen() 30089 MB/s Nov 12 22:52:44.281535 kernel: raid6: .... xor() 19207 MB/s, rmw enabled Nov 12 22:52:44.281560 kernel: raid6: using avx2x2 recovery algorithm Nov 12 22:52:44.302417 kernel: xor: automatically using best checksumming function avx Nov 12 22:52:44.461457 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 22:52:44.476817 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 12 22:52:44.483628 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:52:44.498309 systemd-udevd[413]: Using default interface naming scheme 'v255'. Nov 12 22:52:44.503168 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 12 22:52:44.510553 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 22:52:44.526829 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Nov 12 22:52:44.559297 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 22:52:44.571650 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 22:52:44.635529 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:52:44.641553 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 22:52:44.656092 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 22:52:44.657561 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 22:52:44.658842 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:52:44.660084 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 22:52:44.671553 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 22:52:44.682987 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 22:52:44.693471 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 12 22:52:44.725313 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 12 22:52:44.725522 kernel: cryptd: max_cpu_qlen set to 1000 Nov 12 22:52:44.725539 kernel: libata version 3.00 loaded. Nov 12 22:52:44.725553 kernel: ahci 0000:00:1f.2: version 3.0 Nov 12 22:52:44.740715 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 12 22:52:44.740735 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 12 22:52:44.740887 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 12 22:52:44.741041 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 12 22:52:44.741057 kernel: GPT:9289727 != 19775487 Nov 12 22:52:44.741071 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 12 22:52:44.741084 kernel: GPT:9289727 != 19775487 Nov 12 22:52:44.741098 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 12 22:52:44.741111 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:52:44.741122 kernel: scsi host0: ahci Nov 12 22:52:44.741285 kernel: AVX2 version of gcm_enc/dec engaged. Nov 12 22:52:44.741300 kernel: scsi host1: ahci Nov 12 22:52:44.741463 kernel: AES CTR mode by8 optimization enabled Nov 12 22:52:44.741474 kernel: scsi host2: ahci Nov 12 22:52:44.742756 kernel: scsi host3: ahci Nov 12 22:52:44.742986 kernel: scsi host4: ahci Nov 12 22:52:44.743204 kernel: scsi host5: ahci Nov 12 22:52:44.743442 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Nov 12 22:52:44.743465 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Nov 12 22:52:44.743479 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Nov 12 22:52:44.743493 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Nov 12 22:52:44.743506 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Nov 12 22:52:44.743520 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Nov 12 22:52:44.707943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 22:52:44.708207 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 22:52:44.752522 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (465) Nov 12 22:52:44.711044 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:52:44.715562 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:52:44.715879 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:52:44.717421 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:52:44.728786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:52:44.756421 kernel: BTRFS: device fsid d498af32-b44b-4318-a942-3a646ccb9d0a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (458) Nov 12 22:52:44.761495 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 12 22:52:44.768263 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 12 22:52:44.782705 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 22:52:44.787913 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 12 22:52:44.789404 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 12 22:52:44.802569 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 22:52:44.803916 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:52:44.803983 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:52:44.806779 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:52:44.812448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:52:44.817989 disk-uuid[557]: Primary Header is updated. Nov 12 22:52:44.817989 disk-uuid[557]: Secondary Entries is updated. Nov 12 22:52:44.817989 disk-uuid[557]: Secondary Header is updated. Nov 12 22:52:44.825052 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:52:44.829410 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:52:44.834189 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:52:44.849740 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:52:44.875539 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 22:52:45.050323 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 12 22:52:45.050453 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 12 22:52:45.050479 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 12 22:52:45.051412 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 12 22:52:45.052426 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 12 22:52:45.053429 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 12 22:52:45.054424 kernel: ata3.00: applying bridge limits Nov 12 22:52:45.054448 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 12 22:52:45.055441 kernel: ata3.00: configured for UDMA/100 Nov 12 22:52:45.056772 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 12 22:52:45.103505 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 12 22:52:45.116371 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 12 22:52:45.116419 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 12 22:52:45.844187 disk-uuid[559]: The operation has completed successfully. Nov 12 22:52:45.845616 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:52:45.869876 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 22:52:45.870024 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 22:52:45.910625 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 22:52:45.917379 sh[598]: Success Nov 12 22:52:45.947424 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 12 22:52:45.984380 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 22:52:45.994933 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 22:52:46.000717 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 22:52:46.011525 kernel: BTRFS info (device dm-0): first mount of filesystem d498af32-b44b-4318-a942-3a646ccb9d0a Nov 12 22:52:46.011565 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:52:46.011576 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 22:52:46.014003 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 22:52:46.014027 kernel: BTRFS info (device dm-0): using free space tree Nov 12 22:52:46.018851 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 22:52:46.020534 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 22:52:46.030773 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 22:52:46.033812 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 22:52:46.041494 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:52:46.041543 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:52:46.041560 kernel: BTRFS info (device vda6): using free space tree Nov 12 22:52:46.044416 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 22:52:46.054592 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 12 22:52:46.056289 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:52:46.067788 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Nov 12 22:52:46.073647 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 22:52:46.139063 ignition[680]: Ignition 2.20.0 Nov 12 22:52:46.139138 ignition[680]: Stage: fetch-offline Nov 12 22:52:46.139176 ignition[680]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:52:46.139186 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:52:46.139628 ignition[680]: parsed url from cmdline: "" Nov 12 22:52:46.139632 ignition[680]: no config URL provided Nov 12 22:52:46.139638 ignition[680]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 22:52:46.139648 ignition[680]: no config at "/usr/lib/ignition/user.ign" Nov 12 22:52:46.139678 ignition[680]: op(1): [started] loading QEMU firmware config module Nov 12 22:52:46.139684 ignition[680]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 12 22:52:46.149279 ignition[680]: op(1): [finished] loading QEMU firmware config module Nov 12 22:52:46.176617 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 22:52:46.187579 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 22:52:46.193138 ignition[680]: parsing config with SHA512: 1e86e263cc8a796b01c5fc351e6dec864d765fd46a9a855394598ad3406102ed20c156bfa41733f8919f0a6ea503abe13a10177624ac074bb485b16a99b9a808 Nov 12 22:52:46.198277 unknown[680]: fetched base config from "system" Nov 12 22:52:46.198294 unknown[680]: fetched user config from "qemu" Nov 12 22:52:46.199318 ignition[680]: fetch-offline: fetch-offline passed Nov 12 22:52:46.199902 ignition[680]: Ignition finished successfully Nov 12 22:52:46.202639 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 22:52:46.219632 systemd-networkd[787]: lo: Link UP Nov 12 22:52:46.219646 systemd-networkd[787]: lo: Gained carrier Nov 12 22:52:46.221762 systemd-networkd[787]: Enumeration completed Nov 12 22:52:46.221858 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 22:52:46.222305 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:52:46.222311 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:52:46.224153 systemd[1]: Reached target network.target - Network. Nov 12 22:52:46.224611 systemd-networkd[787]: eth0: Link UP Nov 12 22:52:46.224616 systemd-networkd[787]: eth0: Gained carrier Nov 12 22:52:46.224626 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:52:46.226176 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 12 22:52:46.238675 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 12 22:52:46.245455 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 22:52:46.257422 ignition[791]: Ignition 2.20.0 Nov 12 22:52:46.257440 ignition[791]: Stage: kargs Nov 12 22:52:46.257699 ignition[791]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:52:46.257715 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:52:46.258937 ignition[791]: kargs: kargs passed Nov 12 22:52:46.259005 ignition[791]: Ignition finished successfully Nov 12 22:52:46.263412 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 22:52:46.276693 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 12 22:52:46.292760 ignition[801]: Ignition 2.20.0 Nov 12 22:52:46.292772 ignition[801]: Stage: disks Nov 12 22:52:46.292942 ignition[801]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:52:46.292953 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:52:46.293769 ignition[801]: disks: disks passed Nov 12 22:52:46.296293 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 22:52:46.293817 ignition[801]: Ignition finished successfully Nov 12 22:52:46.298241 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 22:52:46.300130 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 22:52:46.302124 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 22:52:46.304202 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 22:52:46.306485 systemd[1]: Reached target basic.target - Basic System. Nov 12 22:52:46.319533 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 22:52:46.334730 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 12 22:52:46.341467 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 22:52:46.353490 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 22:52:46.446884 kernel: EXT4-fs (vda9): mounted filesystem 62325592-ead9-4e81-b706-99baa0cf9fff r/w with ordered data mode. Quota mode: none. Nov 12 22:52:46.446405 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 22:52:46.447940 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 22:52:46.461464 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 22:52:46.463507 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 22:52:46.464682 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 12 22:52:46.464719 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 22:52:46.472417 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (819) Nov 12 22:52:46.464740 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Nov 12 22:52:46.476705 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:52:46.476732 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:52:46.476743 kernel: BTRFS info (device vda6): using free space tree Nov 12 22:52:46.478422 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 22:52:46.480535 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 22:52:46.498043 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 22:52:46.500807 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 12 22:52:46.542411 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 22:52:46.548159 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Nov 12 22:52:46.552636 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 22:52:46.556834 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 22:52:46.645970 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 22:52:46.661716 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 22:52:46.664944 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 22:52:46.669416 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:52:46.688852 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 22:52:46.691104 ignition[932]: INFO : Ignition 2.20.0 Nov 12 22:52:46.691104 ignition[932]: INFO : Stage: mount Nov 12 22:52:46.691104 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:52:46.691104 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:52:46.691104 ignition[932]: INFO : mount: mount passed Nov 12 22:52:46.691104 ignition[932]: INFO : Ignition finished successfully Nov 12 22:52:46.693558 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 22:52:46.704538 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 22:52:47.010801 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 22:52:47.024557 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 22:52:47.031414 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (946) Nov 12 22:52:47.033992 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:52:47.034013 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:52:47.034031 kernel: BTRFS info (device vda6): using free space tree Nov 12 22:52:47.037420 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 22:52:47.038823 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 22:52:47.064939 ignition[963]: INFO : Ignition 2.20.0 Nov 12 22:52:47.064939 ignition[963]: INFO : Stage: files Nov 12 22:52:47.066637 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:52:47.066637 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:52:47.069303 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Nov 12 22:52:47.070914 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 22:52:47.070914 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 22:52:47.074315 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 22:52:47.076017 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 22:52:47.076017 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 22:52:47.075068 unknown[963]: wrote ssh authorized keys file for user: core Nov 12 22:52:47.080188 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 22:52:47.080188 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 22:52:47.111063 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 22:52:47.202077 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 22:52:47.202077 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 22:52:47.207010 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 12 22:52:47.560687 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 12 22:52:47.634659 systemd-networkd[787]: eth0: Gained IPv6LL Nov 12 22:52:47.648160 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 22:52:47.650114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Nov 12 22:52:47.951815 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 12 22:52:48.239550 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 12 22:52:48.239550 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 12 22:52:48.243350 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:52:48.245492 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:52:48.245492 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 12 22:52:48.248557 ignition[963]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 12 22:52:48.248557 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 22:52:48.252074 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 22:52:48.252074 ignition[963]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 12 22:52:48.252074 ignition[963]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 22:52:48.274837 ignition[963]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 22:52:48.280762 ignition[963]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 22:52:48.282469 ignition[963]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 22:52:48.282469 ignition[963]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 12 22:52:48.285249 ignition[963]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 22:52:48.286730 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:52:48.288514 
ignition[963]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:52:48.290185 ignition[963]: INFO : files: files passed Nov 12 22:52:48.290961 ignition[963]: INFO : Ignition finished successfully Nov 12 22:52:48.293934 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 22:52:48.301591 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 22:52:48.304246 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 22:52:48.305746 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 22:52:48.305849 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 22:52:48.327472 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 22:52:48.331035 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:52:48.331035 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:52:48.334640 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:52:48.337627 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:52:48.339057 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 22:52:48.348528 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 22:52:48.373320 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 22:52:48.373454 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 22:52:48.374655 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 22:52:48.376906 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 22:52:48.377275 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 22:52:48.382184 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 22:52:48.400625 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:52:48.423512 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 22:52:48.435300 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:52:48.435509 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:52:48.437770 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 22:52:48.438175 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 22:52:48.438319 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:52:48.442234 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 22:52:48.442849 systemd[1]: Stopped target basic.target - Basic System. Nov 12 22:52:48.443266 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 22:52:48.443837 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 22:52:48.444240 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 22:52:48.445008 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Nov 12 22:52:48.445399 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 22:52:48.445979 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 22:52:48.446367 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 22:52:48.446945 systemd[1]: Stopped target swap.target - Swaps. Nov 12 22:52:48.447298 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 22:52:48.447455 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 22:52:48.448305 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:52:48.448903 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:52:48.449254 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 22:52:48.449352 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:52:48.449657 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 22:52:48.449807 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 22:52:48.474084 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 22:52:48.474239 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 22:52:48.477183 systemd[1]: Stopped target paths.target - Path Units. Nov 12 22:52:48.477649 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 22:52:48.485433 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:52:48.488336 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 22:52:48.488514 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 22:52:48.490359 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 22:52:48.490498 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 22:52:48.492269 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 22:52:48.492403 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 22:52:48.494151 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 22:52:48.494295 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:52:48.496152 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 22:52:48.496291 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 22:52:48.505537 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 22:52:48.506408 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 22:52:48.508478 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 22:52:48.508628 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:52:48.511268 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 22:52:48.517945 ignition[1018]: INFO : Ignition 2.20.0 Nov 12 22:52:48.517945 ignition[1018]: INFO : Stage: umount Nov 12 22:52:48.517945 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:52:48.517945 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:52:48.511531 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 12 22:52:48.523035 ignition[1018]: INFO : umount: umount passed Nov 12 22:52:48.523035 ignition[1018]: INFO : Ignition finished successfully Nov 12 22:52:48.524899 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 22:52:48.525992 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 22:52:48.529067 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 22:52:48.530069 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 22:52:48.533765 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 22:52:48.536719 systemd[1]: Stopped target network.target - Network. Nov 12 22:52:48.538662 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 22:52:48.539779 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 22:52:48.541856 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 22:52:48.542779 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 22:52:48.544747 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 22:52:48.545667 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 22:52:48.547655 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 22:52:48.548684 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 22:52:48.550950 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 22:52:48.553209 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 22:52:48.554423 systemd-networkd[787]: eth0: DHCPv6 lease lost Nov 12 22:52:48.556858 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 22:52:48.558061 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 22:52:48.560666 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 22:52:48.560714 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:52:48.573483 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 22:52:48.575434 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 22:52:48.575487 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 22:52:48.579196 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:52:48.581955 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 22:52:48.582099 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 22:52:48.590708 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 22:52:48.591756 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:52:48.594757 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 22:52:48.595771 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 22:52:48.599563 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 22:52:48.599624 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 22:52:48.602708 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 22:52:48.602751 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:52:48.605765 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 22:52:48.605822 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Nov 12 22:52:48.608941 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 22:52:48.608998 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 22:52:48.612015 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 22:52:48.612068 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:52:48.627512 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 22:52:48.629776 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 22:52:48.629829 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:52:48.632738 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 22:52:48.632793 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 22:52:48.635899 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 22:52:48.636960 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:52:48.639492 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 22:52:48.639543 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:52:48.643303 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 22:52:48.643354 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:52:48.646785 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 22:52:48.646841 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:52:48.650226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:52:48.651224 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:52:48.653782 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 22:52:48.654850 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 22:52:48.722721 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 22:52:48.723830 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 22:52:48.726464 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 22:52:48.728725 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 22:52:48.729813 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 22:52:48.747523 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 22:52:48.756191 systemd[1]: Switching root. Nov 12 22:52:48.787841 systemd-journald[194]: Journal stopped Nov 12 22:52:50.015062 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Nov 12 22:52:50.015129 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 22:52:50.015143 kernel: SELinux: policy capability open_perms=1 Nov 12 22:52:50.015159 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 22:52:50.015170 kernel: SELinux: policy capability always_check_network=0 Nov 12 22:52:50.015182 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 22:52:50.015193 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 22:52:50.015205 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 22:52:50.015216 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 22:52:50.015230 kernel: audit: type=1403 audit(1731451969.211:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 22:52:50.015242 systemd[1]: Successfully loaded SELinux policy in 40.451ms. Nov 12 22:52:50.015266 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.612ms. Nov 12 22:52:50.015284 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 22:52:50.015297 systemd[1]: Detected virtualization kvm. Nov 12 22:52:50.015309 systemd[1]: Detected architecture x86-64. Nov 12 22:52:50.015332 systemd[1]: Detected first boot. Nov 12 22:52:50.015345 systemd[1]: Initializing machine ID from VM UUID. Nov 12 22:52:50.015357 zram_generator::config[1064]: No configuration found. Nov 12 22:52:50.015373 systemd[1]: Populated /etc with preset unit settings. Nov 12 22:52:50.015400 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 22:52:50.015413 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 12 22:52:50.015425 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 22:52:50.015438 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 22:52:50.015450 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 22:52:50.015462 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 22:52:50.015474 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 22:52:50.015489 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 22:52:50.015502 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 22:52:50.015514 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 22:52:50.015530 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 22:52:50.015542 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:52:50.015555 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:52:50.015567 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 22:52:50.015579 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 22:52:50.015591 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Nov 12 22:52:50.015606 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 22:52:50.015625 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 22:52:50.015637 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:52:50.015650 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 22:52:50.015664 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 22:52:50.015676 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 22:52:50.015688 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 22:52:50.015702 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:52:50.015714 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 22:52:50.015727 systemd[1]: Reached target slices.target - Slice Units. Nov 12 22:52:50.015739 systemd[1]: Reached target swap.target - Swaps. Nov 12 22:52:50.015751 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 22:52:50.015763 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 22:52:50.015775 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:52:50.015787 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 22:52:50.015799 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:52:50.015811 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 22:52:50.015827 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 22:52:50.015839 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 22:52:50.015851 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 22:52:50.015863 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:52:50.015875 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 22:52:50.015894 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 22:52:50.015912 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 22:52:50.015925 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 22:52:50.015940 systemd[1]: Reached target machines.target - Containers. Nov 12 22:52:50.015952 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 22:52:50.015965 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:52:50.015978 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 22:52:50.015990 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 22:52:50.016002 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:52:50.016014 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 22:52:50.016026 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:52:50.016037 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Nov 12 22:52:50.016051 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:52:50.016064 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 22:52:50.016076 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 22:52:50.016088 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 22:52:50.016100 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 22:52:50.016112 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 22:52:50.016123 kernel: fuse: init (API version 7.39) Nov 12 22:52:50.016135 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 22:52:50.016149 kernel: loop: module loaded Nov 12 22:52:50.016168 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 22:52:50.016180 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 22:52:50.016192 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 22:52:50.016204 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 22:52:50.016216 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 22:52:50.016228 systemd[1]: Stopped verity-setup.service. Nov 12 22:52:50.016240 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:52:50.016252 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 22:52:50.016284 systemd-journald[1131]: Collecting audit messages is disabled. Nov 12 22:52:50.016305 kernel: ACPI: bus type drm_connector registered Nov 12 22:52:50.016317 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 22:52:50.016329 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 22:52:50.016344 systemd-journald[1131]: Journal started Nov 12 22:52:50.016365 systemd-journald[1131]: Runtime Journal (/run/log/journal/17af5222392143ee8ebe1d944287119f) is 6.0M, max 48.3M, 42.2M free. Nov 12 22:52:49.703749 systemd[1]: Queued start job for default target multi-user.target. Nov 12 22:52:49.719217 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 22:52:49.719637 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 22:52:50.019415 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 22:52:50.020317 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 22:52:50.021552 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 22:52:50.022778 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 22:52:50.024069 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 22:52:50.025538 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:52:50.027263 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 22:52:50.027569 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 22:52:50.029092 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:52:50.029266 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:52:50.030785 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 12 22:52:50.030979 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 22:52:50.032414 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:52:50.032592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:52:50.034201 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 22:52:50.034375 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 22:52:50.035761 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:52:50.035944 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:52:50.037313 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 22:52:50.038908 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 22:52:50.040436 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 22:52:50.056248 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 22:52:50.067465 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 22:52:50.069727 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 22:52:50.070877 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 22:52:50.070923 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 22:52:50.072943 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 22:52:50.075231 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 22:52:50.081203 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 22:52:50.082461 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:52:50.084558 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 22:52:50.088761 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 22:52:50.090095 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 22:52:50.092538 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 22:52:50.096654 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 22:52:50.097824 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:52:50.100677 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 22:52:50.104005 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 22:52:50.107544 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 22:52:50.108865 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 22:52:50.110689 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 22:52:50.158250 systemd-journald[1131]: Time spent on flushing to /var/log/journal/17af5222392143ee8ebe1d944287119f is 21.717ms for 1051 entries. 
Nov 12 22:52:50.158250 systemd-journald[1131]: System Journal (/var/log/journal/17af5222392143ee8ebe1d944287119f) is 8.0M, max 195.6M, 187.6M free. Nov 12 22:52:50.230261 systemd-journald[1131]: Received client request to flush runtime journal. Nov 12 22:52:50.230313 kernel: loop0: detected capacity change from 0 to 138184 Nov 12 22:52:50.197749 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:52:50.235405 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 22:52:50.208661 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 22:52:50.212851 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 22:52:50.215337 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 22:52:50.220194 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 22:52:50.240189 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 22:52:50.241182 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Nov 12 22:52:50.241202 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Nov 12 22:52:50.242143 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:52:50.250747 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:52:50.259430 kernel: loop1: detected capacity change from 0 to 140992 Nov 12 22:52:50.262136 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 22:52:50.264455 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 12 22:52:50.269492 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 22:52:50.270093 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 22:52:50.294090 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 22:52:50.342739 kernel: loop2: detected capacity change from 0 to 210664 Nov 12 22:52:50.342681 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 22:52:50.369256 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Nov 12 22:52:50.369662 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Nov 12 22:52:50.375556 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:52:50.386425 kernel: loop3: detected capacity change from 0 to 138184 Nov 12 22:52:50.400441 kernel: loop4: detected capacity change from 0 to 140992 Nov 12 22:52:50.409425 kernel: loop5: detected capacity change from 0 to 210664 Nov 12 22:52:50.416768 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 22:52:50.417356 (sd-merge)[1205]: Merged extensions into '/usr'. Nov 12 22:52:50.421216 systemd[1]: Reloading requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 22:52:50.421234 systemd[1]: Reloading... Nov 12 22:52:50.557103 zram_generator::config[1240]: No configuration found. Nov 12 22:52:50.620323 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Nov 12 22:52:50.663583 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:52:50.714624 systemd[1]: Reloading finished in 292 ms. Nov 12 22:52:50.747702 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 22:52:50.749889 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 22:52:50.767694 systemd[1]: Starting ensure-sysext.service... Nov 12 22:52:50.770607 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 22:52:50.856541 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... Nov 12 22:52:50.856684 systemd[1]: Reloading... Nov 12 22:52:50.879061 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 22:52:50.879446 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 22:52:50.880516 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 22:52:50.880808 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Nov 12 22:52:50.880889 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Nov 12 22:52:50.890321 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 22:52:50.890336 systemd-tmpfiles[1269]: Skipping /boot Nov 12 22:52:50.910957 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 22:52:50.913411 zram_generator::config[1299]: No configuration found. Nov 12 22:52:50.913493 systemd-tmpfiles[1269]: Skipping /boot Nov 12 22:52:51.016839 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:52:51.067164 systemd[1]: Reloading finished in 210 ms. Nov 12 22:52:51.087688 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 22:52:51.098825 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:52:51.107711 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 12 22:52:51.110219 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 22:52:51.112679 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 22:52:51.116501 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 22:52:51.122480 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:52:51.126521 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 22:52:51.130228 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:52:51.130452 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:52:51.132131 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:52:51.134712 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Nov 12 22:52:51.142621 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:52:51.143989 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:52:51.146186 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 22:52:51.147385 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:52:51.148526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:52:51.148703 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:52:51.150401 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:52:51.150596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:52:51.152978 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:52:51.153158 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:52:51.160827 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 22:52:51.163407 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Nov 12 22:52:51.167201 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 22:52:51.167642 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 22:52:51.176684 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 22:52:51.178820 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 22:52:51.183692 augenrules[1369]: No rules Nov 12 22:52:51.186192 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:52:51.186466 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 12 22:52:51.191848 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:52:51.198120 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 12 22:52:51.199591 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:52:51.200831 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:52:51.205610 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 22:52:51.208615 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:52:51.213596 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:52:51.214812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:52:51.214960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:52:51.215781 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:52:51.217483 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 22:52:51.219101 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Nov 12 22:52:51.221433 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 22:52:51.227317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:52:51.227678 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:52:51.229783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:52:51.232575 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:52:51.235190 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 22:52:51.235380 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 22:52:51.249490 systemd[1]: Finished ensure-sysext.service. Nov 12 22:52:51.256548 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1402) Nov 12 22:52:51.262425 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1402) Nov 12 22:52:51.262506 augenrules[1376]: /sbin/augenrules: No change Nov 12 22:52:51.282407 augenrules[1429]: No rules Nov 12 22:52:51.297769 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 22:52:51.299472 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 22:52:51.308427 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1406) Nov 12 22:52:51.309601 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 22:52:51.310725 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 22:52:51.311157 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:52:51.311368 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 12 22:52:51.312717 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:52:51.312952 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:52:51.315888 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 12 22:52:51.340632 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 22:52:51.380765 systemd-resolved[1338]: Positive Trust Anchors: Nov 12 22:52:51.380783 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 22:52:51.380816 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 22:52:51.403298 systemd-resolved[1338]: Defaulting to hostname 'linux'. Nov 12 22:52:51.405732 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 22:52:51.407341 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 12 22:52:51.416418 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 12 22:52:51.421410 kernel: ACPI: button: Power Button [PWRF] Nov 12 22:52:51.431631 systemd-networkd[1430]: lo: Link UP Nov 12 22:52:51.431645 systemd-networkd[1430]: lo: Gained carrier Nov 12 22:52:51.435908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 22:52:51.438743 systemd-networkd[1430]: Enumeration completed Nov 12 22:52:51.439315 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:52:51.439416 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:52:51.440222 systemd-networkd[1430]: eth0: Link UP Nov 12 22:52:51.440290 systemd-networkd[1430]: eth0: Gained carrier Nov 12 22:52:51.440339 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:52:51.444796 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 22:52:51.449999 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 12 22:52:51.450268 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 12 22:52:51.450466 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 12 22:52:51.450651 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 12 22:52:51.451277 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 22:52:51.452986 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 22:52:51.454820 systemd[1]: Reached target network.target - Network. Nov 12 22:52:51.456157 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 22:52:51.460548 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 12 22:52:51.459287 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 22:52:51.464576 systemd-networkd[1430]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 22:52:51.466025 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection. Nov 12 22:52:51.466648 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 22:52:53.075324 systemd-resolved[1338]: Clock change detected. Flushing caches. Nov 12 22:52:53.075403 systemd-timesyncd[1437]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 22:52:53.075441 systemd-timesyncd[1437]: Initial clock synchronization to Tue 2024-11-12 22:52:53.075292 UTC. Nov 12 22:52:53.144917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:52:53.154493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:52:53.154716 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:52:53.157149 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 22:52:53.175670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 12 22:52:53.244589 kernel: kvm_amd: TSC scaling supported Nov 12 22:52:53.244669 kernel: kvm_amd: Nested Virtualization enabled Nov 12 22:52:53.244683 kernel: kvm_amd: Nested Paging enabled Nov 12 22:52:53.245919 kernel: kvm_amd: LBR virtualization supported Nov 12 22:52:53.245939 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 12 22:52:53.246675 kernel: kvm_amd: Virtual GIF supported Nov 12 22:52:53.276856 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:52:53.305567 kernel: EDAC MC: Ver: 3.0.0 Nov 12 22:52:53.339774 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 22:52:53.351671 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 22:52:53.360698 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:52:53.389868 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 22:52:53.391430 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:52:53.392609 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 22:52:53.393794 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 22:52:53.395093 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 22:52:53.396556 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 22:52:53.397785 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 22:52:53.399074 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 22:52:53.400326 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 22:52:53.400353 systemd[1]: Reached target paths.target - Path Units. Nov 12 22:52:53.401364 systemd[1]: Reached target timers.target - Timer Units. Nov 12 22:52:53.403353 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 22:52:53.406219 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 22:52:53.416975 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 22:52:53.419383 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 22:52:53.420966 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 22:52:53.422129 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 22:52:53.423097 systemd[1]: Reached target basic.target - Basic System. Nov 12 22:52:53.424056 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 22:52:53.424083 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 22:52:53.425033 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 22:52:53.427049 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 22:52:53.431871 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 22:52:53.433403 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:52:53.434923 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Nov 12 22:52:53.436188 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 22:52:53.438445 jq[1471]: false Nov 12 22:52:53.439418 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 22:52:53.443028 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 22:52:53.448838 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 22:52:53.453266 extend-filesystems[1472]: Found loop3 Nov 12 22:52:53.454713 extend-filesystems[1472]: Found loop4 Nov 12 22:52:53.454713 extend-filesystems[1472]: Found loop5 Nov 12 22:52:53.454713 extend-filesystems[1472]: Found sr0 Nov 12 22:52:53.454713 extend-filesystems[1472]: Found vda Nov 12 22:52:53.454713 extend-filesystems[1472]: Found vda1 Nov 12 22:52:53.454713 extend-filesystems[1472]: Found vda2 Nov 12 22:52:53.454713 extend-filesystems[1472]: Found vda3 Nov 12 22:52:53.454713 extend-filesystems[1472]: Found usr Nov 12 22:52:53.454713 extend-filesystems[1472]: Found vda4 Nov 12 22:52:53.454713 extend-filesystems[1472]: Found vda6 Nov 12 22:52:53.454713 extend-filesystems[1472]: Found vda7 Nov 12 22:52:53.454713 extend-filesystems[1472]: Found vda9 Nov 12 22:52:53.454713 extend-filesystems[1472]: Checking size of /dev/vda9 Nov 12 22:52:53.454570 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 22:52:53.471440 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 22:52:53.473220 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 22:52:53.476909 dbus-daemon[1470]: [system] SELinux support is enabled Nov 12 22:52:53.477986 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 22:52:53.481701 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 22:52:53.486212 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 22:52:53.488221 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 22:52:53.491118 extend-filesystems[1472]: Resized partition /dev/vda9 Nov 12 22:52:53.492877 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 22:52:53.493115 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 22:52:53.493447 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 22:52:53.497587 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024) Nov 12 22:52:53.495122 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 22:52:53.498820 jq[1489]: true Nov 12 22:52:53.499218 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 22:52:53.500856 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 22:52:53.501071 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 12 22:52:53.513572 update_engine[1487]: I20241112 22:52:53.513494 1487 main.cc:92] Flatcar Update Engine starting Nov 12 22:52:53.516423 jq[1496]: true Nov 12 22:52:53.516687 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 22:52:53.517817 update_engine[1487]: I20241112 22:52:53.517789 1487 update_check_scheduler.cc:74] Next update check in 7m22s Nov 12 22:52:53.523551 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1381) Nov 12 22:52:53.524060 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 22:52:53.524141 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 22:52:53.525709 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 22:52:53.525727 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 22:52:53.528183 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 22:52:53.530609 systemd[1]: Started update-engine.service - Update Engine. Nov 12 22:52:53.541888 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 22:52:53.551880 tar[1495]: linux-amd64/helm Nov 12 22:52:53.599882 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 22:52:53.602604 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 22:52:53.673639 bash[1524]: Updated "/home/core/.ssh/authorized_keys" Nov 12 22:52:53.666668 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 22:52:53.676910 extend-filesystems[1493]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 22:52:53.676910 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 22:52:53.676910 extend-filesystems[1493]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 22:52:53.681125 extend-filesystems[1472]: Resized filesystem in /dev/vda9 Nov 12 22:52:53.679010 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 22:52:53.680186 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 22:52:53.680435 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 22:52:53.684845 systemd-logind[1485]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 22:52:53.684874 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 22:52:53.685234 systemd-logind[1485]: New seat seat0. Nov 12 22:52:53.685966 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 22:52:53.779547 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 22:52:53.782185 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 22:52:53.842756 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 22:52:53.855062 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Nov 12 22:52:53.858562 systemd[1]: Started sshd@0-10.0.0.140:22-10.0.0.1:49898.service - OpenSSH per-connection server daemon (10.0.0.1:49898). Nov 12 22:52:53.865245 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 22:52:53.865635 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 22:52:53.877304 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 22:52:53.919669 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 22:52:53.980145 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 22:52:53.986688 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 22:52:53.988270 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 22:52:54.013567 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 49898 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:52:54.015146 sshd-session[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:52:54.057665 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 22:52:54.069839 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 22:52:54.075246 systemd-logind[1485]: New session 1 of user core. Nov 12 22:52:54.122622 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 22:52:54.134727 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 22:52:54.143645 containerd[1497]: time="2024-11-12T22:52:54.143508457Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 12 22:52:54.147914 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 22:52:54.167085 containerd[1497]: time="2024-11-12T22:52:54.167050894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:52:54.171089 containerd[1497]: time="2024-11-12T22:52:54.169185699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:52:54.171089 containerd[1497]: time="2024-11-12T22:52:54.170955820Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 22:52:54.171089 containerd[1497]: time="2024-11-12T22:52:54.170993641Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 22:52:54.171275 containerd[1497]: time="2024-11-12T22:52:54.171209887Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 22:52:54.171275 containerd[1497]: time="2024-11-12T22:52:54.171233271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 22:52:54.171325 containerd[1497]: time="2024-11-12T22:52:54.171307550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:52:54.171352 containerd[1497]: time="2024-11-12T22:52:54.171324201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Nov 12 22:52:54.171728 containerd[1497]: time="2024-11-12T22:52:54.171621369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:52:54.171728 containerd[1497]: time="2024-11-12T22:52:54.171646967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 22:52:54.171728 containerd[1497]: time="2024-11-12T22:52:54.171663899Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:52:54.171728 containerd[1497]: time="2024-11-12T22:52:54.171672805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 22:52:54.171851 containerd[1497]: time="2024-11-12T22:52:54.171774136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:52:54.172068 containerd[1497]: time="2024-11-12T22:52:54.172031178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:52:54.172780 containerd[1497]: time="2024-11-12T22:52:54.172159328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:52:54.172780 containerd[1497]: time="2024-11-12T22:52:54.172175849Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 22:52:54.172780 containerd[1497]: time="2024-11-12T22:52:54.172276718Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 22:52:54.172780 containerd[1497]: time="2024-11-12T22:52:54.172335479Z" level=info msg="metadata content store policy set" policy=shared Nov 12 22:52:54.178215 containerd[1497]: time="2024-11-12T22:52:54.178190363Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 22:52:54.178254 containerd[1497]: time="2024-11-12T22:52:54.178246007Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 22:52:54.178274 containerd[1497]: time="2024-11-12T22:52:54.178261937Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 22:52:54.178293 containerd[1497]: time="2024-11-12T22:52:54.178282005Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 22:52:54.178313 containerd[1497]: time="2024-11-12T22:52:54.178298165Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 22:52:54.178453 containerd[1497]: time="2024-11-12T22:52:54.178432387Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 22:52:54.178716 containerd[1497]: time="2024-11-12T22:52:54.178694719Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 12 22:52:54.178836 containerd[1497]: time="2024-11-12T22:52:54.178815816Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 22:52:54.178860 containerd[1497]: time="2024-11-12T22:52:54.178836435Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 22:52:54.178860 containerd[1497]: time="2024-11-12T22:52:54.178851824Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 22:52:54.178896 containerd[1497]: time="2024-11-12T22:52:54.178865079Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 22:52:54.178896 containerd[1497]: time="2024-11-12T22:52:54.178878504Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 22:52:54.178896 containerd[1497]: time="2024-11-12T22:52:54.178891178Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 22:52:54.178966 containerd[1497]: time="2024-11-12T22:52:54.178905334Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 22:52:54.178966 containerd[1497]: time="2024-11-12T22:52:54.178921204Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 22:52:54.178966 containerd[1497]: time="2024-11-12T22:52:54.178936793Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 22:52:54.178966 containerd[1497]: time="2024-11-12T22:52:54.178948475Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 22:52:54.178966 containerd[1497]: time="2024-11-12T22:52:54.178959426Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 22:52:54.179056 containerd[1497]: time="2024-11-12T22:52:54.178998880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179056 containerd[1497]: time="2024-11-12T22:52:54.179013046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179056 containerd[1497]: time="2024-11-12T22:52:54.179027343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179056 containerd[1497]: time="2024-11-12T22:52:54.179039736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179139 containerd[1497]: time="2024-11-12T22:52:54.179065475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179139 containerd[1497]: time="2024-11-12T22:52:54.179079581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179139 containerd[1497]: time="2024-11-12T22:52:54.179091293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179139 containerd[1497]: time="2024-11-12T22:52:54.179104287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Nov 12 22:52:54.179139 containerd[1497]: time="2024-11-12T22:52:54.179117061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179139 containerd[1497]: time="2024-11-12T22:52:54.179130847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179251 containerd[1497]: time="2024-11-12T22:52:54.179143882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179251 containerd[1497]: time="2024-11-12T22:52:54.179156576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179251 containerd[1497]: time="2024-11-12T22:52:54.179170071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179251 containerd[1497]: time="2024-11-12T22:52:54.179183145Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 22:52:54.179251 containerd[1497]: time="2024-11-12T22:52:54.179205287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179251 containerd[1497]: time="2024-11-12T22:52:54.179220806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179251 containerd[1497]: time="2024-11-12T22:52:54.179231386Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 22:52:54.179374 containerd[1497]: time="2024-11-12T22:52:54.179276270Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 22:52:54.179374 containerd[1497]: time="2024-11-12T22:52:54.179290857Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 22:52:54.179374 containerd[1497]: time="2024-11-12T22:52:54.179299914Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 22:52:54.179374 containerd[1497]: time="2024-11-12T22:52:54.179311065Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 22:52:54.179374 containerd[1497]: time="2024-11-12T22:52:54.179321956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 22:52:54.179374 containerd[1497]: time="2024-11-12T22:52:54.179336974Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 22:52:54.179374 containerd[1497]: time="2024-11-12T22:52:54.179350259Z" level=info msg="NRI interface is disabled by configuration." Nov 12 22:52:54.179374 containerd[1497]: time="2024-11-12T22:52:54.179359586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 12 22:52:54.179922 containerd[1497]: time="2024-11-12T22:52:54.179653598Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 22:52:54.179922 containerd[1497]: time="2024-11-12T22:52:54.179699584Z" level=info msg="Connect containerd service" Nov 12 22:52:54.179922 containerd[1497]: time="2024-11-12T22:52:54.179740832Z" level=info msg="using legacy CRI server" Nov 12 22:52:54.179922 containerd[1497]: time="2024-11-12T22:52:54.179748997Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 22:52:54.179922 containerd[1497]: time="2024-11-12T22:52:54.179924967Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 22:52:54.180657 containerd[1497]: time="2024-11-12T22:52:54.180623848Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 22:52:54.180849 
containerd[1497]: time="2024-11-12T22:52:54.180785421Z" level=info msg="Start subscribing containerd event" Nov 12 22:52:54.180885 containerd[1497]: time="2024-11-12T22:52:54.180865882Z" level=info msg="Start recovering state" Nov 12 22:52:54.181025 containerd[1497]: time="2024-11-12T22:52:54.180984715Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 22:52:54.181086 containerd[1497]: time="2024-11-12T22:52:54.181040109Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 22:52:54.181116 containerd[1497]: time="2024-11-12T22:52:54.181092548Z" level=info msg="Start event monitor" Nov 12 22:52:54.181144 containerd[1497]: time="2024-11-12T22:52:54.181120069Z" level=info msg="Start snapshots syncer" Nov 12 22:52:54.181144 containerd[1497]: time="2024-11-12T22:52:54.181135608Z" level=info msg="Start cni network conf syncer for default" Nov 12 22:52:54.181190 containerd[1497]: time="2024-11-12T22:52:54.181146669Z" level=info msg="Start streaming server" Nov 12 22:52:54.181387 containerd[1497]: time="2024-11-12T22:52:54.181307912Z" level=info msg="containerd successfully booted in 0.041870s" Nov 12 22:52:54.238786 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 22:52:54.343275 tar[1495]: linux-amd64/LICENSE Nov 12 22:52:54.343275 tar[1495]: linux-amd64/README.md Nov 12 22:52:54.357306 systemd[1559]: Queued start job for default target default.target. Nov 12 22:52:54.447188 systemd[1559]: Created slice app.slice - User Application Slice. Nov 12 22:52:54.447218 systemd[1559]: Reached target paths.target - Paths. Nov 12 22:52:54.447232 systemd[1559]: Reached target timers.target - Timers. Nov 12 22:52:54.448949 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 22:52:54.453939 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 22:52:54.461211 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 22:52:54.461331 systemd[1559]: Reached target sockets.target - Sockets. Nov 12 22:52:54.461351 systemd[1559]: Reached target basic.target - Basic System. Nov 12 22:52:54.461387 systemd[1559]: Reached target default.target - Main User Target. Nov 12 22:52:54.461420 systemd[1559]: Startup finished in 302ms. Nov 12 22:52:54.462118 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 22:52:54.478744 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 22:52:54.541814 systemd[1]: Started sshd@1-10.0.0.140:22-10.0.0.1:49904.service - OpenSSH per-connection server daemon (10.0.0.1:49904). Nov 12 22:52:54.582043 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 49904 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:52:54.583640 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:52:54.587787 systemd-logind[1485]: New session 2 of user core. Nov 12 22:52:54.597676 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 22:52:54.652655 sshd[1578]: Connection closed by 10.0.0.1 port 49904 Nov 12 22:52:54.653022 sshd-session[1576]: pam_unix(sshd:session): session closed for user core Nov 12 22:52:54.665308 systemd[1]: sshd@1-10.0.0.140:22-10.0.0.1:49904.service: Deactivated successfully. Nov 12 22:52:54.666983 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 22:52:54.668509 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit. 
Nov 12 22:52:54.678761 systemd[1]: Started sshd@2-10.0.0.140:22-10.0.0.1:49912.service - OpenSSH per-connection server daemon (10.0.0.1:49912). Nov 12 22:52:54.681240 systemd-logind[1485]: Removed session 2. Nov 12 22:52:54.713338 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 49912 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:52:54.714676 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:52:54.718350 systemd-logind[1485]: New session 3 of user core. Nov 12 22:52:54.727646 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 22:52:54.781728 sshd[1585]: Connection closed by 10.0.0.1 port 49912 Nov 12 22:52:54.782026 sshd-session[1583]: pam_unix(sshd:session): session closed for user core Nov 12 22:52:54.785949 systemd[1]: sshd@2-10.0.0.140:22-10.0.0.1:49912.service: Deactivated successfully. Nov 12 22:52:54.787912 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 22:52:54.788486 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit. Nov 12 22:52:54.789319 systemd-logind[1485]: Removed session 3. Nov 12 22:52:54.938814 systemd-networkd[1430]: eth0: Gained IPv6LL Nov 12 22:52:54.942955 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 22:52:54.944895 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 22:52:54.958751 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 22:52:54.961280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:52:54.963511 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 22:52:54.983322 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 22:52:54.983688 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 22:52:54.985675 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 22:52:54.989603 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 22:52:56.190099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:52:56.192082 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 22:52:56.199682 systemd[1]: Startup finished in 764ms (kernel) + 5.502s (initrd) + 5.418s (userspace) = 11.686s. Nov 12 22:52:56.200706 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:52:56.834857 kubelet[1611]: E1112 22:52:56.834779 1611 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:52:56.839801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:52:56.840002 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 22:52:56.840363 systemd[1]: kubelet.service: Consumed 1.757s CPU time. Nov 12 22:53:04.793935 systemd[1]: Started sshd@3-10.0.0.140:22-10.0.0.1:39446.service - OpenSSH per-connection server daemon (10.0.0.1:39446). 
Nov 12 22:53:04.834657 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 39446 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:53:04.836377 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:53:04.840461 systemd-logind[1485]: New session 4 of user core. Nov 12 22:53:04.849687 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 22:53:04.903869 sshd[1627]: Connection closed by 10.0.0.1 port 39446 Nov 12 22:53:04.904284 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Nov 12 22:53:04.915404 systemd[1]: sshd@3-10.0.0.140:22-10.0.0.1:39446.service: Deactivated successfully. Nov 12 22:53:04.917263 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 22:53:04.918899 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit. Nov 12 22:53:04.920169 systemd[1]: Started sshd@4-10.0.0.140:22-10.0.0.1:39460.service - OpenSSH per-connection server daemon (10.0.0.1:39460). Nov 12 22:53:04.920847 systemd-logind[1485]: Removed session 4. Nov 12 22:53:04.972547 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 39460 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:53:04.974039 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:53:04.977870 systemd-logind[1485]: New session 5 of user core. Nov 12 22:53:04.987643 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 22:53:05.037466 sshd[1634]: Connection closed by 10.0.0.1 port 39460 Nov 12 22:53:05.037885 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Nov 12 22:53:05.048317 systemd[1]: sshd@4-10.0.0.140:22-10.0.0.1:39460.service: Deactivated successfully. Nov 12 22:53:05.050167 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 22:53:05.051973 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit. Nov 12 22:53:05.053193 systemd[1]: Started sshd@5-10.0.0.140:22-10.0.0.1:39474.service - OpenSSH per-connection server daemon (10.0.0.1:39474). Nov 12 22:53:05.053846 systemd-logind[1485]: Removed session 5. Nov 12 22:53:05.092688 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 39474 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:53:05.094115 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:53:05.097935 systemd-logind[1485]: New session 6 of user core. Nov 12 22:53:05.107652 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 22:53:05.161267 sshd[1641]: Connection closed by 10.0.0.1 port 39474 Nov 12 22:53:05.161628 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Nov 12 22:53:05.171443 systemd[1]: sshd@5-10.0.0.140:22-10.0.0.1:39474.service: Deactivated successfully. Nov 12 22:53:05.173151 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 22:53:05.174864 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. Nov 12 22:53:05.176132 systemd[1]: Started sshd@6-10.0.0.140:22-10.0.0.1:39486.service - OpenSSH per-connection server daemon (10.0.0.1:39486). Nov 12 22:53:05.176976 systemd-logind[1485]: Removed session 6. 
Nov 12 22:53:05.223016 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 39486 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:53:05.224397 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:53:05.228296 systemd-logind[1485]: New session 7 of user core. Nov 12 22:53:05.237650 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 22:53:05.295643 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 22:53:05.295983 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:53:05.313035 sudo[1649]: pam_unix(sudo:session): session closed for user root Nov 12 22:53:05.314628 sshd[1648]: Connection closed by 10.0.0.1 port 39486 Nov 12 22:53:05.315044 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Nov 12 22:53:05.340573 systemd[1]: sshd@6-10.0.0.140:22-10.0.0.1:39486.service: Deactivated successfully. Nov 12 22:53:05.342368 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 22:53:05.343816 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Nov 12 22:53:05.345194 systemd[1]: Started sshd@7-10.0.0.140:22-10.0.0.1:39496.service - OpenSSH per-connection server daemon (10.0.0.1:39496). Nov 12 22:53:05.345998 systemd-logind[1485]: Removed session 7. Nov 12 22:53:05.399079 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 39496 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:53:05.401121 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:53:05.405701 systemd-logind[1485]: New session 8 of user core. Nov 12 22:53:05.418722 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 22:53:05.473330 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 22:53:05.473707 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:53:05.477670 sudo[1658]: pam_unix(sudo:session): session closed for user root Nov 12 22:53:05.484163 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 12 22:53:05.484497 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:53:05.501953 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 12 22:53:05.534410 augenrules[1680]: No rules Nov 12 22:53:05.536379 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:53:05.536684 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 12 22:53:05.538051 sudo[1657]: pam_unix(sudo:session): session closed for user root Nov 12 22:53:05.539844 sshd[1656]: Connection closed by 10.0.0.1 port 39496 Nov 12 22:53:05.540265 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Nov 12 22:53:05.551630 systemd[1]: sshd@7-10.0.0.140:22-10.0.0.1:39496.service: Deactivated successfully. Nov 12 22:53:05.553437 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 22:53:05.554886 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Nov 12 22:53:05.566031 systemd[1]: Started sshd@8-10.0.0.140:22-10.0.0.1:39512.service - OpenSSH per-connection server daemon (10.0.0.1:39512). Nov 12 22:53:05.567259 systemd-logind[1485]: Removed session 8. 
Nov 12 22:53:05.601923 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 39512 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:53:05.603351 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:53:05.607704 systemd-logind[1485]: New session 9 of user core. Nov 12 22:53:05.617677 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 22:53:05.671247 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 22:53:05.671602 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:53:06.133747 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 22:53:06.133890 (dockerd)[1711]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 22:53:06.656387 dockerd[1711]: time="2024-11-12T22:53:06.656304856Z" level=info msg="Starting up" Nov 12 22:53:06.809982 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1993466171-merged.mount: Deactivated successfully. Nov 12 22:53:06.845200 dockerd[1711]: time="2024-11-12T22:53:06.845129815Z" level=info msg="Loading containers: start." Nov 12 22:53:06.886934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 22:53:06.895848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:53:07.177421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:53:07.182859 (kubelet)[1805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:53:07.257170 kubelet[1805]: E1112 22:53:07.257088 1805 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:53:07.264860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:53:07.265189 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 22:53:07.266555 kernel: Initializing XFRM netlink socket Nov 12 22:53:07.362953 systemd-networkd[1430]: docker0: Link UP Nov 12 22:53:07.412407 dockerd[1711]: time="2024-11-12T22:53:07.412335584Z" level=info msg="Loading containers: done." Nov 12 22:53:07.438654 dockerd[1711]: time="2024-11-12T22:53:07.438525799Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 22:53:07.438829 dockerd[1711]: time="2024-11-12T22:53:07.438681791Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Nov 12 22:53:07.438829 dockerd[1711]: time="2024-11-12T22:53:07.438826383Z" level=info msg="Daemon has completed initialization" Nov 12 22:53:07.479522 dockerd[1711]: time="2024-11-12T22:53:07.479271210Z" level=info msg="API listen on /run/docker.sock" Nov 12 22:53:07.479807 systemd[1]: Started docker.service - Docker Application Container Engine. 
Nov 12 22:53:07.807180 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck827225091-merged.mount: Deactivated successfully. Nov 12 22:53:08.218631 containerd[1497]: time="2024-11-12T22:53:08.218492131Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\"" Nov 12 22:53:09.006933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2428601236.mount: Deactivated successfully. Nov 12 22:53:10.215886 containerd[1497]: time="2024-11-12T22:53:10.215826293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:10.216579 containerd[1497]: time="2024-11-12T22:53:10.216544310Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.6: active requests=0, bytes read=32676443" Nov 12 22:53:10.217765 containerd[1497]: time="2024-11-12T22:53:10.217731367Z" level=info msg="ImageCreate event name:\"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:10.220315 containerd[1497]: time="2024-11-12T22:53:10.220275730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:10.221775 containerd[1497]: time="2024-11-12T22:53:10.221728456Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.6\" with image id \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\", size \"32673243\" in 2.003160431s" Nov 12 22:53:10.221826 containerd[1497]: time="2024-11-12T22:53:10.221776716Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\" returns image reference \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\"" Nov 12 22:53:10.244094 containerd[1497]: time="2024-11-12T22:53:10.244051165Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\"" Nov 12 22:53:13.087688 containerd[1497]: time="2024-11-12T22:53:13.087297568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:13.088768 containerd[1497]: time="2024-11-12T22:53:13.088718083Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.6: active requests=0, bytes read=29605796" Nov 12 22:53:13.090065 containerd[1497]: time="2024-11-12T22:53:13.089991171Z" level=info msg="ImageCreate event name:\"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:13.093652 containerd[1497]: time="2024-11-12T22:53:13.093611363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:13.094904 containerd[1497]: time="2024-11-12T22:53:13.094839256Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.6\" with image id \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.6\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\", size \"31051162\" in 2.850742917s" Nov 12 22:53:13.094904 containerd[1497]: time="2024-11-12T22:53:13.094883059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\" returns image reference \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\"" Nov 12 22:53:13.126648 containerd[1497]: time="2024-11-12T22:53:13.126288768Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\"" Nov 12 22:53:14.610810 containerd[1497]: time="2024-11-12T22:53:14.610756766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:14.611636 containerd[1497]: time="2024-11-12T22:53:14.611587575Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.6: active requests=0, bytes read=17784244" Nov 12 22:53:14.612813 containerd[1497]: time="2024-11-12T22:53:14.612773009Z" level=info msg="ImageCreate event name:\"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:14.615926 containerd[1497]: time="2024-11-12T22:53:14.615898693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:14.617341 containerd[1497]: time="2024-11-12T22:53:14.617260808Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.6\" with image id \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\", size \"19229628\" in 1.490903822s" Nov 12 22:53:14.617410 containerd[1497]: time="2024-11-12T22:53:14.617342461Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\" returns image reference \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\"" Nov 12 22:53:14.643632 containerd[1497]: time="2024-11-12T22:53:14.643588831Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\"" Nov 12 22:53:16.499355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201080704.mount: Deactivated successfully. 
Nov 12 22:53:17.018644 containerd[1497]: time="2024-11-12T22:53:17.018513439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:17.024738 containerd[1497]: time="2024-11-12T22:53:17.024696459Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.6: active requests=0, bytes read=29054624" Nov 12 22:53:17.039569 containerd[1497]: time="2024-11-12T22:53:17.039505752Z" level=info msg="ImageCreate event name:\"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:17.049346 containerd[1497]: time="2024-11-12T22:53:17.049271313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:17.050077 containerd[1497]: time="2024-11-12T22:53:17.050031409Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.6\" with image id \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\", repo tag \"registry.k8s.io/kube-proxy:v1.30.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\", size \"29053643\" in 2.406398175s" Nov 12 22:53:17.050077 containerd[1497]: time="2024-11-12T22:53:17.050060053Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\" returns image reference \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\"" Nov 12 22:53:17.107495 containerd[1497]: time="2024-11-12T22:53:17.107429693Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 22:53:17.387035 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 22:53:17.401780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:53:17.631669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:53:17.637103 (kubelet)[2036]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:53:17.706054 kubelet[2036]: E1112 22:53:17.705980 2036 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:53:17.711752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:53:17.711993 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 22:53:18.319343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4066162664.mount: Deactivated successfully. 
Nov 12 22:53:19.693985 containerd[1497]: time="2024-11-12T22:53:19.693903993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:19.694938 containerd[1497]: time="2024-11-12T22:53:19.694844337Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 22:53:19.697555 containerd[1497]: time="2024-11-12T22:53:19.697501182Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:19.743834 containerd[1497]: time="2024-11-12T22:53:19.743769454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:19.744761 containerd[1497]: time="2024-11-12T22:53:19.744725077Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.637228008s" Nov 12 22:53:19.744761 containerd[1497]: time="2024-11-12T22:53:19.744757378Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 22:53:19.767453 containerd[1497]: time="2024-11-12T22:53:19.767400197Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 22:53:20.389988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2233150415.mount: Deactivated successfully. 
Nov 12 22:53:20.395586 containerd[1497]: time="2024-11-12T22:53:20.395514855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:20.396274 containerd[1497]: time="2024-11-12T22:53:20.396206933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 12 22:53:20.397399 containerd[1497]: time="2024-11-12T22:53:20.397345169Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:20.400066 containerd[1497]: time="2024-11-12T22:53:20.400028834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:20.400784 containerd[1497]: time="2024-11-12T22:53:20.400757561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 633.315506ms" Nov 12 22:53:20.400851 containerd[1497]: time="2024-11-12T22:53:20.400789260Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 22:53:20.429587 containerd[1497]: time="2024-11-12T22:53:20.429489484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Nov 12 22:53:21.087616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1509627250.mount: Deactivated successfully. Nov 12 22:53:23.715567 containerd[1497]: time="2024-11-12T22:53:23.715490637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:23.716300 containerd[1497]: time="2024-11-12T22:53:23.716210978Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Nov 12 22:53:23.717590 containerd[1497]: time="2024-11-12T22:53:23.717560700Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:23.723386 containerd[1497]: time="2024-11-12T22:53:23.723339552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:23.724852 containerd[1497]: time="2024-11-12T22:53:23.724797587Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.295238753s" Nov 12 22:53:23.724901 containerd[1497]: time="2024-11-12T22:53:23.724857780Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Nov 12 22:53:26.186492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 22:53:26.202850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:53:26.225254 systemd[1]: Reloading requested from client PID 2232 ('systemctl') (unit session-9.scope)... Nov 12 22:53:26.225268 systemd[1]: Reloading... Nov 12 22:53:26.294559 zram_generator::config[2270]: No configuration found. Nov 12 22:53:26.553583 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:53:26.634558 systemd[1]: Reloading finished in 408 ms. Nov 12 22:53:26.689346 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 22:53:26.689438 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 22:53:26.689850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:53:26.692681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:53:26.852436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:53:26.857952 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:53:26.901226 kubelet[2320]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:53:26.901226 kubelet[2320]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 22:53:26.901226 kubelet[2320]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:53:26.902201 kubelet[2320]: I1112 22:53:26.902163 2320 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 22:53:27.307454 kubelet[2320]: I1112 22:53:27.307404 2320 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 12 22:53:27.307454 kubelet[2320]: I1112 22:53:27.307436 2320 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 22:53:27.307705 kubelet[2320]: I1112 22:53:27.307681 2320 server.go:927] "Client rotation is on, will bootstrap in background" Nov 12 22:53:27.322677 kubelet[2320]: E1112 22:53:27.322624 2320 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:27.323202 kubelet[2320]: I1112 22:53:27.323167 2320 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:53:27.336801 kubelet[2320]: I1112 22:53:27.336750 2320 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 22:53:27.338209 kubelet[2320]: I1112 22:53:27.338145 2320 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 22:53:27.338472 kubelet[2320]: I1112 22:53:27.338196 2320 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 22:53:27.338964 kubelet[2320]: I1112 22:53:27.338936 2320 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 22:53:27.338964 kubelet[2320]: I1112 22:53:27.338962 2320 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 22:53:27.339192 kubelet[2320]: I1112 22:53:27.339169 2320 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:53:27.339967 kubelet[2320]: I1112 22:53:27.339939 2320 kubelet.go:400] "Attempting to sync node with API server" Nov 12 22:53:27.339967 kubelet[2320]: I1112 22:53:27.339962 2320 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 22:53:27.340032 kubelet[2320]: I1112 22:53:27.340000 2320 kubelet.go:312] "Adding apiserver pod source" Nov 12 22:53:27.340032 kubelet[2320]: I1112 22:53:27.340030 2320 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 22:53:27.344116 kubelet[2320]: W1112 22:53:27.343983 2320 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:27.344116 kubelet[2320]: W1112 22:53:27.344003 2320 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:27.344116 kubelet[2320]: E1112 22:53:27.344067 2320 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:27.344116 kubelet[2320]: E1112 22:53:27.344075 2320 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:27.345415 kubelet[2320]: I1112 22:53:27.345378 2320 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 12 22:53:27.350021 kubelet[2320]: I1112 22:53:27.347746 2320 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 22:53:27.350021 kubelet[2320]: W1112 22:53:27.347913 2320 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 22:53:27.350021 kubelet[2320]: I1112 22:53:27.348987 2320 server.go:1264] "Started kubelet" Nov 12 22:53:27.351249 kubelet[2320]: I1112 22:53:27.351196 2320 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 22:53:27.351402 kubelet[2320]: I1112 22:53:27.351338 2320 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 22:53:27.351835 kubelet[2320]: I1112 22:53:27.351809 2320 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 22:53:27.353867 kubelet[2320]: I1112 22:53:27.353842 2320 server.go:455] "Adding debug handlers to kubelet server" Nov 12 22:53:27.356188 kubelet[2320]: I1112 22:53:27.356128 2320 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 22:53:27.356379 kubelet[2320]: I1112 22:53:27.356351 2320 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 22:53:27.357575 kubelet[2320]: I1112 22:53:27.356444 2320 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 12 22:53:27.357575 kubelet[2320]: I1112 22:53:27.356559 2320 reconciler.go:26] "Reconciler: start to sync state" Nov 12 22:53:27.357575 kubelet[2320]: E1112 22:53:27.357223 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="200ms" Nov 12 22:53:27.357575 kubelet[2320]: W1112 22:53:27.357292 2320 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:27.357575 kubelet[2320]: E1112 22:53:27.357331 2320 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:27.357575 kubelet[2320]: E1112 22:53:27.357340 2320 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.140:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.140:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18075a6d53abbf49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:53:27.348944713 +0000 UTC m=+0.485888786,LastTimestamp:2024-11-12 22:53:27.348944713 +0000 UTC m=+0.485888786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 22:53:27.359027 kubelet[2320]: I1112 22:53:27.358999 2320 factory.go:221] Registration of the systemd container factory successfully Nov 12 22:53:27.359353 kubelet[2320]: I1112 22:53:27.359292 2320 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 22:53:27.359751 kubelet[2320]: E1112 22:53:27.359653 2320 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 22:53:27.360674 kubelet[2320]: I1112 22:53:27.360639 2320 factory.go:221] Registration of the containerd container factory successfully Nov 12 22:53:27.378196 kubelet[2320]: I1112 22:53:27.378107 2320 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 22:53:27.380593 kubelet[2320]: I1112 22:53:27.380102 2320 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 22:53:27.380593 kubelet[2320]: I1112 22:53:27.380149 2320 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:53:27.380593 kubelet[2320]: I1112 22:53:27.380175 2320 kubelet.go:2337] "Starting kubelet main sync loop" Nov 12 22:53:27.380593 kubelet[2320]: E1112 22:53:27.380226 2320 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:53:27.381095 kubelet[2320]: W1112 22:53:27.380893 2320 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:27.381095 kubelet[2320]: E1112 22:53:27.380939 2320 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:27.382234 kubelet[2320]: I1112 22:53:27.381901 2320 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 22:53:27.382234 kubelet[2320]: I1112 22:53:27.381924 2320 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 22:53:27.382234 kubelet[2320]: I1112 22:53:27.381952 2320 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:53:27.458462 kubelet[2320]: I1112 22:53:27.458384 2320 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:53:27.458936 kubelet[2320]: E1112 22:53:27.458897 2320 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Nov 12 22:53:27.481210 kubelet[2320]: E1112 22:53:27.481133 2320 kubelet.go:2361] "Skipping pod synchronization" 
err="container runtime status check may not have completed yet" Nov 12 22:53:27.558039 kubelet[2320]: E1112 22:53:27.557899 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="400ms" Nov 12 22:53:27.660657 kubelet[2320]: I1112 22:53:27.660606 2320 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:53:27.661002 kubelet[2320]: E1112 22:53:27.660964 2320 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Nov 12 22:53:27.682213 kubelet[2320]: E1112 22:53:27.682149 2320 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 22:53:27.959457 kubelet[2320]: E1112 22:53:27.959380 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="800ms" Nov 12 22:53:28.063339 kubelet[2320]: I1112 22:53:28.063287 2320 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:53:28.063759 kubelet[2320]: E1112 22:53:28.063720 2320 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Nov 12 22:53:28.082889 kubelet[2320]: E1112 22:53:28.082842 2320 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 22:53:28.361432 kubelet[2320]: I1112 22:53:28.361237 2320 policy_none.go:49] "None policy: Start" Nov 12 22:53:28.362195 kubelet[2320]: I1112 22:53:28.362170 2320 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:53:28.362259 kubelet[2320]: I1112 22:53:28.362202 2320 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:53:28.372049 kubelet[2320]: W1112 22:53:28.371990 2320 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:28.372049 kubelet[2320]: E1112 22:53:28.372044 2320 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:28.390477 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 22:53:28.408245 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 12 22:53:28.421133 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 12 22:53:28.422922 kubelet[2320]: I1112 22:53:28.422879 2320 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 22:53:28.423153 kubelet[2320]: I1112 22:53:28.423106 2320 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 22:53:28.423357 kubelet[2320]: I1112 22:53:28.423238 2320 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 22:53:28.424294 kubelet[2320]: E1112 22:53:28.424265 2320 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 22:53:28.512177 kubelet[2320]: W1112 22:53:28.512080 2320 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:28.512177 kubelet[2320]: E1112 22:53:28.512168 2320 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:28.686428 kubelet[2320]: W1112 22:53:28.686371 2320 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:28.686428 kubelet[2320]: E1112 22:53:28.686435 2320 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:28.752935 kubelet[2320]: W1112 22:53:28.752911 2320 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:28.752935 kubelet[2320]: E1112 22:53:28.752939 2320 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:28.760547 kubelet[2320]: E1112 22:53:28.760484 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="1.6s" Nov 12 22:53:28.865740 kubelet[2320]: I1112 22:53:28.865707 2320 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:53:28.866150 kubelet[2320]: E1112 22:53:28.866104 2320 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Nov 12 22:53:28.883311 kubelet[2320]: I1112 22:53:28.883225 2320 topology_manager.go:215] "Topology Admit Handler" podUID="d018faa635bf467efb29e1a109618fa6" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 22:53:28.884554 kubelet[2320]: I1112 
22:53:28.884506 2320 topology_manager.go:215] "Topology Admit Handler" podUID="35a50a3f0f14abbdd3fae477f39e6e18" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 22:53:28.885627 kubelet[2320]: I1112 22:53:28.885579 2320 topology_manager.go:215] "Topology Admit Handler" podUID="c95384ce7f39fb5cff38cd36dacf8a69" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 22:53:28.891252 systemd[1]: Created slice kubepods-burstable-podd018faa635bf467efb29e1a109618fa6.slice - libcontainer container kubepods-burstable-podd018faa635bf467efb29e1a109618fa6.slice. Nov 12 22:53:28.912913 systemd[1]: Created slice kubepods-burstable-pod35a50a3f0f14abbdd3fae477f39e6e18.slice - libcontainer container kubepods-burstable-pod35a50a3f0f14abbdd3fae477f39e6e18.slice. Nov 12 22:53:28.917939 systemd[1]: Created slice kubepods-burstable-podc95384ce7f39fb5cff38cd36dacf8a69.slice - libcontainer container kubepods-burstable-podc95384ce7f39fb5cff38cd36dacf8a69.slice. Nov 12 22:53:28.965046 kubelet[2320]: I1112 22:53:28.964880 2320 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d018faa635bf467efb29e1a109618fa6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d018faa635bf467efb29e1a109618fa6\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:53:28.965046 kubelet[2320]: I1112 22:53:28.964923 2320 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:53:28.965046 kubelet[2320]: I1112 22:53:28.964941 2320 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:53:28.965046 kubelet[2320]: I1112 22:53:28.964958 2320 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c95384ce7f39fb5cff38cd36dacf8a69-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c95384ce7f39fb5cff38cd36dacf8a69\") " pod="kube-system/kube-scheduler-localhost" Nov 12 22:53:28.965046 kubelet[2320]: I1112 22:53:28.965010 2320 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d018faa635bf467efb29e1a109618fa6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d018faa635bf467efb29e1a109618fa6\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:53:28.965811 kubelet[2320]: I1112 22:53:28.965027 2320 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d018faa635bf467efb29e1a109618fa6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d018faa635bf467efb29e1a109618fa6\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:53:28.965811 kubelet[2320]: I1112 22:53:28.965045 2320 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:53:28.965811 kubelet[2320]: I1112 22:53:28.965067 2320 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:53:28.965811 kubelet[2320]: I1112 22:53:28.965082 2320 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:53:29.211127 kubelet[2320]: E1112 22:53:29.211083 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:29.211922 containerd[1497]: time="2024-11-12T22:53:29.211883611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d018faa635bf467efb29e1a109618fa6,Namespace:kube-system,Attempt:0,}" Nov 12 22:53:29.216118 kubelet[2320]: E1112 22:53:29.215999 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:29.216396 containerd[1497]: time="2024-11-12T22:53:29.216330080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:35a50a3f0f14abbdd3fae477f39e6e18,Namespace:kube-system,Attempt:0,}" Nov 12 22:53:29.220843 kubelet[2320]: E1112 22:53:29.220816 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:29.221150 containerd[1497]: time="2024-11-12T22:53:29.221126727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c95384ce7f39fb5cff38cd36dacf8a69,Namespace:kube-system,Attempt:0,}" Nov 12 22:53:29.348430 kubelet[2320]: E1112 22:53:29.348366 2320 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:29.789338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1462665551.mount: Deactivated successfully. 
Nov 12 22:53:29.795961 containerd[1497]: time="2024-11-12T22:53:29.795918311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:53:29.798581 containerd[1497]: time="2024-11-12T22:53:29.798527644Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 22:53:29.799474 containerd[1497]: time="2024-11-12T22:53:29.799443496Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:53:29.801351 containerd[1497]: time="2024-11-12T22:53:29.801310278Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:53:29.802066 containerd[1497]: time="2024-11-12T22:53:29.802026980Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 22:53:29.802905 containerd[1497]: time="2024-11-12T22:53:29.802860134Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:53:29.803722 containerd[1497]: time="2024-11-12T22:53:29.803679301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 22:53:29.804691 containerd[1497]: time="2024-11-12T22:53:29.804659217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:53:29.805836 containerd[1497]: time="2024-11-12T22:53:29.805799057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 589.399715ms" Nov 12 22:53:29.807205 containerd[1497]: time="2024-11-12T22:53:29.807167456Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 585.981565ms" Nov 12 22:53:29.811318 containerd[1497]: time="2024-11-12T22:53:29.811265998Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 599.269011ms" Nov 12 22:53:30.037545 containerd[1497]: time="2024-11-12T22:53:30.037258395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:30.037545 containerd[1497]: time="2024-11-12T22:53:30.037313831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:30.037545 containerd[1497]: time="2024-11-12T22:53:30.037329191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:30.037545 containerd[1497]: time="2024-11-12T22:53:30.037405907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:30.037545 containerd[1497]: time="2024-11-12T22:53:30.037230301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:30.037545 containerd[1497]: time="2024-11-12T22:53:30.037281189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:30.037545 containerd[1497]: time="2024-11-12T22:53:30.037290947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:30.037545 containerd[1497]: time="2024-11-12T22:53:30.037376400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:30.039758 containerd[1497]: time="2024-11-12T22:53:30.036968481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:30.039758 containerd[1497]: time="2024-11-12T22:53:30.039326818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:30.039758 containerd[1497]: time="2024-11-12T22:53:30.039339122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:30.039758 containerd[1497]: time="2024-11-12T22:53:30.039414947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:30.068836 systemd[1]: Started cri-containerd-cec39591ee50ce202285cdf6b69b00205c550c58bc67e038e043399ee782040d.scope - libcontainer container cec39591ee50ce202285cdf6b69b00205c550c58bc67e038e043399ee782040d. Nov 12 22:53:30.073975 systemd[1]: Started cri-containerd-17dcc6fa29ce0e30d20614d9319f1f87ddd4b8fd4caa0fe728bab093617d47b3.scope - libcontainer container 17dcc6fa29ce0e30d20614d9319f1f87ddd4b8fd4caa0fe728bab093617d47b3. Nov 12 22:53:30.097276 systemd[1]: Started cri-containerd-c4d6c111379d9652a166f0955f4200bd4a6d9d67b42273d1421d151a854c6018.scope - libcontainer container c4d6c111379d9652a166f0955f4200bd4a6d9d67b42273d1421d151a854c6018. 
Nov 12 22:53:30.112946 kubelet[2320]: W1112 22:53:30.112902 2320 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:30.112946 kubelet[2320]: E1112 22:53:30.112949 2320 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Nov 12 22:53:30.150324 containerd[1497]: time="2024-11-12T22:53:30.149806948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d018faa635bf467efb29e1a109618fa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4d6c111379d9652a166f0955f4200bd4a6d9d67b42273d1421d151a854c6018\"" Nov 12 22:53:30.152207 kubelet[2320]: E1112 22:53:30.152181 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:30.152458 containerd[1497]: time="2024-11-12T22:53:30.152416876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:35a50a3f0f14abbdd3fae477f39e6e18,Namespace:kube-system,Attempt:0,} returns sandbox id \"cec39591ee50ce202285cdf6b69b00205c550c58bc67e038e043399ee782040d\"" Nov 12 22:53:30.154489 kubelet[2320]: E1112 22:53:30.154468 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:30.154721 containerd[1497]: time="2024-11-12T22:53:30.154692485Z" level=info msg="CreateContainer within sandbox \"c4d6c111379d9652a166f0955f4200bd4a6d9d67b42273d1421d151a854c6018\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 22:53:30.156459 containerd[1497]: time="2024-11-12T22:53:30.156428673Z" level=info msg="CreateContainer within sandbox \"cec39591ee50ce202285cdf6b69b00205c550c58bc67e038e043399ee782040d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 22:53:30.160555 containerd[1497]: time="2024-11-12T22:53:30.160514391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c95384ce7f39fb5cff38cd36dacf8a69,Namespace:kube-system,Attempt:0,} returns sandbox id \"17dcc6fa29ce0e30d20614d9319f1f87ddd4b8fd4caa0fe728bab093617d47b3\"" Nov 12 22:53:30.161122 kubelet[2320]: E1112 22:53:30.161098 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:30.162934 containerd[1497]: time="2024-11-12T22:53:30.162907965Z" level=info msg="CreateContainer within sandbox \"17dcc6fa29ce0e30d20614d9319f1f87ddd4b8fd4caa0fe728bab093617d47b3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 22:53:30.179960 containerd[1497]: time="2024-11-12T22:53:30.179902725Z" level=info msg="CreateContainer within sandbox \"c4d6c111379d9652a166f0955f4200bd4a6d9d67b42273d1421d151a854c6018\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"28c2ae91fd5e2574680980e0c6aa7ef78237e0948a401fb837378064252fcb6d\"" Nov 12 22:53:30.180898 containerd[1497]: time="2024-11-12T22:53:30.180512359Z" 
level=info msg="StartContainer for \"28c2ae91fd5e2574680980e0c6aa7ef78237e0948a401fb837378064252fcb6d\"" Nov 12 22:53:30.190082 containerd[1497]: time="2024-11-12T22:53:30.190033887Z" level=info msg="CreateContainer within sandbox \"17dcc6fa29ce0e30d20614d9319f1f87ddd4b8fd4caa0fe728bab093617d47b3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c60a52bdba3bf56b5fd3553b6947c55e362a1bf7607489d329fa6ed75aa49321\"" Nov 12 22:53:30.190527 containerd[1497]: time="2024-11-12T22:53:30.190506309Z" level=info msg="StartContainer for \"c60a52bdba3bf56b5fd3553b6947c55e362a1bf7607489d329fa6ed75aa49321\"" Nov 12 22:53:30.191733 containerd[1497]: time="2024-11-12T22:53:30.191704609Z" level=info msg="CreateContainer within sandbox \"cec39591ee50ce202285cdf6b69b00205c550c58bc67e038e043399ee782040d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2e1bf3a862d74a3995958fefe7db35713b85d369e6cc8f0869779c603bd7c1fc\"" Nov 12 22:53:30.192883 containerd[1497]: time="2024-11-12T22:53:30.192839829Z" level=info msg="StartContainer for \"2e1bf3a862d74a3995958fefe7db35713b85d369e6cc8f0869779c603bd7c1fc\"" Nov 12 22:53:30.207896 systemd[1]: Started cri-containerd-28c2ae91fd5e2574680980e0c6aa7ef78237e0948a401fb837378064252fcb6d.scope - libcontainer container 28c2ae91fd5e2574680980e0c6aa7ef78237e0948a401fb837378064252fcb6d. Nov 12 22:53:30.232339 systemd[1]: Started cri-containerd-2e1bf3a862d74a3995958fefe7db35713b85d369e6cc8f0869779c603bd7c1fc.scope - libcontainer container 2e1bf3a862d74a3995958fefe7db35713b85d369e6cc8f0869779c603bd7c1fc. Nov 12 22:53:30.289689 systemd[1]: Started cri-containerd-c60a52bdba3bf56b5fd3553b6947c55e362a1bf7607489d329fa6ed75aa49321.scope - libcontainer container c60a52bdba3bf56b5fd3553b6947c55e362a1bf7607489d329fa6ed75aa49321. 
Nov 12 22:53:30.326513 containerd[1497]: time="2024-11-12T22:53:30.325665093Z" level=info msg="StartContainer for \"28c2ae91fd5e2574680980e0c6aa7ef78237e0948a401fb837378064252fcb6d\" returns successfully" Nov 12 22:53:30.332977 containerd[1497]: time="2024-11-12T22:53:30.332920539Z" level=info msg="StartContainer for \"2e1bf3a862d74a3995958fefe7db35713b85d369e6cc8f0869779c603bd7c1fc\" returns successfully" Nov 12 22:53:30.362170 kubelet[2320]: E1112 22:53:30.362013 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="3.2s" Nov 12 22:53:30.367673 containerd[1497]: time="2024-11-12T22:53:30.367614593Z" level=info msg="StartContainer for \"c60a52bdba3bf56b5fd3553b6947c55e362a1bf7607489d329fa6ed75aa49321\" returns successfully" Nov 12 22:53:30.393017 kubelet[2320]: E1112 22:53:30.392779 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:30.396718 kubelet[2320]: E1112 22:53:30.396679 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:30.402549 kubelet[2320]: E1112 22:53:30.401387 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:30.468722 kubelet[2320]: I1112 22:53:30.468664 2320 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:53:31.403057 kubelet[2320]: E1112 22:53:31.403010 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:31.836502 kubelet[2320]: I1112 22:53:31.835388 2320 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 22:53:32.345074 kubelet[2320]: I1112 22:53:32.345013 2320 apiserver.go:52] "Watching apiserver" Nov 12 22:53:32.357581 kubelet[2320]: I1112 22:53:32.357561 2320 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Nov 12 22:53:32.411464 kubelet[2320]: E1112 22:53:32.411410 2320 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 12 22:53:32.411970 kubelet[2320]: E1112 22:53:32.411920 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:32.605753 kubelet[2320]: E1112 22:53:32.605634 2320 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 12 22:53:32.605891 kubelet[2320]: E1112 22:53:32.605884 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:34.295197 systemd[1]: Reloading requested from client PID 2606 ('systemctl') (unit 
session-9.scope)... Nov 12 22:53:34.295212 systemd[1]: Reloading... Nov 12 22:53:34.382566 zram_generator::config[2648]: No configuration found. Nov 12 22:53:34.495952 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:53:34.595595 systemd[1]: Reloading finished in 299 ms. Nov 12 22:53:34.649141 kubelet[2320]: E1112 22:53:34.648992 2320 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.18075a6d53abbf49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:53:27.348944713 +0000 UTC m=+0.485888786,LastTimestamp:2024-11-12 22:53:27.348944713 +0000 UTC m=+0.485888786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 22:53:34.649185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:53:34.662150 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 22:53:34.662485 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:53:34.662565 systemd[1]: kubelet.service: Consumed 1.079s CPU time, 121.5M memory peak, 0B memory swap peak. Nov 12 22:53:34.677837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:53:34.832508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:53:34.849065 (kubelet)[2690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:53:34.895216 kubelet[2690]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:53:34.895216 kubelet[2690]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 22:53:34.895216 kubelet[2690]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:53:34.895649 kubelet[2690]: I1112 22:53:34.895273 2690 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 22:53:34.902096 kubelet[2690]: I1112 22:53:34.902049 2690 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 12 22:53:34.902096 kubelet[2690]: I1112 22:53:34.902103 2690 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 22:53:34.902743 kubelet[2690]: I1112 22:53:34.902718 2690 server.go:927] "Client rotation is on, will bootstrap in background" Nov 12 22:53:34.904231 kubelet[2690]: I1112 22:53:34.904212 2690 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 12 22:53:34.905339 kubelet[2690]: I1112 22:53:34.905290 2690 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:53:34.912262 kubelet[2690]: I1112 22:53:34.912228 2690 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 22:53:34.912602 kubelet[2690]: I1112 22:53:34.912492 2690 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 22:53:34.912706 kubelet[2690]: I1112 22:53:34.912546 2690 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 22:53:34.912812 kubelet[2690]: I1112 22:53:34.912724 2690 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 22:53:34.912812 kubelet[2690]: I1112 22:53:34.912735 2690 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 22:53:34.912812 kubelet[2690]: I1112 22:53:34.912781 2690 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:53:34.912882 kubelet[2690]: I1112 22:53:34.912866 2690 kubelet.go:400] "Attempting to sync node with API server" Nov 12 22:53:34.912882 kubelet[2690]: I1112 22:53:34.912877 2690 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 22:53:34.912936 kubelet[2690]: I1112 22:53:34.912897 2690 kubelet.go:312] "Adding apiserver pod source" Nov 12 22:53:34.912936 kubelet[2690]: I1112 22:53:34.912912 2690 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 22:53:34.915672 kubelet[2690]: I1112 22:53:34.914720 2690 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 12 22:53:34.915672 kubelet[2690]: I1112 22:53:34.914954 2690 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 22:53:34.915672 kubelet[2690]: I1112 22:53:34.915324 2690 server.go:1264] "Started kubelet" Nov 12 22:53:34.917319 kubelet[2690]: I1112 
22:53:34.917009 2690 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 22:53:34.918261 kubelet[2690]: I1112 22:53:34.918234 2690 server.go:455] "Adding debug handlers to kubelet server" Nov 12 22:53:34.921547 kubelet[2690]: I1112 22:53:34.920096 2690 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 22:53:34.921547 kubelet[2690]: I1112 22:53:34.920410 2690 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 22:53:34.923500 kubelet[2690]: I1112 22:53:34.922678 2690 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 22:53:34.924056 kubelet[2690]: E1112 22:53:34.924038 2690 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 22:53:34.927016 kubelet[2690]: I1112 22:53:34.926985 2690 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 22:53:34.927251 kubelet[2690]: I1112 22:53:34.927228 2690 reconciler.go:26] "Reconciler: start to sync state" Nov 12 22:53:34.928078 kubelet[2690]: I1112 22:53:34.928047 2690 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 12 22:53:34.932823 kubelet[2690]: I1112 22:53:34.932784 2690 factory.go:221] Registration of the containerd container factory successfully Nov 12 22:53:34.932880 kubelet[2690]: I1112 22:53:34.932872 2690 factory.go:221] Registration of the systemd container factory successfully Nov 12 22:53:34.933051 kubelet[2690]: I1112 22:53:34.933030 2690 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 22:53:34.936445 kubelet[2690]: I1112 22:53:34.936401 2690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 22:53:34.937549 kubelet[2690]: I1112 22:53:34.937509 2690 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 22:53:34.937599 kubelet[2690]: I1112 22:53:34.937565 2690 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:53:34.937599 kubelet[2690]: I1112 22:53:34.937584 2690 kubelet.go:2337] "Starting kubelet main sync loop" Nov 12 22:53:34.937659 kubelet[2690]: E1112 22:53:34.937625 2690 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:53:34.967982 kubelet[2690]: I1112 22:53:34.967956 2690 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 22:53:34.967982 kubelet[2690]: I1112 22:53:34.967975 2690 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 22:53:34.968106 kubelet[2690]: I1112 22:53:34.967992 2690 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:53:34.968141 kubelet[2690]: I1112 22:53:34.968123 2690 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 22:53:34.968163 kubelet[2690]: I1112 22:53:34.968138 2690 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 22:53:34.968163 kubelet[2690]: I1112 22:53:34.968155 2690 policy_none.go:49] "None policy: Start" Nov 12 22:53:34.968702 kubelet[2690]: I1112 22:53:34.968684 2690 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:53:34.968741 kubelet[2690]: I1112 22:53:34.968705 2690 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:53:34.968822 kubelet[2690]: I1112 22:53:34.968808 2690 state_mem.go:75] "Updated machine memory state" Nov 12 22:53:34.972950 kubelet[2690]: I1112 22:53:34.972924 2690 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 22:53:34.973245 kubelet[2690]: I1112 22:53:34.973076 2690 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 22:53:34.977637 kubelet[2690]: I1112 22:53:34.977604 2690 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 22:53:35.038251 kubelet[2690]: I1112 22:53:35.038181 2690 topology_manager.go:215] "Topology Admit Handler" podUID="d018faa635bf467efb29e1a109618fa6" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 22:53:35.038385 kubelet[2690]: I1112 22:53:35.038306 2690 topology_manager.go:215] "Topology Admit Handler" podUID="35a50a3f0f14abbdd3fae477f39e6e18" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 22:53:35.038385 kubelet[2690]: I1112 22:53:35.038376 2690 topology_manager.go:215] "Topology Admit Handler" podUID="c95384ce7f39fb5cff38cd36dacf8a69" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 22:53:35.081747 kubelet[2690]: I1112 22:53:35.081714 2690 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:53:35.088467 kubelet[2690]: I1112 22:53:35.088434 2690 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 22:53:35.088524 kubelet[2690]: I1112 22:53:35.088507 2690 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 22:53:35.228996 kubelet[2690]: I1112 22:53:35.228969 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 
22:53:35.229106 kubelet[2690]: I1112 22:53:35.228998 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:53:35.229106 kubelet[2690]: I1112 22:53:35.229022 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:53:35.229106 kubelet[2690]: I1112 22:53:35.229036 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:53:35.229106 kubelet[2690]: I1112 22:53:35.229052 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d018faa635bf467efb29e1a109618fa6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d018faa635bf467efb29e1a109618fa6\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:53:35.229106 kubelet[2690]: I1112 22:53:35.229082 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d018faa635bf467efb29e1a109618fa6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d018faa635bf467efb29e1a109618fa6\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:53:35.229246 kubelet[2690]: I1112 22:53:35.229096 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:53:35.229246 kubelet[2690]: I1112 22:53:35.229110 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c95384ce7f39fb5cff38cd36dacf8a69-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c95384ce7f39fb5cff38cd36dacf8a69\") " pod="kube-system/kube-scheduler-localhost" Nov 12 22:53:35.229246 kubelet[2690]: I1112 22:53:35.229125 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d018faa635bf467efb29e1a109618fa6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d018faa635bf467efb29e1a109618fa6\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:53:35.292228 sudo[2724]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 12 22:53:35.292692 sudo[2724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 12 22:53:35.345197 kubelet[2690]: E1112 22:53:35.345168 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:35.347511 kubelet[2690]: E1112 22:53:35.347482 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:35.348154 kubelet[2690]: E1112 22:53:35.347750 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:35.756603 sudo[2724]: pam_unix(sudo:session): session closed for user root Nov 12 22:53:35.913670 kubelet[2690]: I1112 22:53:35.913563 2690 apiserver.go:52] "Watching apiserver" Nov 12 22:53:35.931389 kubelet[2690]: I1112 22:53:35.931370 2690 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Nov 12 22:53:35.958316 kubelet[2690]: E1112 22:53:35.958243 2690 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 12 22:53:35.958845 kubelet[2690]: E1112 22:53:35.958806 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:35.961208 kubelet[2690]: E1112 22:53:35.961152 2690 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 22:53:35.961658 kubelet[2690]: E1112 22:53:35.961633 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:35.961817 kubelet[2690]: E1112 22:53:35.961791 2690 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 12 22:53:35.962037 kubelet[2690]: E1112 22:53:35.962004 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:36.002273 kubelet[2690]: I1112 22:53:36.002189 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.002162976 podStartE2EDuration="1.002162976s" podCreationTimestamp="2024-11-12 22:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:53:35.99983469 +0000 UTC m=+1.145541208" watchObservedRunningTime="2024-11-12 22:53:36.002162976 +0000 UTC m=+1.147869494" Nov 12 22:53:36.018550 kubelet[2690]: I1112 22:53:36.018145 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.018127285 podStartE2EDuration="1.018127285s" podCreationTimestamp="2024-11-12 22:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:53:36.010781329 +0000 UTC m=+1.156487847" watchObservedRunningTime="2024-11-12 22:53:36.018127285 +0000 UTC m=+1.163833803" Nov 12 22:53:36.881632 sudo[1691]: pam_unix(sudo:session): session closed for user root Nov 12 22:53:36.883846 
sshd[1690]: Connection closed by 10.0.0.1 port 39512 Nov 12 22:53:36.884303 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Nov 12 22:53:36.889653 systemd[1]: sshd@8-10.0.0.140:22-10.0.0.1:39512.service: Deactivated successfully. Nov 12 22:53:36.891736 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 22:53:36.891932 systemd[1]: session-9.scope: Consumed 4.861s CPU time, 187.1M memory peak, 0B memory swap peak. Nov 12 22:53:36.892444 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit. Nov 12 22:53:36.893421 systemd-logind[1485]: Removed session 9. Nov 12 22:53:36.954612 kubelet[2690]: E1112 22:53:36.954572 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:36.957548 kubelet[2690]: E1112 22:53:36.955738 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:36.957548 kubelet[2690]: E1112 22:53:36.955821 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:37.957484 kubelet[2690]: E1112 22:53:37.957377 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:38.516182 update_engine[1487]: I20241112 22:53:38.516073 1487 update_attempter.cc:509] Updating boot flags... Nov 12 22:53:38.790578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2772) Nov 12 22:53:38.837596 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2774) Nov 12 22:53:38.867627 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2774) Nov 12 22:53:41.770273 kubelet[2690]: E1112 22:53:41.770235 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:41.781976 kubelet[2690]: I1112 22:53:41.781907 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.781862829 podStartE2EDuration="6.781862829s" podCreationTimestamp="2024-11-12 22:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:53:36.019074105 +0000 UTC m=+1.164780623" watchObservedRunningTime="2024-11-12 22:53:41.781862829 +0000 UTC m=+6.927569347" Nov 12 22:53:41.962396 kubelet[2690]: E1112 22:53:41.962150 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:45.310593 kubelet[2690]: E1112 22:53:45.310555 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:45.968052 kubelet[2690]: E1112 22:53:45.968010 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:47.379906 kubelet[2690]: I1112 22:53:47.379863 2690 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 22:53:47.380377 containerd[1497]: time="2024-11-12T22:53:47.380261857Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 22:53:47.380711 kubelet[2690]: I1112 22:53:47.380489 2690 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 22:53:47.687564 kubelet[2690]: E1112 22:53:47.687501 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:47.917017 kubelet[2690]: I1112 22:53:47.916957 2690 topology_manager.go:215] "Topology Admit Handler" podUID="70e6b112-6efa-43c6-a0f2-2455d5fc7318" podNamespace="kube-system" podName="kube-proxy-vx8wp" Nov 12 22:53:47.928601 kubelet[2690]: I1112 22:53:47.925427 2690 topology_manager.go:215] "Topology Admit Handler" podUID="1e24a339-b48b-4760-89ff-09531df1b4fb" podNamespace="kube-system" podName="cilium-zxzgg" Nov 12 22:53:47.925795 systemd[1]: Created slice kubepods-besteffort-pod70e6b112_6efa_43c6_a0f2_2455d5fc7318.slice - libcontainer container kubepods-besteffort-pod70e6b112_6efa_43c6_a0f2_2455d5fc7318.slice. Nov 12 22:53:47.956817 systemd[1]: Created slice kubepods-burstable-pod1e24a339_b48b_4760_89ff_09531df1b4fb.slice - libcontainer container kubepods-burstable-pod1e24a339_b48b_4760_89ff_09531df1b4fb.slice. Nov 12 22:53:48.007706 kubelet[2690]: I1112 22:53:48.007664 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-etc-cni-netd\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.007706 kubelet[2690]: I1112 22:53:48.007699 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-cilium-run\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.007706 kubelet[2690]: I1112 22:53:48.007719 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-hostproc\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.007922 kubelet[2690]: I1112 22:53:48.007736 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-host-proc-sys-net\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.007922 kubelet[2690]: I1112 22:53:48.007757 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e24a339-b48b-4760-89ff-09531df1b4fb-hubble-tls\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.007922 kubelet[2690]: I1112 22:53:48.007773 2690 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-cni-path\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.007922 kubelet[2690]: I1112 22:53:48.007790 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70e6b112-6efa-43c6-a0f2-2455d5fc7318-kube-proxy\") pod \"kube-proxy-vx8wp\" (UID: \"70e6b112-6efa-43c6-a0f2-2455d5fc7318\") " pod="kube-system/kube-proxy-vx8wp" Nov 12 22:53:48.007922 kubelet[2690]: I1112 22:53:48.007844 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-xtables-lock\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.007922 kubelet[2690]: I1112 22:53:48.007887 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70e6b112-6efa-43c6-a0f2-2455d5fc7318-lib-modules\") pod \"kube-proxy-vx8wp\" (UID: \"70e6b112-6efa-43c6-a0f2-2455d5fc7318\") " pod="kube-system/kube-proxy-vx8wp" Nov 12 22:53:48.008063 kubelet[2690]: I1112 22:53:48.007913 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-cilium-cgroup\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.008063 kubelet[2690]: I1112 22:53:48.007939 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxrjl\" (UniqueName: \"kubernetes.io/projected/1e24a339-b48b-4760-89ff-09531df1b4fb-kube-api-access-bxrjl\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.008063 kubelet[2690]: I1112 22:53:48.007965 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70e6b112-6efa-43c6-a0f2-2455d5fc7318-xtables-lock\") pod \"kube-proxy-vx8wp\" (UID: \"70e6b112-6efa-43c6-a0f2-2455d5fc7318\") " pod="kube-system/kube-proxy-vx8wp" Nov 12 22:53:48.008063 kubelet[2690]: I1112 22:53:48.007986 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-bpf-maps\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.008063 kubelet[2690]: I1112 22:53:48.008009 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstz4\" (UniqueName: \"kubernetes.io/projected/70e6b112-6efa-43c6-a0f2-2455d5fc7318-kube-api-access-lstz4\") pod \"kube-proxy-vx8wp\" (UID: \"70e6b112-6efa-43c6-a0f2-2455d5fc7318\") " pod="kube-system/kube-proxy-vx8wp" Nov 12 22:53:48.008255 kubelet[2690]: I1112 22:53:48.008041 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-lib-modules\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.008255 kubelet[2690]: I1112 22:53:48.008063 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e24a339-b48b-4760-89ff-09531df1b4fb-clustermesh-secrets\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.008255 kubelet[2690]: I1112 22:53:48.008085 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e24a339-b48b-4760-89ff-09531df1b4fb-cilium-config-path\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.008255 kubelet[2690]: I1112 22:53:48.008106 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-host-proc-sys-kernel\") pod \"cilium-zxzgg\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " pod="kube-system/cilium-zxzgg" Nov 12 22:53:48.114381 kubelet[2690]: E1112 22:53:48.114038 2690 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 12 22:53:48.114381 kubelet[2690]: E1112 22:53:48.114073 2690 projected.go:200] Error preparing data for projected volume kube-api-access-lstz4 for pod kube-system/kube-proxy-vx8wp: configmap "kube-root-ca.crt" not found Nov 12 22:53:48.114381 kubelet[2690]: E1112 22:53:48.114151 2690 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70e6b112-6efa-43c6-a0f2-2455d5fc7318-kube-api-access-lstz4 podName:70e6b112-6efa-43c6-a0f2-2455d5fc7318 nodeName:}" failed. No retries permitted until 2024-11-12 22:53:48.61411532 +0000 UTC m=+13.759821928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lstz4" (UniqueName: "kubernetes.io/projected/70e6b112-6efa-43c6-a0f2-2455d5fc7318-kube-api-access-lstz4") pod "kube-proxy-vx8wp" (UID: "70e6b112-6efa-43c6-a0f2-2455d5fc7318") : configmap "kube-root-ca.crt" not found Nov 12 22:53:48.116170 kubelet[2690]: E1112 22:53:48.116132 2690 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 12 22:53:48.116170 kubelet[2690]: E1112 22:53:48.116172 2690 projected.go:200] Error preparing data for projected volume kube-api-access-bxrjl for pod kube-system/cilium-zxzgg: configmap "kube-root-ca.crt" not found Nov 12 22:53:48.116259 kubelet[2690]: E1112 22:53:48.116243 2690 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e24a339-b48b-4760-89ff-09531df1b4fb-kube-api-access-bxrjl podName:1e24a339-b48b-4760-89ff-09531df1b4fb nodeName:}" failed. No retries permitted until 2024-11-12 22:53:48.616218128 +0000 UTC m=+13.761924646 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bxrjl" (UniqueName: "kubernetes.io/projected/1e24a339-b48b-4760-89ff-09531df1b4fb-kube-api-access-bxrjl") pod "cilium-zxzgg" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb") : configmap "kube-root-ca.crt" not found Nov 12 22:53:48.482678 kubelet[2690]: I1112 22:53:48.482633 2690 topology_manager.go:215] "Topology Admit Handler" podUID="4feabc2f-5985-42c4-b2f6-2015262cd112" podNamespace="kube-system" podName="cilium-operator-599987898-j6ptf" Nov 12 22:53:48.488118 systemd[1]: Created slice kubepods-besteffort-pod4feabc2f_5985_42c4_b2f6_2015262cd112.slice - libcontainer container kubepods-besteffort-pod4feabc2f_5985_42c4_b2f6_2015262cd112.slice. Nov 12 22:53:48.613610 kubelet[2690]: I1112 22:53:48.613555 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4feabc2f-5985-42c4-b2f6-2015262cd112-cilium-config-path\") pod \"cilium-operator-599987898-j6ptf\" (UID: \"4feabc2f-5985-42c4-b2f6-2015262cd112\") " pod="kube-system/cilium-operator-599987898-j6ptf" Nov 12 22:53:48.613610 kubelet[2690]: I1112 22:53:48.613592 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlh25\" (UniqueName: \"kubernetes.io/projected/4feabc2f-5985-42c4-b2f6-2015262cd112-kube-api-access-xlh25\") pod \"cilium-operator-599987898-j6ptf\" (UID: \"4feabc2f-5985-42c4-b2f6-2015262cd112\") " pod="kube-system/cilium-operator-599987898-j6ptf" Nov 12 22:53:48.791103 kubelet[2690]: E1112 22:53:48.791002 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:48.791633 containerd[1497]: time="2024-11-12T22:53:48.791598908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-j6ptf,Uid:4feabc2f-5985-42c4-b2f6-2015262cd112,Namespace:kube-system,Attempt:0,}" Nov 12 22:53:48.817183 containerd[1497]: time="2024-11-12T22:53:48.817086834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:48.817183 containerd[1497]: time="2024-11-12T22:53:48.817146626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:48.817183 containerd[1497]: time="2024-11-12T22:53:48.817160663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:48.817392 containerd[1497]: time="2024-11-12T22:53:48.817249360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:48.835683 systemd[1]: Started cri-containerd-633d703a22090a309bf85ae314bd29aa83157277897e591b4e88106e387d66f5.scope - libcontainer container 633d703a22090a309bf85ae314bd29aa83157277897e591b4e88106e387d66f5. 
Nov 12 22:53:48.850678 kubelet[2690]: E1112 22:53:48.850654 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:48.852190 containerd[1497]: time="2024-11-12T22:53:48.851747109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vx8wp,Uid:70e6b112-6efa-43c6-a0f2-2455d5fc7318,Namespace:kube-system,Attempt:0,}" Nov 12 22:53:48.860821 kubelet[2690]: E1112 22:53:48.860787 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:48.863569 containerd[1497]: time="2024-11-12T22:53:48.861320946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zxzgg,Uid:1e24a339-b48b-4760-89ff-09531df1b4fb,Namespace:kube-system,Attempt:0,}" Nov 12 22:53:48.881106 containerd[1497]: time="2024-11-12T22:53:48.880948824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-j6ptf,Uid:4feabc2f-5985-42c4-b2f6-2015262cd112,Namespace:kube-system,Attempt:0,} returns sandbox id \"633d703a22090a309bf85ae314bd29aa83157277897e591b4e88106e387d66f5\"" Nov 12 22:53:48.882274 kubelet[2690]: E1112 22:53:48.882232 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:48.885603 containerd[1497]: time="2024-11-12T22:53:48.885231736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:48.885603 containerd[1497]: time="2024-11-12T22:53:48.885304333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:48.885603 containerd[1497]: time="2024-11-12T22:53:48.885318359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:48.885603 containerd[1497]: time="2024-11-12T22:53:48.885391598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:48.889242 containerd[1497]: time="2024-11-12T22:53:48.888461491Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 12 22:53:48.901653 containerd[1497]: time="2024-11-12T22:53:48.901148181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:53:48.901653 containerd[1497]: time="2024-11-12T22:53:48.901236047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:53:48.901653 containerd[1497]: time="2024-11-12T22:53:48.901255964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:48.901653 containerd[1497]: time="2024-11-12T22:53:48.901388655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:53:48.912751 systemd[1]: Started cri-containerd-8b26cbc68e03a5b103d87a28e6e6d632913acf6131acaa2b6a08d07c8c9162d8.scope - libcontainer container 8b26cbc68e03a5b103d87a28e6e6d632913acf6131acaa2b6a08d07c8c9162d8. Nov 12 22:53:48.920213 systemd[1]: Started cri-containerd-6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3.scope - libcontainer container 6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3. Nov 12 22:53:48.947736 containerd[1497]: time="2024-11-12T22:53:48.947682323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vx8wp,Uid:70e6b112-6efa-43c6-a0f2-2455d5fc7318,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b26cbc68e03a5b103d87a28e6e6d632913acf6131acaa2b6a08d07c8c9162d8\"" Nov 12 22:53:48.948971 kubelet[2690]: E1112 22:53:48.948923 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:48.951365 containerd[1497]: time="2024-11-12T22:53:48.951325047Z" level=info msg="CreateContainer within sandbox \"8b26cbc68e03a5b103d87a28e6e6d632913acf6131acaa2b6a08d07c8c9162d8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 22:53:48.954361 containerd[1497]: time="2024-11-12T22:53:48.954149327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zxzgg,Uid:1e24a339-b48b-4760-89ff-09531df1b4fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\"" Nov 12 22:53:48.955783 kubelet[2690]: E1112 22:53:48.955760 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:48.970160 containerd[1497]: time="2024-11-12T22:53:48.970104765Z" level=info msg="CreateContainer within sandbox \"8b26cbc68e03a5b103d87a28e6e6d632913acf6131acaa2b6a08d07c8c9162d8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"91ed7b4e7c05fa76c8221c83e9e7ba7ffdd75efe751ddc30af6d385b661f131c\"" Nov 12 22:53:48.971097 containerd[1497]: time="2024-11-12T22:53:48.970603415Z" level=info msg="StartContainer for \"91ed7b4e7c05fa76c8221c83e9e7ba7ffdd75efe751ddc30af6d385b661f131c\"" Nov 12 22:53:48.998673 systemd[1]: Started cri-containerd-91ed7b4e7c05fa76c8221c83e9e7ba7ffdd75efe751ddc30af6d385b661f131c.scope - libcontainer container 91ed7b4e7c05fa76c8221c83e9e7ba7ffdd75efe751ddc30af6d385b661f131c. 
Nov 12 22:53:49.040557 containerd[1497]: time="2024-11-12T22:53:49.038332708Z" level=info msg="StartContainer for \"91ed7b4e7c05fa76c8221c83e9e7ba7ffdd75efe751ddc30af6d385b661f131c\" returns successfully" Nov 12 22:53:49.977636 kubelet[2690]: E1112 22:53:49.977598 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:49.986263 kubelet[2690]: I1112 22:53:49.986174 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vx8wp" podStartSLOduration=2.986147952 podStartE2EDuration="2.986147952s" podCreationTimestamp="2024-11-12 22:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:53:49.986090944 +0000 UTC m=+15.131797462" watchObservedRunningTime="2024-11-12 22:53:49.986147952 +0000 UTC m=+15.131854470" Nov 12 22:53:50.582171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95939592.mount: Deactivated successfully. Nov 12 22:53:50.859456 containerd[1497]: time="2024-11-12T22:53:50.859336310Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:50.860118 containerd[1497]: time="2024-11-12T22:53:50.860082719Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907221" Nov 12 22:53:50.861314 containerd[1497]: time="2024-11-12T22:53:50.861285516Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:53:50.862687 containerd[1497]: time="2024-11-12T22:53:50.862660620Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.974092097s" Nov 12 22:53:50.862740 containerd[1497]: time="2024-11-12T22:53:50.862690616Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 12 22:53:50.863663 containerd[1497]: time="2024-11-12T22:53:50.863635468Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 12 22:53:50.864929 containerd[1497]: time="2024-11-12T22:53:50.864906234Z" level=info msg="CreateContainer within sandbox \"633d703a22090a309bf85ae314bd29aa83157277897e591b4e88106e387d66f5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 12 22:53:50.878190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3464735721.mount: Deactivated successfully. 
Nov 12 22:53:50.878816 containerd[1497]: time="2024-11-12T22:53:50.878783773Z" level=info msg="CreateContainer within sandbox \"633d703a22090a309bf85ae314bd29aa83157277897e591b4e88106e387d66f5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\"" Nov 12 22:53:50.879340 containerd[1497]: time="2024-11-12T22:53:50.879303603Z" level=info msg="StartContainer for \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\"" Nov 12 22:53:50.908659 systemd[1]: Started cri-containerd-1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb.scope - libcontainer container 1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb. Nov 12 22:53:50.934689 containerd[1497]: time="2024-11-12T22:53:50.934648678Z" level=info msg="StartContainer for \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\" returns successfully" Nov 12 22:53:50.980472 kubelet[2690]: E1112 22:53:50.980431 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:50.981022 kubelet[2690]: E1112 22:53:50.981000 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:51.981814 kubelet[2690]: E1112 22:53:51.981777 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:53:54.975110 kubelet[2690]: I1112 22:53:54.975009 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-j6ptf" podStartSLOduration=4.998159766 podStartE2EDuration="6.974993926s" podCreationTimestamp="2024-11-12 22:53:48 +0000 UTC" firstStartedPulling="2024-11-12 22:53:48.886605118 +0000 UTC m=+14.032311636" lastFinishedPulling="2024-11-12 22:53:50.863439278 +0000 UTC m=+16.009145796" observedRunningTime="2024-11-12 22:53:50.992815413 +0000 UTC m=+16.138521931" watchObservedRunningTime="2024-11-12 22:53:54.974993926 +0000 UTC m=+20.120700444" Nov 12 22:54:00.602782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004234793.mount: Deactivated successfully. Nov 12 22:54:01.345879 systemd[1]: Started sshd@9-10.0.0.140:22-10.0.0.1:51678.service - OpenSSH per-connection server daemon (10.0.0.1:51678). Nov 12 22:54:01.550127 sshd[3125]: Accepted publickey for core from 10.0.0.1 port 51678 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:01.552561 sshd-session[3125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:01.558229 systemd-logind[1485]: New session 10 of user core. Nov 12 22:54:01.574655 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 22:54:01.726271 sshd[3135]: Connection closed by 10.0.0.1 port 51678 Nov 12 22:54:01.726930 sshd-session[3125]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:01.731732 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Nov 12 22:54:01.732177 systemd[1]: sshd@9-10.0.0.140:22-10.0.0.1:51678.service: Deactivated successfully. Nov 12 22:54:01.735933 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 22:54:01.737193 systemd-logind[1485]: Removed session 10. 
Nov 12 22:54:03.481199 containerd[1497]: time="2024-11-12T22:54:03.481137645Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:54:03.481800 containerd[1497]: time="2024-11-12T22:54:03.481758071Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735347" Nov 12 22:54:03.482901 containerd[1497]: time="2024-11-12T22:54:03.482874240Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:54:03.485255 containerd[1497]: time="2024-11-12T22:54:03.485228937Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.621562602s" Nov 12 22:54:03.485302 containerd[1497]: time="2024-11-12T22:54:03.485258603Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 12 22:54:03.491028 containerd[1497]: time="2024-11-12T22:54:03.490992715Z" level=info msg="CreateContainer within sandbox \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 22:54:03.504450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3990331098.mount: Deactivated successfully. Nov 12 22:54:03.504862 containerd[1497]: time="2024-11-12T22:54:03.504762234Z" level=info msg="CreateContainer within sandbox \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59\"" Nov 12 22:54:03.505387 containerd[1497]: time="2024-11-12T22:54:03.505349960Z" level=info msg="StartContainer for \"6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59\"" Nov 12 22:54:03.545666 systemd[1]: Started cri-containerd-6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59.scope - libcontainer container 6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59. Nov 12 22:54:03.573706 containerd[1497]: time="2024-11-12T22:54:03.573653997Z" level=info msg="StartContainer for \"6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59\" returns successfully" Nov 12 22:54:03.587158 systemd[1]: cri-containerd-6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59.scope: Deactivated successfully. 
Nov 12 22:54:04.103100 containerd[1497]: time="2024-11-12T22:54:04.103003363Z" level=info msg="shim disconnected" id=6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59 namespace=k8s.io Nov 12 22:54:04.103100 containerd[1497]: time="2024-11-12T22:54:04.103100766Z" level=warning msg="cleaning up after shim disconnected" id=6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59 namespace=k8s.io Nov 12 22:54:04.103365 containerd[1497]: time="2024-11-12T22:54:04.103112849Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:54:04.116074 kubelet[2690]: E1112 22:54:04.116041 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:04.501043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59-rootfs.mount: Deactivated successfully. Nov 12 22:54:05.118381 kubelet[2690]: E1112 22:54:05.118339 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:05.120134 containerd[1497]: time="2024-11-12T22:54:05.120100823Z" level=info msg="CreateContainer within sandbox \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 22:54:05.136073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903368821.mount: Deactivated successfully. Nov 12 22:54:05.139129 containerd[1497]: time="2024-11-12T22:54:05.139088053Z" level=info msg="CreateContainer within sandbox \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d\"" Nov 12 22:54:05.139640 containerd[1497]: time="2024-11-12T22:54:05.139618550Z" level=info msg="StartContainer for \"67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d\"" Nov 12 22:54:05.172707 systemd[1]: Started cri-containerd-67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d.scope - libcontainer container 67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d. Nov 12 22:54:05.201012 containerd[1497]: time="2024-11-12T22:54:05.200955978Z" level=info msg="StartContainer for \"67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d\" returns successfully" Nov 12 22:54:05.212203 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 22:54:05.212499 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:54:05.212581 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:54:05.221891 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:54:05.222179 systemd[1]: cri-containerd-67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d.scope: Deactivated successfully. Nov 12 22:54:05.243221 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 12 22:54:05.245068 containerd[1497]: time="2024-11-12T22:54:05.245006472Z" level=info msg="shim disconnected" id=67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d namespace=k8s.io Nov 12 22:54:05.245068 containerd[1497]: time="2024-11-12T22:54:05.245066344Z" level=warning msg="cleaning up after shim disconnected" id=67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d namespace=k8s.io Nov 12 22:54:05.245212 containerd[1497]: time="2024-11-12T22:54:05.245076112Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:54:05.501384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d-rootfs.mount: Deactivated successfully. Nov 12 22:54:06.122276 kubelet[2690]: E1112 22:54:06.122227 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:06.124086 containerd[1497]: time="2024-11-12T22:54:06.124027898Z" level=info msg="CreateContainer within sandbox \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 22:54:06.144130 containerd[1497]: time="2024-11-12T22:54:06.144082891Z" level=info msg="CreateContainer within sandbox \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833\"" Nov 12 22:54:06.144718 containerd[1497]: time="2024-11-12T22:54:06.144677489Z" level=info msg="StartContainer for \"04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833\"" Nov 12 22:54:06.177673 systemd[1]: Started cri-containerd-04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833.scope - libcontainer container 04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833. Nov 12 22:54:06.209004 containerd[1497]: time="2024-11-12T22:54:06.208943276Z" level=info msg="StartContainer for \"04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833\" returns successfully" Nov 12 22:54:06.211134 systemd[1]: cri-containerd-04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833.scope: Deactivated successfully. Nov 12 22:54:06.238942 containerd[1497]: time="2024-11-12T22:54:06.238855386Z" level=info msg="shim disconnected" id=04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833 namespace=k8s.io Nov 12 22:54:06.238942 containerd[1497]: time="2024-11-12T22:54:06.238922813Z" level=warning msg="cleaning up after shim disconnected" id=04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833 namespace=k8s.io Nov 12 22:54:06.238942 containerd[1497]: time="2024-11-12T22:54:06.238936279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:54:06.501583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833-rootfs.mount: Deactivated successfully. Nov 12 22:54:06.739371 systemd[1]: Started sshd@10-10.0.0.140:22-10.0.0.1:51680.service - OpenSSH per-connection server daemon (10.0.0.1:51680). 
Nov 12 22:54:06.783740 sshd[3350]: Accepted publickey for core from 10.0.0.1 port 51680 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:06.785345 sshd-session[3350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:06.789309 systemd-logind[1485]: New session 11 of user core. Nov 12 22:54:06.800680 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 22:54:06.911040 sshd[3352]: Connection closed by 10.0.0.1 port 51680 Nov 12 22:54:06.911390 sshd-session[3350]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:06.915064 systemd[1]: sshd@10-10.0.0.140:22-10.0.0.1:51680.service: Deactivated successfully. Nov 12 22:54:06.916981 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 22:54:06.917635 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Nov 12 22:54:06.918599 systemd-logind[1485]: Removed session 11. Nov 12 22:54:07.125302 kubelet[2690]: E1112 22:54:07.125182 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:07.127279 containerd[1497]: time="2024-11-12T22:54:07.126681752Z" level=info msg="CreateContainer within sandbox \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 22:54:07.222914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704231390.mount: Deactivated successfully. Nov 12 22:54:07.224392 containerd[1497]: time="2024-11-12T22:54:07.224354708Z" level=info msg="CreateContainer within sandbox \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3\"" Nov 12 22:54:07.224928 containerd[1497]: time="2024-11-12T22:54:07.224888681Z" level=info msg="StartContainer for \"d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3\"" Nov 12 22:54:07.256708 systemd[1]: Started cri-containerd-d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3.scope - libcontainer container d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3. Nov 12 22:54:07.280042 systemd[1]: cri-containerd-d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3.scope: Deactivated successfully. Nov 12 22:54:07.282633 containerd[1497]: time="2024-11-12T22:54:07.282594408Z" level=info msg="StartContainer for \"d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3\" returns successfully" Nov 12 22:54:07.419556 containerd[1497]: time="2024-11-12T22:54:07.419457944Z" level=info msg="shim disconnected" id=d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3 namespace=k8s.io Nov 12 22:54:07.419556 containerd[1497]: time="2024-11-12T22:54:07.419522585Z" level=warning msg="cleaning up after shim disconnected" id=d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3 namespace=k8s.io Nov 12 22:54:07.419556 containerd[1497]: time="2024-11-12T22:54:07.419545618Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:54:07.501150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3-rootfs.mount: Deactivated successfully. 
Nov 12 22:54:08.129519 kubelet[2690]: E1112 22:54:08.129469 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:08.133341 containerd[1497]: time="2024-11-12T22:54:08.133290255Z" level=info msg="CreateContainer within sandbox \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 22:54:08.154808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount39130243.mount: Deactivated successfully. Nov 12 22:54:08.155858 containerd[1497]: time="2024-11-12T22:54:08.155809070Z" level=info msg="CreateContainer within sandbox \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\"" Nov 12 22:54:08.156487 containerd[1497]: time="2024-11-12T22:54:08.156438272Z" level=info msg="StartContainer for \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\"" Nov 12 22:54:08.191685 systemd[1]: Started cri-containerd-d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417.scope - libcontainer container d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417. Nov 12 22:54:08.232582 containerd[1497]: time="2024-11-12T22:54:08.232521258Z" level=info msg="StartContainer for \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\" returns successfully" Nov 12 22:54:08.370469 kubelet[2690]: I1112 22:54:08.370392 2690 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 22:54:08.431647 kubelet[2690]: I1112 22:54:08.431592 2690 topology_manager.go:215] "Topology Admit Handler" podUID="8fae55a7-ea76-4f16-bb58-fe5b27658a81" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gdr4d" Nov 12 22:54:08.432359 kubelet[2690]: I1112 22:54:08.431998 2690 topology_manager.go:215] "Topology Admit Handler" podUID="579ebb22-a717-4c18-b6e7-b7d650ce8402" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9ncqk" Nov 12 22:54:08.445704 systemd[1]: Created slice kubepods-burstable-pod8fae55a7_ea76_4f16_bb58_fe5b27658a81.slice - libcontainer container kubepods-burstable-pod8fae55a7_ea76_4f16_bb58_fe5b27658a81.slice. Nov 12 22:54:08.456951 systemd[1]: Created slice kubepods-burstable-pod579ebb22_a717_4c18_b6e7_b7d650ce8402.slice - libcontainer container kubepods-burstable-pod579ebb22_a717_4c18_b6e7_b7d650ce8402.slice. 
Nov 12 22:54:08.546806 kubelet[2690]: I1112 22:54:08.546755 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fae55a7-ea76-4f16-bb58-fe5b27658a81-config-volume\") pod \"coredns-7db6d8ff4d-gdr4d\" (UID: \"8fae55a7-ea76-4f16-bb58-fe5b27658a81\") " pod="kube-system/coredns-7db6d8ff4d-gdr4d" Nov 12 22:54:08.547023 kubelet[2690]: I1112 22:54:08.546814 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/579ebb22-a717-4c18-b6e7-b7d650ce8402-config-volume\") pod \"coredns-7db6d8ff4d-9ncqk\" (UID: \"579ebb22-a717-4c18-b6e7-b7d650ce8402\") " pod="kube-system/coredns-7db6d8ff4d-9ncqk" Nov 12 22:54:08.547023 kubelet[2690]: I1112 22:54:08.546908 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc94v\" (UniqueName: \"kubernetes.io/projected/579ebb22-a717-4c18-b6e7-b7d650ce8402-kube-api-access-xc94v\") pod \"coredns-7db6d8ff4d-9ncqk\" (UID: \"579ebb22-a717-4c18-b6e7-b7d650ce8402\") " pod="kube-system/coredns-7db6d8ff4d-9ncqk" Nov 12 22:54:08.547023 kubelet[2690]: I1112 22:54:08.546961 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd8fd\" (UniqueName: \"kubernetes.io/projected/8fae55a7-ea76-4f16-bb58-fe5b27658a81-kube-api-access-gd8fd\") pod \"coredns-7db6d8ff4d-gdr4d\" (UID: \"8fae55a7-ea76-4f16-bb58-fe5b27658a81\") " pod="kube-system/coredns-7db6d8ff4d-gdr4d" Nov 12 22:54:08.751173 kubelet[2690]: E1112 22:54:08.751000 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:08.752811 containerd[1497]: time="2024-11-12T22:54:08.752070327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gdr4d,Uid:8fae55a7-ea76-4f16-bb58-fe5b27658a81,Namespace:kube-system,Attempt:0,}" Nov 12 22:54:08.761289 kubelet[2690]: E1112 22:54:08.761226 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:08.763088 containerd[1497]: time="2024-11-12T22:54:08.762768630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9ncqk,Uid:579ebb22-a717-4c18-b6e7-b7d650ce8402,Namespace:kube-system,Attempt:0,}" Nov 12 22:54:09.133852 kubelet[2690]: E1112 22:54:09.133583 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:10.135297 kubelet[2690]: E1112 22:54:10.135254 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:10.595596 systemd-networkd[1430]: cilium_host: Link UP Nov 12 22:54:10.595822 systemd-networkd[1430]: cilium_net: Link UP Nov 12 22:54:10.595826 systemd-networkd[1430]: cilium_net: Gained carrier Nov 12 22:54:10.596039 systemd-networkd[1430]: cilium_host: Gained carrier Nov 12 22:54:10.597986 systemd-networkd[1430]: cilium_host: Gained IPv6LL Nov 12 22:54:10.704384 systemd-networkd[1430]: cilium_vxlan: Link UP Nov 12 22:54:10.704394 systemd-networkd[1430]: cilium_vxlan: Gained carrier Nov 
12 22:54:10.912558 kernel: NET: Registered PF_ALG protocol family Nov 12 22:54:11.137559 kubelet[2690]: E1112 22:54:11.137492 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:11.355486 systemd-networkd[1430]: cilium_net: Gained IPv6LL Nov 12 22:54:11.556510 systemd-networkd[1430]: lxc_health: Link UP Nov 12 22:54:11.568028 systemd-networkd[1430]: lxc_health: Gained carrier Nov 12 22:54:11.840950 systemd-networkd[1430]: lxca530ab3cdf21: Link UP Nov 12 22:54:11.848568 kernel: eth0: renamed from tmpb13f6 Nov 12 22:54:11.855714 systemd-networkd[1430]: lxcbcf2fcab3bf5: Link UP Nov 12 22:54:11.866950 kernel: eth0: renamed from tmp18387 Nov 12 22:54:11.872140 systemd-networkd[1430]: lxca530ab3cdf21: Gained carrier Nov 12 22:54:11.877006 systemd-networkd[1430]: lxcbcf2fcab3bf5: Gained carrier Nov 12 22:54:11.929979 systemd[1]: Started sshd@11-10.0.0.140:22-10.0.0.1:49024.service - OpenSSH per-connection server daemon (10.0.0.1:49024). Nov 12 22:54:11.991863 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 49024 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:11.993371 sshd-session[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:11.994813 systemd-networkd[1430]: cilium_vxlan: Gained IPv6LL Nov 12 22:54:11.998028 systemd-logind[1485]: New session 12 of user core. Nov 12 22:54:12.002652 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 22:54:12.134108 sshd[3929]: Connection closed by 10.0.0.1 port 49024 Nov 12 22:54:12.134411 sshd-session[3927]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:12.138993 systemd[1]: sshd@11-10.0.0.140:22-10.0.0.1:49024.service: Deactivated successfully. Nov 12 22:54:12.141354 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 22:54:12.143374 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. Nov 12 22:54:12.144951 systemd-logind[1485]: Removed session 12. 
Nov 12 22:54:12.864404 kubelet[2690]: E1112 22:54:12.864362 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:12.880120 kubelet[2690]: I1112 22:54:12.880039 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zxzgg" podStartSLOduration=11.350321889 podStartE2EDuration="25.88002075s" podCreationTimestamp="2024-11-12 22:53:47 +0000 UTC" firstStartedPulling="2024-11-12 22:53:48.956282772 +0000 UTC m=+14.101989290" lastFinishedPulling="2024-11-12 22:54:03.485981633 +0000 UTC m=+28.631688151" observedRunningTime="2024-11-12 22:54:09.282594996 +0000 UTC m=+34.428301514" watchObservedRunningTime="2024-11-12 22:54:12.88002075 +0000 UTC m=+38.025727268" Nov 12 22:54:13.142277 kubelet[2690]: E1112 22:54:13.142231 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:13.146678 systemd-networkd[1430]: lxc_health: Gained IPv6LL Nov 12 22:54:13.402691 systemd-networkd[1430]: lxca530ab3cdf21: Gained IPv6LL Nov 12 22:54:13.530700 systemd-networkd[1430]: lxcbcf2fcab3bf5: Gained IPv6LL Nov 12 22:54:14.144259 kubelet[2690]: E1112 22:54:14.144205 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:15.491294 containerd[1497]: time="2024-11-12T22:54:15.491159659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:54:15.491294 containerd[1497]: time="2024-11-12T22:54:15.491264746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:54:15.491822 containerd[1497]: time="2024-11-12T22:54:15.491279804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:54:15.491822 containerd[1497]: time="2024-11-12T22:54:15.491421039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:54:15.499717 containerd[1497]: time="2024-11-12T22:54:15.499474845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:54:15.499717 containerd[1497]: time="2024-11-12T22:54:15.499571095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:54:15.499717 containerd[1497]: time="2024-11-12T22:54:15.499601472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:54:15.504547 containerd[1497]: time="2024-11-12T22:54:15.500364796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:54:15.533680 systemd[1]: Started cri-containerd-18387e8d15662d1ae530637750a0bc662e94aa5a5d293f4dddd5b9d2ab0dfdf4.scope - libcontainer container 18387e8d15662d1ae530637750a0bc662e94aa5a5d293f4dddd5b9d2ab0dfdf4. 
Nov 12 22:54:15.535999 systemd[1]: Started cri-containerd-b13f6ce7f3cad171ace4155f6e94849a73bf381b544804b2f7f206a6d9520fc6.scope - libcontainer container b13f6ce7f3cad171ace4155f6e94849a73bf381b544804b2f7f206a6d9520fc6. Nov 12 22:54:15.549259 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:54:15.550628 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:54:15.577961 containerd[1497]: time="2024-11-12T22:54:15.577922749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9ncqk,Uid:579ebb22-a717-4c18-b6e7-b7d650ce8402,Namespace:kube-system,Attempt:0,} returns sandbox id \"18387e8d15662d1ae530637750a0bc662e94aa5a5d293f4dddd5b9d2ab0dfdf4\"" Nov 12 22:54:15.579473 kubelet[2690]: E1112 22:54:15.579448 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:15.582458 containerd[1497]: time="2024-11-12T22:54:15.582414695Z" level=info msg="CreateContainer within sandbox \"18387e8d15662d1ae530637750a0bc662e94aa5a5d293f4dddd5b9d2ab0dfdf4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:54:15.583283 containerd[1497]: time="2024-11-12T22:54:15.583254662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gdr4d,Uid:8fae55a7-ea76-4f16-bb58-fe5b27658a81,Namespace:kube-system,Attempt:0,} returns sandbox id \"b13f6ce7f3cad171ace4155f6e94849a73bf381b544804b2f7f206a6d9520fc6\"" Nov 12 22:54:15.584056 kubelet[2690]: E1112 22:54:15.584035 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:15.586616 containerd[1497]: time="2024-11-12T22:54:15.586586420Z" level=info msg="CreateContainer within sandbox \"b13f6ce7f3cad171ace4155f6e94849a73bf381b544804b2f7f206a6d9520fc6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:54:15.605129 containerd[1497]: time="2024-11-12T22:54:15.605086953Z" level=info msg="CreateContainer within sandbox \"18387e8d15662d1ae530637750a0bc662e94aa5a5d293f4dddd5b9d2ab0dfdf4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8acb77133146a0cc4b45e02e5be56a139ae1082a35534e63bb1c001462506ab\"" Nov 12 22:54:15.605682 containerd[1497]: time="2024-11-12T22:54:15.605641374Z" level=info msg="StartContainer for \"a8acb77133146a0cc4b45e02e5be56a139ae1082a35534e63bb1c001462506ab\"" Nov 12 22:54:15.612066 containerd[1497]: time="2024-11-12T22:54:15.612021506Z" level=info msg="CreateContainer within sandbox \"b13f6ce7f3cad171ace4155f6e94849a73bf381b544804b2f7f206a6d9520fc6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa1e392e4f634a4f85b7d73591b6f4ae76268eb22f261445a1c1e371c615c5c0\"" Nov 12 22:54:15.612878 containerd[1497]: time="2024-11-12T22:54:15.612840575Z" level=info msg="StartContainer for \"aa1e392e4f634a4f85b7d73591b6f4ae76268eb22f261445a1c1e371c615c5c0\"" Nov 12 22:54:15.632789 systemd[1]: Started cri-containerd-a8acb77133146a0cc4b45e02e5be56a139ae1082a35534e63bb1c001462506ab.scope - libcontainer container a8acb77133146a0cc4b45e02e5be56a139ae1082a35534e63bb1c001462506ab. 
Nov 12 22:54:15.636714 systemd[1]: Started cri-containerd-aa1e392e4f634a4f85b7d73591b6f4ae76268eb22f261445a1c1e371c615c5c0.scope - libcontainer container aa1e392e4f634a4f85b7d73591b6f4ae76268eb22f261445a1c1e371c615c5c0. Nov 12 22:54:15.665347 containerd[1497]: time="2024-11-12T22:54:15.664821164Z" level=info msg="StartContainer for \"a8acb77133146a0cc4b45e02e5be56a139ae1082a35534e63bb1c001462506ab\" returns successfully" Nov 12 22:54:15.670796 containerd[1497]: time="2024-11-12T22:54:15.670762651Z" level=info msg="StartContainer for \"aa1e392e4f634a4f85b7d73591b6f4ae76268eb22f261445a1c1e371c615c5c0\" returns successfully" Nov 12 22:54:16.148549 kubelet[2690]: E1112 22:54:16.148496 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:16.150465 kubelet[2690]: E1112 22:54:16.150440 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:16.160891 kubelet[2690]: I1112 22:54:16.160442 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9ncqk" podStartSLOduration=28.160423051 podStartE2EDuration="28.160423051s" podCreationTimestamp="2024-11-12 22:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:54:16.160173562 +0000 UTC m=+41.305880091" watchObservedRunningTime="2024-11-12 22:54:16.160423051 +0000 UTC m=+41.306129570" Nov 12 22:54:16.191928 kubelet[2690]: I1112 22:54:16.191846 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gdr4d" podStartSLOduration=28.191822635 podStartE2EDuration="28.191822635s" podCreationTimestamp="2024-11-12 22:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:54:16.191251964 +0000 UTC m=+41.336958482" watchObservedRunningTime="2024-11-12 22:54:16.191822635 +0000 UTC m=+41.337529153" Nov 12 22:54:16.497498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974378564.mount: Deactivated successfully. Nov 12 22:54:17.149127 systemd[1]: Started sshd@12-10.0.0.140:22-10.0.0.1:43066.service - OpenSSH per-connection server daemon (10.0.0.1:43066). Nov 12 22:54:17.152774 kubelet[2690]: E1112 22:54:17.152750 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:17.153142 kubelet[2690]: E1112 22:54:17.152851 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:17.196215 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 43066 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:17.197946 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:17.202057 systemd-logind[1485]: New session 13 of user core. Nov 12 22:54:17.212693 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 12 22:54:17.339047 sshd[4139]: Connection closed by 10.0.0.1 port 43066 Nov 12 22:54:17.339435 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:17.352702 systemd[1]: sshd@12-10.0.0.140:22-10.0.0.1:43066.service: Deactivated successfully. Nov 12 22:54:17.354783 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 22:54:17.356397 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit. Nov 12 22:54:17.364812 systemd[1]: Started sshd@13-10.0.0.140:22-10.0.0.1:43072.service - OpenSSH per-connection server daemon (10.0.0.1:43072). Nov 12 22:54:17.365819 systemd-logind[1485]: Removed session 13. Nov 12 22:54:17.403311 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 43072 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:17.404800 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:17.408948 systemd-logind[1485]: New session 14 of user core. Nov 12 22:54:17.415654 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 22:54:17.585903 sshd[4158]: Connection closed by 10.0.0.1 port 43072 Nov 12 22:54:17.586288 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:17.594825 systemd[1]: sshd@13-10.0.0.140:22-10.0.0.1:43072.service: Deactivated successfully. Nov 12 22:54:17.597964 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 22:54:17.602174 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit. Nov 12 22:54:17.607082 systemd[1]: Started sshd@14-10.0.0.140:22-10.0.0.1:43084.service - OpenSSH per-connection server daemon (10.0.0.1:43084). Nov 12 22:54:17.609745 systemd-logind[1485]: Removed session 14. Nov 12 22:54:17.642142 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 43084 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:17.643430 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:17.647093 systemd-logind[1485]: New session 15 of user core. Nov 12 22:54:17.656659 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 22:54:17.766766 sshd[4170]: Connection closed by 10.0.0.1 port 43084 Nov 12 22:54:17.767124 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:17.770398 systemd[1]: sshd@14-10.0.0.140:22-10.0.0.1:43084.service: Deactivated successfully. Nov 12 22:54:17.772373 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 22:54:17.773142 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit. Nov 12 22:54:17.773918 systemd-logind[1485]: Removed session 15. Nov 12 22:54:18.155018 kubelet[2690]: E1112 22:54:18.154988 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:18.155493 kubelet[2690]: E1112 22:54:18.155092 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:22.778348 systemd[1]: Started sshd@15-10.0.0.140:22-10.0.0.1:43086.service - OpenSSH per-connection server daemon (10.0.0.1:43086). 
Nov 12 22:54:22.817265 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 43086 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:22.818695 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:22.822200 systemd-logind[1485]: New session 16 of user core. Nov 12 22:54:22.832655 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 22:54:22.945935 sshd[4188]: Connection closed by 10.0.0.1 port 43086 Nov 12 22:54:22.946270 sshd-session[4186]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:22.949774 systemd[1]: sshd@15-10.0.0.140:22-10.0.0.1:43086.service: Deactivated successfully. Nov 12 22:54:22.951764 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 22:54:22.952637 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit. Nov 12 22:54:22.953551 systemd-logind[1485]: Removed session 16. Nov 12 22:54:27.957829 systemd[1]: Started sshd@16-10.0.0.140:22-10.0.0.1:33528.service - OpenSSH per-connection server daemon (10.0.0.1:33528). Nov 12 22:54:27.996446 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 33528 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:27.997810 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:28.001375 systemd-logind[1485]: New session 17 of user core. Nov 12 22:54:28.010644 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 22:54:28.114610 sshd[4203]: Connection closed by 10.0.0.1 port 33528 Nov 12 22:54:28.115002 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:28.127193 systemd[1]: sshd@16-10.0.0.140:22-10.0.0.1:33528.service: Deactivated successfully. Nov 12 22:54:28.129267 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 22:54:28.130807 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit. Nov 12 22:54:28.136784 systemd[1]: Started sshd@17-10.0.0.140:22-10.0.0.1:33530.service - OpenSSH per-connection server daemon (10.0.0.1:33530). Nov 12 22:54:28.137733 systemd-logind[1485]: Removed session 17. Nov 12 22:54:28.172136 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 33530 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:28.173765 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:28.177467 systemd-logind[1485]: New session 18 of user core. Nov 12 22:54:28.184670 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 22:54:28.359732 sshd[4218]: Connection closed by 10.0.0.1 port 33530 Nov 12 22:54:28.360039 sshd-session[4216]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:28.371623 systemd[1]: sshd@17-10.0.0.140:22-10.0.0.1:33530.service: Deactivated successfully. Nov 12 22:54:28.373647 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 22:54:28.375063 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit. Nov 12 22:54:28.380775 systemd[1]: Started sshd@18-10.0.0.140:22-10.0.0.1:33544.service - OpenSSH per-connection server daemon (10.0.0.1:33544). Nov 12 22:54:28.381830 systemd-logind[1485]: Removed session 18. 
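Every "Accepted publickey" line above logs the same key fingerprint, SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8. That format is the unpadded base64 of the SHA-256 digest of the raw key blob from the OpenSSH public key file. A small sketch that reproduces it; the path pubkey.pub is a placeholder, not a file referenced by this log:

```python
# Sketch: compute the "SHA256:..." fingerprint sshd logs for an accepted key.
import base64
import hashlib

def openssh_sha256_fingerprint(pubkey_line: str) -> str:
    blob = base64.b64decode(pubkey_line.split()[1])   # second field is the key blob
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

with open("pubkey.pub") as f:                          # placeholder path
    print(openssh_sha256_fingerprint(f.read().strip()))
```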
Nov 12 22:54:28.419136 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 33544 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:28.420605 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:28.424600 systemd-logind[1485]: New session 19 of user core. Nov 12 22:54:28.440652 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 22:54:30.236817 sshd[4230]: Connection closed by 10.0.0.1 port 33544 Nov 12 22:54:30.237119 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:30.251259 systemd[1]: sshd@18-10.0.0.140:22-10.0.0.1:33544.service: Deactivated successfully. Nov 12 22:54:30.254053 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 22:54:30.257292 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit. Nov 12 22:54:30.276540 systemd[1]: Started sshd@19-10.0.0.140:22-10.0.0.1:33546.service - OpenSSH per-connection server daemon (10.0.0.1:33546). Nov 12 22:54:30.278498 systemd-logind[1485]: Removed session 19. Nov 12 22:54:30.338370 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 33546 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:30.340010 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:30.349705 systemd-logind[1485]: New session 20 of user core. Nov 12 22:54:30.360996 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 22:54:30.867016 sshd[4252]: Connection closed by 10.0.0.1 port 33546 Nov 12 22:54:30.869150 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:30.888500 systemd[1]: sshd@19-10.0.0.140:22-10.0.0.1:33546.service: Deactivated successfully. Nov 12 22:54:30.892298 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 22:54:30.895211 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit. Nov 12 22:54:30.927386 systemd[1]: Started sshd@20-10.0.0.140:22-10.0.0.1:33558.service - OpenSSH per-connection server daemon (10.0.0.1:33558). Nov 12 22:54:30.929504 systemd-logind[1485]: Removed session 20. Nov 12 22:54:30.997561 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 33558 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:30.999359 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:31.017397 systemd-logind[1485]: New session 21 of user core. Nov 12 22:54:31.029851 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 22:54:31.217810 sshd[4264]: Connection closed by 10.0.0.1 port 33558 Nov 12 22:54:31.218688 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:31.239771 systemd[1]: sshd@20-10.0.0.140:22-10.0.0.1:33558.service: Deactivated successfully. Nov 12 22:54:31.242295 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 22:54:31.245365 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit. Nov 12 22:54:31.248090 systemd-logind[1485]: Removed session 21. Nov 12 22:54:36.246783 systemd[1]: Started sshd@21-10.0.0.140:22-10.0.0.1:33564.service - OpenSSH per-connection server daemon (10.0.0.1:33564). 
Nov 12 22:54:36.304677 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 33564 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:36.305998 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:36.317885 systemd-logind[1485]: New session 22 of user core. Nov 12 22:54:36.324596 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 22:54:36.559233 sshd[4281]: Connection closed by 10.0.0.1 port 33564 Nov 12 22:54:36.560259 sshd-session[4279]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:36.565968 systemd[1]: sshd@21-10.0.0.140:22-10.0.0.1:33564.service: Deactivated successfully. Nov 12 22:54:36.569381 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 22:54:36.573070 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit. Nov 12 22:54:36.575451 systemd-logind[1485]: Removed session 22. Nov 12 22:54:41.585094 systemd[1]: Started sshd@22-10.0.0.140:22-10.0.0.1:45934.service - OpenSSH per-connection server daemon (10.0.0.1:45934). Nov 12 22:54:41.633683 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 45934 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:41.636158 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:41.642848 systemd-logind[1485]: New session 23 of user core. Nov 12 22:54:41.651856 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 12 22:54:41.780339 sshd[4298]: Connection closed by 10.0.0.1 port 45934 Nov 12 22:54:41.780941 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:41.785385 systemd[1]: sshd@22-10.0.0.140:22-10.0.0.1:45934.service: Deactivated successfully. Nov 12 22:54:41.788878 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 22:54:41.796998 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit. Nov 12 22:54:41.799112 systemd-logind[1485]: Removed session 23. Nov 12 22:54:46.807123 systemd[1]: Started sshd@23-10.0.0.140:22-10.0.0.1:45950.service - OpenSSH per-connection server daemon (10.0.0.1:45950). Nov 12 22:54:46.869163 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 45950 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:46.872887 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:46.892104 systemd-logind[1485]: New session 24 of user core. Nov 12 22:54:46.902903 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 22:54:47.085989 sshd[4313]: Connection closed by 10.0.0.1 port 45950 Nov 12 22:54:47.085665 sshd-session[4311]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:47.091893 systemd[1]: sshd@23-10.0.0.140:22-10.0.0.1:45950.service: Deactivated successfully. Nov 12 22:54:47.094557 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 22:54:47.098440 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit. Nov 12 22:54:47.100580 systemd-logind[1485]: Removed session 24. Nov 12 22:54:52.097766 systemd[1]: Started sshd@24-10.0.0.140:22-10.0.0.1:38654.service - OpenSSH per-connection server daemon (10.0.0.1:38654). 
Nov 12 22:54:52.139214 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 38654 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:52.140840 sshd-session[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:52.145593 systemd-logind[1485]: New session 25 of user core. Nov 12 22:54:52.154695 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 12 22:54:52.261025 sshd[4329]: Connection closed by 10.0.0.1 port 38654 Nov 12 22:54:52.261366 sshd-session[4327]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:52.265625 systemd[1]: sshd@24-10.0.0.140:22-10.0.0.1:38654.service: Deactivated successfully. Nov 12 22:54:52.267453 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 22:54:52.268071 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit. Nov 12 22:54:52.268903 systemd-logind[1485]: Removed session 25. Nov 12 22:54:55.938802 kubelet[2690]: E1112 22:54:55.938763 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:56.938737 kubelet[2690]: E1112 22:54:56.938705 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:57.272155 systemd[1]: Started sshd@25-10.0.0.140:22-10.0.0.1:43240.service - OpenSSH per-connection server daemon (10.0.0.1:43240). Nov 12 22:54:57.311570 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 43240 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:57.312931 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:57.316700 systemd-logind[1485]: New session 26 of user core. Nov 12 22:54:57.326669 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 12 22:54:57.462252 sshd[4343]: Connection closed by 10.0.0.1 port 43240 Nov 12 22:54:57.462652 sshd-session[4341]: pam_unix(sshd:session): session closed for user core Nov 12 22:54:57.481266 systemd[1]: sshd@25-10.0.0.140:22-10.0.0.1:43240.service: Deactivated successfully. Nov 12 22:54:57.483107 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 22:54:57.484449 systemd-logind[1485]: Session 26 logged out. Waiting for processes to exit. Nov 12 22:54:57.496788 systemd[1]: Started sshd@26-10.0.0.140:22-10.0.0.1:43256.service - OpenSSH per-connection server daemon (10.0.0.1:43256). Nov 12 22:54:57.497686 systemd-logind[1485]: Removed session 26. Nov 12 22:54:57.530468 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 43256 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:54:57.531857 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:54:57.535549 systemd-logind[1485]: New session 27 of user core. Nov 12 22:54:57.547674 systemd[1]: Started session-27.scope - Session 27 of User core. 
Nov 12 22:54:58.939321 kubelet[2690]: E1112 22:54:58.939284 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:54:59.344920 containerd[1497]: time="2024-11-12T22:54:59.344645723Z" level=info msg="StopContainer for \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\" with timeout 30 (s)" Nov 12 22:54:59.357685 containerd[1497]: time="2024-11-12T22:54:59.357653493Z" level=info msg="Stop container \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\" with signal terminated" Nov 12 22:54:59.373509 systemd[1]: cri-containerd-1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb.scope: Deactivated successfully. Nov 12 22:54:59.386481 containerd[1497]: time="2024-11-12T22:54:59.386431642Z" level=info msg="StopContainer for \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\" with timeout 2 (s)" Nov 12 22:54:59.386924 containerd[1497]: time="2024-11-12T22:54:59.386899030Z" level=info msg="Stop container \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\" with signal terminated" Nov 12 22:54:59.397694 containerd[1497]: time="2024-11-12T22:54:59.397639510Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 22:54:59.411255 systemd-networkd[1430]: lxc_health: Link DOWN Nov 12 22:54:59.411263 systemd-networkd[1430]: lxc_health: Lost carrier Nov 12 22:54:59.433546 systemd[1]: cri-containerd-d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417.scope: Deactivated successfully. Nov 12 22:54:59.434107 systemd[1]: cri-containerd-d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417.scope: Consumed 7.203s CPU time. Nov 12 22:54:59.445953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb-rootfs.mount: Deactivated successfully. Nov 12 22:54:59.455181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417-rootfs.mount: Deactivated successfully. 
Nov 12 22:54:59.558708 containerd[1497]: time="2024-11-12T22:54:59.558631426Z" level=info msg="shim disconnected" id=d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417 namespace=k8s.io Nov 12 22:54:59.558708 containerd[1497]: time="2024-11-12T22:54:59.558700207Z" level=warning msg="cleaning up after shim disconnected" id=d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417 namespace=k8s.io Nov 12 22:54:59.558708 containerd[1497]: time="2024-11-12T22:54:59.558710326Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:54:59.558989 containerd[1497]: time="2024-11-12T22:54:59.558673797Z" level=info msg="shim disconnected" id=1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb namespace=k8s.io Nov 12 22:54:59.558989 containerd[1497]: time="2024-11-12T22:54:59.558852467Z" level=warning msg="cleaning up after shim disconnected" id=1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb namespace=k8s.io Nov 12 22:54:59.558989 containerd[1497]: time="2024-11-12T22:54:59.558864560Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:54:59.651553 containerd[1497]: time="2024-11-12T22:54:59.651477439Z" level=info msg="StopContainer for \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\" returns successfully" Nov 12 22:54:59.651734 containerd[1497]: time="2024-11-12T22:54:59.651567661Z" level=info msg="StopContainer for \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\" returns successfully" Nov 12 22:54:59.656053 containerd[1497]: time="2024-11-12T22:54:59.656000235Z" level=info msg="StopPodSandbox for \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\"" Nov 12 22:54:59.657001 containerd[1497]: time="2024-11-12T22:54:59.656954498Z" level=info msg="StopPodSandbox for \"633d703a22090a309bf85ae314bd29aa83157277897e591b4e88106e387d66f5\"" Nov 12 22:54:59.665065 containerd[1497]: time="2024-11-12T22:54:59.656047184Z" level=info msg="Container to stop \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:54:59.665065 containerd[1497]: time="2024-11-12T22:54:59.665039603Z" level=info msg="Container to stop \"6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:54:59.665065 containerd[1497]: time="2024-11-12T22:54:59.665055473Z" level=info msg="Container to stop \"67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:54:59.665065 containerd[1497]: time="2024-11-12T22:54:59.665068006Z" level=info msg="Container to stop \"04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:54:59.665065 containerd[1497]: time="2024-11-12T22:54:59.665079088Z" level=info msg="Container to stop \"d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:54:59.666248 containerd[1497]: time="2024-11-12T22:54:59.657000986Z" level=info msg="Container to stop \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:54:59.667515 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3-shm.mount: Deactivated successfully. Nov 12 22:54:59.670496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-633d703a22090a309bf85ae314bd29aa83157277897e591b4e88106e387d66f5-shm.mount: Deactivated successfully. Nov 12 22:54:59.681559 systemd[1]: cri-containerd-6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3.scope: Deactivated successfully. Nov 12 22:54:59.684663 systemd[1]: cri-containerd-633d703a22090a309bf85ae314bd29aa83157277897e591b4e88106e387d66f5.scope: Deactivated successfully. Nov 12 22:54:59.755017 containerd[1497]: time="2024-11-12T22:54:59.754724769Z" level=info msg="shim disconnected" id=633d703a22090a309bf85ae314bd29aa83157277897e591b4e88106e387d66f5 namespace=k8s.io Nov 12 22:54:59.755017 containerd[1497]: time="2024-11-12T22:54:59.754798198Z" level=warning msg="cleaning up after shim disconnected" id=633d703a22090a309bf85ae314bd29aa83157277897e591b4e88106e387d66f5 namespace=k8s.io Nov 12 22:54:59.755017 containerd[1497]: time="2024-11-12T22:54:59.754810241Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:54:59.755017 containerd[1497]: time="2024-11-12T22:54:59.754876777Z" level=info msg="shim disconnected" id=6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3 namespace=k8s.io Nov 12 22:54:59.755376 containerd[1497]: time="2024-11-12T22:54:59.755142202Z" level=warning msg="cleaning up after shim disconnected" id=6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3 namespace=k8s.io Nov 12 22:54:59.755376 containerd[1497]: time="2024-11-12T22:54:59.755158071Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:54:59.773955 containerd[1497]: time="2024-11-12T22:54:59.773900329Z" level=info msg="TearDown network for sandbox \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\" successfully" Nov 12 22:54:59.773955 containerd[1497]: time="2024-11-12T22:54:59.773941798Z" level=info msg="StopPodSandbox for \"6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3\" returns successfully" Nov 12 22:54:59.775920 containerd[1497]: time="2024-11-12T22:54:59.775879891Z" level=info msg="TearDown network for sandbox \"633d703a22090a309bf85ae314bd29aa83157277897e591b4e88106e387d66f5\" successfully" Nov 12 22:54:59.775920 containerd[1497]: time="2024-11-12T22:54:59.775911151Z" level=info msg="StopPodSandbox for \"633d703a22090a309bf85ae314bd29aa83157277897e591b4e88106e387d66f5\" returns successfully" Nov 12 22:54:59.902092 kubelet[2690]: I1112 22:54:59.901945 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e24a339-b48b-4760-89ff-09531df1b4fb-hubble-tls\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " Nov 12 22:54:59.902092 kubelet[2690]: I1112 22:54:59.901987 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-cilium-run\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " Nov 12 22:54:59.902092 kubelet[2690]: I1112 22:54:59.902030 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-etc-cni-netd\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" 
(UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " Nov 12 22:54:59.902092 kubelet[2690]: I1112 22:54:59.902050 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-hostproc\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " Nov 12 22:54:59.902092 kubelet[2690]: I1112 22:54:59.902073 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-cni-path\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " Nov 12 22:54:59.902092 kubelet[2690]: I1112 22:54:59.902096 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-xtables-lock\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " Nov 12 22:54:59.902619 kubelet[2690]: I1112 22:54:59.902118 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxrjl\" (UniqueName: \"kubernetes.io/projected/1e24a339-b48b-4760-89ff-09531df1b4fb-kube-api-access-bxrjl\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " Nov 12 22:54:59.902619 kubelet[2690]: I1112 22:54:59.902143 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e24a339-b48b-4760-89ff-09531df1b4fb-cilium-config-path\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " Nov 12 22:54:59.902619 kubelet[2690]: I1112 22:54:59.902161 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-host-proc-sys-net\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " Nov 12 22:54:59.902619 kubelet[2690]: I1112 22:54:59.902187 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-cilium-cgroup\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " Nov 12 22:54:59.902619 kubelet[2690]: I1112 22:54:59.902206 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-bpf-maps\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " Nov 12 22:54:59.902619 kubelet[2690]: I1112 22:54:59.902225 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-host-proc-sys-kernel\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " Nov 12 22:54:59.902913 kubelet[2690]: I1112 22:54:59.902241 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-lib-modules\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " 
Nov 12 22:54:59.902913 kubelet[2690]: I1112 22:54:59.902261 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlh25\" (UniqueName: \"kubernetes.io/projected/4feabc2f-5985-42c4-b2f6-2015262cd112-kube-api-access-xlh25\") pod \"4feabc2f-5985-42c4-b2f6-2015262cd112\" (UID: \"4feabc2f-5985-42c4-b2f6-2015262cd112\") " Nov 12 22:54:59.902913 kubelet[2690]: I1112 22:54:59.902281 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4feabc2f-5985-42c4-b2f6-2015262cd112-cilium-config-path\") pod \"4feabc2f-5985-42c4-b2f6-2015262cd112\" (UID: \"4feabc2f-5985-42c4-b2f6-2015262cd112\") " Nov 12 22:54:59.902913 kubelet[2690]: I1112 22:54:59.902305 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e24a339-b48b-4760-89ff-09531df1b4fb-clustermesh-secrets\") pod \"1e24a339-b48b-4760-89ff-09531df1b4fb\" (UID: \"1e24a339-b48b-4760-89ff-09531df1b4fb\") " Nov 12 22:54:59.903266 kubelet[2690]: I1112 22:54:59.903223 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:54:59.904224 kubelet[2690]: I1112 22:54:59.903326 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:54:59.904224 kubelet[2690]: I1112 22:54:59.903246 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:54:59.904224 kubelet[2690]: I1112 22:54:59.903283 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:54:59.904224 kubelet[2690]: I1112 22:54:59.903297 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-hostproc" (OuterVolumeSpecName: "hostproc") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:54:59.904224 kubelet[2690]: I1112 22:54:59.903311 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-cni-path" (OuterVolumeSpecName: "cni-path") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:54:59.906128 kubelet[2690]: I1112 22:54:59.906075 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:54:59.906281 kubelet[2690]: I1112 22:54:59.906226 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:54:59.906281 kubelet[2690]: I1112 22:54:59.906264 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:54:59.906765 kubelet[2690]: I1112 22:54:59.906741 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:54:59.908716 kubelet[2690]: I1112 22:54:59.908667 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e24a339-b48b-4760-89ff-09531df1b4fb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 22:54:59.908781 kubelet[2690]: I1112 22:54:59.908733 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e24a339-b48b-4760-89ff-09531df1b4fb-kube-api-access-bxrjl" (OuterVolumeSpecName: "kube-api-access-bxrjl") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "kube-api-access-bxrjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:54:59.909046 kubelet[2690]: I1112 22:54:59.908815 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e24a339-b48b-4760-89ff-09531df1b4fb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:54:59.909046 kubelet[2690]: I1112 22:54:59.909006 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e24a339-b48b-4760-89ff-09531df1b4fb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e24a339-b48b-4760-89ff-09531df1b4fb" (UID: "1e24a339-b48b-4760-89ff-09531df1b4fb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:54:59.910260 kubelet[2690]: I1112 22:54:59.910216 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4feabc2f-5985-42c4-b2f6-2015262cd112-kube-api-access-xlh25" (OuterVolumeSpecName: "kube-api-access-xlh25") pod "4feabc2f-5985-42c4-b2f6-2015262cd112" (UID: "4feabc2f-5985-42c4-b2f6-2015262cd112"). InnerVolumeSpecName "kube-api-access-xlh25". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:54:59.911312 kubelet[2690]: I1112 22:54:59.911271 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4feabc2f-5985-42c4-b2f6-2015262cd112-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4feabc2f-5985-42c4-b2f6-2015262cd112" (UID: "4feabc2f-5985-42c4-b2f6-2015262cd112"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:54:59.993239 kubelet[2690]: E1112 22:54:59.993178 2690 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 22:55:00.003467 kubelet[2690]: I1112 22:55:00.003426 2690 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003467 kubelet[2690]: I1112 22:55:00.003451 2690 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xlh25\" (UniqueName: \"kubernetes.io/projected/4feabc2f-5985-42c4-b2f6-2015262cd112-kube-api-access-xlh25\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003467 kubelet[2690]: I1112 22:55:00.003462 2690 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003467 kubelet[2690]: I1112 22:55:00.003473 2690 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003467 kubelet[2690]: I1112 22:55:00.003481 2690 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4feabc2f-5985-42c4-b2f6-2015262cd112-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003740 kubelet[2690]: I1112 22:55:00.003490 2690 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e24a339-b48b-4760-89ff-09531df1b4fb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003740 kubelet[2690]: I1112 22:55:00.003498 2690 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e24a339-b48b-4760-89ff-09531df1b4fb-hubble-tls\") on node 
\"localhost\" DevicePath \"\"" Nov 12 22:55:00.003740 kubelet[2690]: I1112 22:55:00.003506 2690 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003740 kubelet[2690]: I1112 22:55:00.003514 2690 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003740 kubelet[2690]: I1112 22:55:00.003523 2690 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bxrjl\" (UniqueName: \"kubernetes.io/projected/1e24a339-b48b-4760-89ff-09531df1b4fb-kube-api-access-bxrjl\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003740 kubelet[2690]: I1112 22:55:00.003548 2690 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e24a339-b48b-4760-89ff-09531df1b4fb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003740 kubelet[2690]: I1112 22:55:00.003557 2690 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003740 kubelet[2690]: I1112 22:55:00.003564 2690 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003942 kubelet[2690]: I1112 22:55:00.003572 2690 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003942 kubelet[2690]: I1112 22:55:00.003579 2690 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.003942 kubelet[2690]: I1112 22:55:00.003587 2690 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e24a339-b48b-4760-89ff-09531df1b4fb-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 12 22:55:00.301037 kubelet[2690]: I1112 22:55:00.300918 2690 scope.go:117] "RemoveContainer" containerID="d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417" Nov 12 22:55:00.308412 systemd[1]: Removed slice kubepods-burstable-pod1e24a339_b48b_4760_89ff_09531df1b4fb.slice - libcontainer container kubepods-burstable-pod1e24a339_b48b_4760_89ff_09531df1b4fb.slice. Nov 12 22:55:00.308827 systemd[1]: kubepods-burstable-pod1e24a339_b48b_4760_89ff_09531df1b4fb.slice: Consumed 7.308s CPU time. Nov 12 22:55:00.309135 containerd[1497]: time="2024-11-12T22:55:00.309093059Z" level=info msg="RemoveContainer for \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\"" Nov 12 22:55:00.310632 systemd[1]: Removed slice kubepods-besteffort-pod4feabc2f_5985_42c4_b2f6_2015262cd112.slice - libcontainer container kubepods-besteffort-pod4feabc2f_5985_42c4_b2f6_2015262cd112.slice. 
Nov 12 22:55:00.317054 containerd[1497]: time="2024-11-12T22:55:00.316999800Z" level=info msg="RemoveContainer for \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\" returns successfully" Nov 12 22:55:00.317407 kubelet[2690]: I1112 22:55:00.317356 2690 scope.go:117] "RemoveContainer" containerID="d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3" Nov 12 22:55:00.318326 containerd[1497]: time="2024-11-12T22:55:00.318287045Z" level=info msg="RemoveContainer for \"d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3\"" Nov 12 22:55:00.353330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ceb205f3d26ab8d663c1f087c7ddece9159bbc3ea6798a087dc5bbabd9351d3-rootfs.mount: Deactivated successfully. Nov 12 22:55:00.353455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-633d703a22090a309bf85ae314bd29aa83157277897e591b4e88106e387d66f5-rootfs.mount: Deactivated successfully. Nov 12 22:55:00.353565 systemd[1]: var-lib-kubelet-pods-4feabc2f\x2d5985\x2d42c4\x2db2f6\x2d2015262cd112-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxlh25.mount: Deactivated successfully. Nov 12 22:55:00.353661 systemd[1]: var-lib-kubelet-pods-1e24a339\x2db48b\x2d4760\x2d89ff\x2d09531df1b4fb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbxrjl.mount: Deactivated successfully. Nov 12 22:55:00.353770 systemd[1]: var-lib-kubelet-pods-1e24a339\x2db48b\x2d4760\x2d89ff\x2d09531df1b4fb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 12 22:55:00.353875 systemd[1]: var-lib-kubelet-pods-1e24a339\x2db48b\x2d4760\x2d89ff\x2d09531df1b4fb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 12 22:55:00.427889 containerd[1497]: time="2024-11-12T22:55:00.427840676Z" level=info msg="RemoveContainer for \"d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3\" returns successfully" Nov 12 22:55:00.428298 kubelet[2690]: I1112 22:55:00.428141 2690 scope.go:117] "RemoveContainer" containerID="04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833" Nov 12 22:55:00.429094 containerd[1497]: time="2024-11-12T22:55:00.429072195Z" level=info msg="RemoveContainer for \"04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833\"" Nov 12 22:55:00.581009 containerd[1497]: time="2024-11-12T22:55:00.580879218Z" level=info msg="RemoveContainer for \"04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833\" returns successfully" Nov 12 22:55:00.581205 kubelet[2690]: I1112 22:55:00.581168 2690 scope.go:117] "RemoveContainer" containerID="67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d" Nov 12 22:55:00.582777 containerd[1497]: time="2024-11-12T22:55:00.582753820Z" level=info msg="RemoveContainer for \"67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d\"" Nov 12 22:55:00.655282 containerd[1497]: time="2024-11-12T22:55:00.655239568Z" level=info msg="RemoveContainer for \"67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d\" returns successfully" Nov 12 22:55:00.655559 kubelet[2690]: I1112 22:55:00.655502 2690 scope.go:117] "RemoveContainer" containerID="6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59" Nov 12 22:55:00.657115 containerd[1497]: time="2024-11-12T22:55:00.656786357Z" level=info msg="RemoveContainer for \"6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59\"" Nov 12 22:55:00.718052 containerd[1497]: time="2024-11-12T22:55:00.717994650Z" level=info msg="RemoveContainer for 
\"6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59\" returns successfully" Nov 12 22:55:00.718331 kubelet[2690]: I1112 22:55:00.718304 2690 scope.go:117] "RemoveContainer" containerID="d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417" Nov 12 22:55:00.718820 containerd[1497]: time="2024-11-12T22:55:00.718762309Z" level=error msg="ContainerStatus for \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\": not found" Nov 12 22:55:00.742196 kubelet[2690]: E1112 22:55:00.742147 2690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\": not found" containerID="d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417" Nov 12 22:55:00.742341 kubelet[2690]: I1112 22:55:00.742196 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417"} err="failed to get container status \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\": rpc error: code = NotFound desc = an error occurred when try to find container \"d71fc9e79d38ee40b90edfe70d6b8e5eefb820e94e6265fbf90568bf59ae8417\": not found" Nov 12 22:55:00.742341 kubelet[2690]: I1112 22:55:00.742289 2690 scope.go:117] "RemoveContainer" containerID="d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3" Nov 12 22:55:00.742652 containerd[1497]: time="2024-11-12T22:55:00.742609153Z" level=error msg="ContainerStatus for \"d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3\": not found" Nov 12 22:55:00.742832 kubelet[2690]: E1112 22:55:00.742798 2690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3\": not found" containerID="d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3" Nov 12 22:55:00.742903 kubelet[2690]: I1112 22:55:00.742836 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3"} err="failed to get container status \"d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d29c70cb05d50eae9a88575f29715e028d52c2f5b895fc11f89542fa60d44bb3\": not found" Nov 12 22:55:00.742903 kubelet[2690]: I1112 22:55:00.742880 2690 scope.go:117] "RemoveContainer" containerID="04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833" Nov 12 22:55:00.743243 containerd[1497]: time="2024-11-12T22:55:00.743199374Z" level=error msg="ContainerStatus for \"04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833\": not found" Nov 12 22:55:00.743370 kubelet[2690]: E1112 22:55:00.743353 2690 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833\": not found" containerID="04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833" Nov 12 22:55:00.743414 kubelet[2690]: I1112 22:55:00.743373 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833"} err="failed to get container status \"04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833\": rpc error: code = NotFound desc = an error occurred when try to find container \"04bf2eb6d95ea759e15f97427ecaa41caf54de44a5d83cacb62b28b5cc059833\": not found" Nov 12 22:55:00.743414 kubelet[2690]: I1112 22:55:00.743396 2690 scope.go:117] "RemoveContainer" containerID="67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d" Nov 12 22:55:00.743598 containerd[1497]: time="2024-11-12T22:55:00.743568496Z" level=error msg="ContainerStatus for \"67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d\": not found" Nov 12 22:55:00.743689 kubelet[2690]: E1112 22:55:00.743660 2690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d\": not found" containerID="67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d" Nov 12 22:55:00.743737 kubelet[2690]: I1112 22:55:00.743688 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d"} err="failed to get container status \"67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d\": rpc error: code = NotFound desc = an error occurred when try to find container \"67fbb520812b6c8b94f572510aacc1b899a5a92e07e21fc96cdbe217edfb481d\": not found" Nov 12 22:55:00.743737 kubelet[2690]: I1112 22:55:00.743706 2690 scope.go:117] "RemoveContainer" containerID="6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59" Nov 12 22:55:00.743958 containerd[1497]: time="2024-11-12T22:55:00.743911918Z" level=error msg="ContainerStatus for \"6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59\": not found" Nov 12 22:55:00.744117 kubelet[2690]: E1112 22:55:00.744044 2690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59\": not found" containerID="6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59" Nov 12 22:55:00.744117 kubelet[2690]: I1112 22:55:00.744065 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59"} err="failed to get container status \"6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"6b7f026fc380f8ca030dc61e8cde69beb3002ce2abe6e777cc8d5fb4459bad59\": not found" Nov 12 22:55:00.744117 kubelet[2690]: I1112 22:55:00.744080 2690 scope.go:117] "RemoveContainer" containerID="1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb" Nov 12 22:55:00.744979 containerd[1497]: time="2024-11-12T22:55:00.744958506Z" level=info msg="RemoveContainer for \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\"" Nov 12 22:55:00.748271 containerd[1497]: time="2024-11-12T22:55:00.748236384Z" level=info msg="RemoveContainer for \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\" returns successfully" Nov 12 22:55:00.748391 kubelet[2690]: I1112 22:55:00.748367 2690 scope.go:117] "RemoveContainer" containerID="1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb" Nov 12 22:55:00.748556 containerd[1497]: time="2024-11-12T22:55:00.748507198Z" level=error msg="ContainerStatus for \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\": not found" Nov 12 22:55:00.748653 kubelet[2690]: E1112 22:55:00.748629 2690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\": not found" containerID="1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb" Nov 12 22:55:00.748684 kubelet[2690]: I1112 22:55:00.748650 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb"} err="failed to get container status \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c0bf28a155d15776847546ab5797aeefd5564280063eb9dd325803f36235cbb\": not found" Nov 12 22:55:00.944109 kubelet[2690]: I1112 22:55:00.944066 2690 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e24a339-b48b-4760-89ff-09531df1b4fb" path="/var/lib/kubelet/pods/1e24a339-b48b-4760-89ff-09531df1b4fb/volumes" Nov 12 22:55:00.944947 kubelet[2690]: I1112 22:55:00.944924 2690 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4feabc2f-5985-42c4-b2f6-2015262cd112" path="/var/lib/kubelet/pods/4feabc2f-5985-42c4-b2f6-2015262cd112/volumes" Nov 12 22:55:01.025389 sshd[4358]: Connection closed by 10.0.0.1 port 43256 Nov 12 22:55:01.025842 sshd-session[4356]: pam_unix(sshd:session): session closed for user core Nov 12 22:55:01.038080 systemd[1]: sshd@26-10.0.0.140:22-10.0.0.1:43256.service: Deactivated successfully. Nov 12 22:55:01.040521 systemd[1]: session-27.scope: Deactivated successfully. Nov 12 22:55:01.042596 systemd-logind[1485]: Session 27 logged out. Waiting for processes to exit. Nov 12 22:55:01.050888 systemd[1]: Started sshd@27-10.0.0.140:22-10.0.0.1:43262.service - OpenSSH per-connection server daemon (10.0.0.1:43262). Nov 12 22:55:01.052001 systemd-logind[1485]: Removed session 27. 
Nov 12 22:55:01.094874 sshd[4517]: Accepted publickey for core from 10.0.0.1 port 43262 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:55:01.096434 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:55:01.102966 systemd-logind[1485]: New session 28 of user core. Nov 12 22:55:01.111692 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 12 22:55:01.713331 sshd[4520]: Connection closed by 10.0.0.1 port 43262 Nov 12 22:55:01.713965 sshd-session[4517]: pam_unix(sshd:session): session closed for user core Nov 12 22:55:01.723513 systemd[1]: sshd@27-10.0.0.140:22-10.0.0.1:43262.service: Deactivated successfully. Nov 12 22:55:01.723975 kubelet[2690]: I1112 22:55:01.723692 2690 topology_manager.go:215] "Topology Admit Handler" podUID="8f85cfda-ea59-4902-a46b-25278a61ab1c" podNamespace="kube-system" podName="cilium-ztc8r" Nov 12 22:55:01.723975 kubelet[2690]: E1112 22:55:01.723773 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e24a339-b48b-4760-89ff-09531df1b4fb" containerName="apply-sysctl-overwrites" Nov 12 22:55:01.723975 kubelet[2690]: E1112 22:55:01.723784 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e24a339-b48b-4760-89ff-09531df1b4fb" containerName="clean-cilium-state" Nov 12 22:55:01.723975 kubelet[2690]: E1112 22:55:01.723794 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4feabc2f-5985-42c4-b2f6-2015262cd112" containerName="cilium-operator" Nov 12 22:55:01.723975 kubelet[2690]: E1112 22:55:01.723802 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e24a339-b48b-4760-89ff-09531df1b4fb" containerName="mount-cgroup" Nov 12 22:55:01.723975 kubelet[2690]: E1112 22:55:01.723809 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e24a339-b48b-4760-89ff-09531df1b4fb" containerName="mount-bpf-fs" Nov 12 22:55:01.723975 kubelet[2690]: E1112 22:55:01.723817 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e24a339-b48b-4760-89ff-09531df1b4fb" containerName="cilium-agent" Nov 12 22:55:01.726026 systemd[1]: session-28.scope: Deactivated successfully. Nov 12 22:55:01.728115 kubelet[2690]: I1112 22:55:01.727036 2690 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e24a339-b48b-4760-89ff-09531df1b4fb" containerName="cilium-agent" Nov 12 22:55:01.728115 kubelet[2690]: I1112 22:55:01.727079 2690 memory_manager.go:354] "RemoveStaleState removing state" podUID="4feabc2f-5985-42c4-b2f6-2015262cd112" containerName="cilium-operator" Nov 12 22:55:01.732610 systemd-logind[1485]: Session 28 logged out. Waiting for processes to exit. Nov 12 22:55:01.741921 systemd[1]: Started sshd@28-10.0.0.140:22-10.0.0.1:43278.service - OpenSSH per-connection server daemon (10.0.0.1:43278). Nov 12 22:55:01.746038 systemd-logind[1485]: Removed session 28. Nov 12 22:55:01.756341 systemd[1]: Created slice kubepods-burstable-pod8f85cfda_ea59_4902_a46b_25278a61ab1c.slice - libcontainer container kubepods-burstable-pod8f85cfda_ea59_4902_a46b_25278a61ab1c.slice. Nov 12 22:55:01.786630 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 43278 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:55:01.788323 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:55:01.793388 systemd-logind[1485]: New session 29 of user core. Nov 12 22:55:01.804667 systemd[1]: Started session-29.scope - Session 29 of User core. 
Nov 12 22:55:01.814475 kubelet[2690]: I1112 22:55:01.814423 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f85cfda-ea59-4902-a46b-25278a61ab1c-hubble-tls\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814475 kubelet[2690]: I1112 22:55:01.814462 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f85cfda-ea59-4902-a46b-25278a61ab1c-etc-cni-netd\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814608 kubelet[2690]: I1112 22:55:01.814489 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f85cfda-ea59-4902-a46b-25278a61ab1c-lib-modules\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814608 kubelet[2690]: I1112 22:55:01.814528 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8f85cfda-ea59-4902-a46b-25278a61ab1c-cilium-ipsec-secrets\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814608 kubelet[2690]: I1112 22:55:01.814569 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f85cfda-ea59-4902-a46b-25278a61ab1c-hostproc\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814608 kubelet[2690]: I1112 22:55:01.814585 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r866v\" (UniqueName: \"kubernetes.io/projected/8f85cfda-ea59-4902-a46b-25278a61ab1c-kube-api-access-r866v\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814608 kubelet[2690]: I1112 22:55:01.814601 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f85cfda-ea59-4902-a46b-25278a61ab1c-bpf-maps\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814731 kubelet[2690]: I1112 22:55:01.814616 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f85cfda-ea59-4902-a46b-25278a61ab1c-cilium-cgroup\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814731 kubelet[2690]: I1112 22:55:01.814631 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f85cfda-ea59-4902-a46b-25278a61ab1c-cilium-run\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814731 kubelet[2690]: I1112 22:55:01.814644 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/8f85cfda-ea59-4902-a46b-25278a61ab1c-host-proc-sys-net\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814731 kubelet[2690]: I1112 22:55:01.814660 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f85cfda-ea59-4902-a46b-25278a61ab1c-xtables-lock\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814731 kubelet[2690]: I1112 22:55:01.814675 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f85cfda-ea59-4902-a46b-25278a61ab1c-clustermesh-secrets\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814731 kubelet[2690]: I1112 22:55:01.814688 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f85cfda-ea59-4902-a46b-25278a61ab1c-host-proc-sys-kernel\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814877 kubelet[2690]: I1112 22:55:01.814703 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f85cfda-ea59-4902-a46b-25278a61ab1c-cni-path\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.814877 kubelet[2690]: I1112 22:55:01.814719 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f85cfda-ea59-4902-a46b-25278a61ab1c-cilium-config-path\") pod \"cilium-ztc8r\" (UID: \"8f85cfda-ea59-4902-a46b-25278a61ab1c\") " pod="kube-system/cilium-ztc8r" Nov 12 22:55:01.854002 sshd[4534]: Connection closed by 10.0.0.1 port 43278 Nov 12 22:55:01.854419 sshd-session[4532]: pam_unix(sshd:session): session closed for user core Nov 12 22:55:01.863300 systemd[1]: sshd@28-10.0.0.140:22-10.0.0.1:43278.service: Deactivated successfully. Nov 12 22:55:01.864981 systemd[1]: session-29.scope: Deactivated successfully. Nov 12 22:55:01.866763 systemd-logind[1485]: Session 29 logged out. Waiting for processes to exit. Nov 12 22:55:01.868211 systemd[1]: Started sshd@29-10.0.0.140:22-10.0.0.1:43290.service - OpenSSH per-connection server daemon (10.0.0.1:43290). Nov 12 22:55:01.869166 systemd-logind[1485]: Removed session 29. Nov 12 22:55:01.907413 sshd[4540]: Accepted publickey for core from 10.0.0.1 port 43290 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:55:01.908866 sshd-session[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:55:01.912896 systemd-logind[1485]: New session 30 of user core. Nov 12 22:55:01.922010 systemd[1]: Started session-30.scope - Session 30 of User core. 
Nov 12 22:55:02.059238 kubelet[2690]: E1112 22:55:02.059090 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:55:02.059682 containerd[1497]: time="2024-11-12T22:55:02.059627019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ztc8r,Uid:8f85cfda-ea59-4902-a46b-25278a61ab1c,Namespace:kube-system,Attempt:0,}" Nov 12 22:55:02.083217 containerd[1497]: time="2024-11-12T22:55:02.082818573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:55:02.083217 containerd[1497]: time="2024-11-12T22:55:02.083020856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:55:02.083217 containerd[1497]: time="2024-11-12T22:55:02.083034221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:55:02.083217 containerd[1497]: time="2024-11-12T22:55:02.083134933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:55:02.109660 systemd[1]: Started cri-containerd-dd0e2cc7b4d15e648206f5d24de203cdd6e76091308e5c20a26313cbc2f775f7.scope - libcontainer container dd0e2cc7b4d15e648206f5d24de203cdd6e76091308e5c20a26313cbc2f775f7. Nov 12 22:55:02.132167 containerd[1497]: time="2024-11-12T22:55:02.132119861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ztc8r,Uid:8f85cfda-ea59-4902-a46b-25278a61ab1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd0e2cc7b4d15e648206f5d24de203cdd6e76091308e5c20a26313cbc2f775f7\"" Nov 12 22:55:02.132909 kubelet[2690]: E1112 22:55:02.132879 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:55:02.135813 containerd[1497]: time="2024-11-12T22:55:02.135781673Z" level=info msg="CreateContainer within sandbox \"dd0e2cc7b4d15e648206f5d24de203cdd6e76091308e5c20a26313cbc2f775f7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 22:55:02.173642 containerd[1497]: time="2024-11-12T22:55:02.173584119Z" level=info msg="CreateContainer within sandbox \"dd0e2cc7b4d15e648206f5d24de203cdd6e76091308e5c20a26313cbc2f775f7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"279461bca8a69f7b8531e7dc58b03948f56bde843f2bd2cbb114be44f84db688\"" Nov 12 22:55:02.174244 containerd[1497]: time="2024-11-12T22:55:02.174198105Z" level=info msg="StartContainer for \"279461bca8a69f7b8531e7dc58b03948f56bde843f2bd2cbb114be44f84db688\"" Nov 12 22:55:02.200690 systemd[1]: Started cri-containerd-279461bca8a69f7b8531e7dc58b03948f56bde843f2bd2cbb114be44f84db688.scope - libcontainer container 279461bca8a69f7b8531e7dc58b03948f56bde843f2bd2cbb114be44f84db688. Nov 12 22:55:02.228103 containerd[1497]: time="2024-11-12T22:55:02.228044252Z" level=info msg="StartContainer for \"279461bca8a69f7b8531e7dc58b03948f56bde843f2bd2cbb114be44f84db688\" returns successfully" Nov 12 22:55:02.237772 systemd[1]: cri-containerd-279461bca8a69f7b8531e7dc58b03948f56bde843f2bd2cbb114be44f84db688.scope: Deactivated successfully. 
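The containerd entries above show the CRI sequence for the new pod: RunPodSandbox for cilium-ztc8r, then CreateContainer and StartContainer for the mount-cgroup init container inside that sandbox. The fragment below sketches those three calls with the k8s.io/cri-api v1 types; it assumes an already-dialed RuntimeServiceClient (as in the earlier sketch), and the image reference and command are placeholders, not the real Cilium ones.

```go
// Sketch only: the RunPodSandbox -> CreateContainer -> StartContainer flow
// seen in the log, expressed with the CRI v1 API types.
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func runMountCgroup(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-ztc8r",
			Uid:       "8f85cfda-ea59-4902-a46b-25278a61ab1c",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "example.invalid/cilium:placeholder"}, // placeholder image
			Command:  []string{"sh", "-c", "true"},                                       // placeholder command
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}

	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return err
}
```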
Nov 12 22:55:02.270101 containerd[1497]: time="2024-11-12T22:55:02.270030731Z" level=info msg="shim disconnected" id=279461bca8a69f7b8531e7dc58b03948f56bde843f2bd2cbb114be44f84db688 namespace=k8s.io Nov 12 22:55:02.270101 containerd[1497]: time="2024-11-12T22:55:02.270085867Z" level=warning msg="cleaning up after shim disconnected" id=279461bca8a69f7b8531e7dc58b03948f56bde843f2bd2cbb114be44f84db688 namespace=k8s.io Nov 12 22:55:02.270101 containerd[1497]: time="2024-11-12T22:55:02.270093711Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:55:02.312637 kubelet[2690]: E1112 22:55:02.311766 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:55:02.314234 containerd[1497]: time="2024-11-12T22:55:02.314190607Z" level=info msg="CreateContainer within sandbox \"dd0e2cc7b4d15e648206f5d24de203cdd6e76091308e5c20a26313cbc2f775f7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 22:55:02.328292 containerd[1497]: time="2024-11-12T22:55:02.328244161Z" level=info msg="CreateContainer within sandbox \"dd0e2cc7b4d15e648206f5d24de203cdd6e76091308e5c20a26313cbc2f775f7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0c45029b3f7ddd32ff72d2c2708241562c243953a8c7e521ed923d83e76d3b33\"" Nov 12 22:55:02.329280 containerd[1497]: time="2024-11-12T22:55:02.328756615Z" level=info msg="StartContainer for \"0c45029b3f7ddd32ff72d2c2708241562c243953a8c7e521ed923d83e76d3b33\"" Nov 12 22:55:02.354704 systemd[1]: Started cri-containerd-0c45029b3f7ddd32ff72d2c2708241562c243953a8c7e521ed923d83e76d3b33.scope - libcontainer container 0c45029b3f7ddd32ff72d2c2708241562c243953a8c7e521ed923d83e76d3b33. Nov 12 22:55:02.382995 containerd[1497]: time="2024-11-12T22:55:02.382949079Z" level=info msg="StartContainer for \"0c45029b3f7ddd32ff72d2c2708241562c243953a8c7e521ed923d83e76d3b33\" returns successfully" Nov 12 22:55:02.388828 systemd[1]: cri-containerd-0c45029b3f7ddd32ff72d2c2708241562c243953a8c7e521ed923d83e76d3b33.scope: Deactivated successfully. 
Nov 12 22:55:02.412370 containerd[1497]: time="2024-11-12T22:55:02.412306621Z" level=info msg="shim disconnected" id=0c45029b3f7ddd32ff72d2c2708241562c243953a8c7e521ed923d83e76d3b33 namespace=k8s.io Nov 12 22:55:02.412370 containerd[1497]: time="2024-11-12T22:55:02.412364050Z" level=warning msg="cleaning up after shim disconnected" id=0c45029b3f7ddd32ff72d2c2708241562c243953a8c7e521ed923d83e76d3b33 namespace=k8s.io Nov 12 22:55:02.412370 containerd[1497]: time="2024-11-12T22:55:02.412372465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:55:03.315440 kubelet[2690]: E1112 22:55:03.315409 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:55:03.317407 containerd[1497]: time="2024-11-12T22:55:03.317370407Z" level=info msg="CreateContainer within sandbox \"dd0e2cc7b4d15e648206f5d24de203cdd6e76091308e5c20a26313cbc2f775f7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 22:55:03.758656 containerd[1497]: time="2024-11-12T22:55:03.758599937Z" level=info msg="CreateContainer within sandbox \"dd0e2cc7b4d15e648206f5d24de203cdd6e76091308e5c20a26313cbc2f775f7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2aef5d117801597fc05480d33aafafd8d7db6d9a10e13575dcf1f6ae3ef848a4\"" Nov 12 22:55:03.759216 containerd[1497]: time="2024-11-12T22:55:03.759186640Z" level=info msg="StartContainer for \"2aef5d117801597fc05480d33aafafd8d7db6d9a10e13575dcf1f6ae3ef848a4\"" Nov 12 22:55:03.787797 systemd[1]: Started cri-containerd-2aef5d117801597fc05480d33aafafd8d7db6d9a10e13575dcf1f6ae3ef848a4.scope - libcontainer container 2aef5d117801597fc05480d33aafafd8d7db6d9a10e13575dcf1f6ae3ef848a4. Nov 12 22:55:03.823060 systemd[1]: cri-containerd-2aef5d117801597fc05480d33aafafd8d7db6d9a10e13575dcf1f6ae3ef848a4.scope: Deactivated successfully. Nov 12 22:55:03.919543 containerd[1497]: time="2024-11-12T22:55:03.919496065Z" level=info msg="StartContainer for \"2aef5d117801597fc05480d33aafafd8d7db6d9a10e13575dcf1f6ae3ef848a4\" returns successfully" Nov 12 22:55:03.937141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2aef5d117801597fc05480d33aafafd8d7db6d9a10e13575dcf1f6ae3ef848a4-rootfs.mount: Deactivated successfully. 
Nov 12 22:55:04.183199 containerd[1497]: time="2024-11-12T22:55:04.183119534Z" level=info msg="shim disconnected" id=2aef5d117801597fc05480d33aafafd8d7db6d9a10e13575dcf1f6ae3ef848a4 namespace=k8s.io Nov 12 22:55:04.183199 containerd[1497]: time="2024-11-12T22:55:04.183182263Z" level=warning msg="cleaning up after shim disconnected" id=2aef5d117801597fc05480d33aafafd8d7db6d9a10e13575dcf1f6ae3ef848a4 namespace=k8s.io Nov 12 22:55:04.183199 containerd[1497]: time="2024-11-12T22:55:04.183191360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:55:04.196996 containerd[1497]: time="2024-11-12T22:55:04.196939500Z" level=warning msg="cleanup warnings time=\"2024-11-12T22:55:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 12 22:55:04.319398 kubelet[2690]: E1112 22:55:04.319365 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:55:04.321680 containerd[1497]: time="2024-11-12T22:55:04.320977993Z" level=info msg="CreateContainer within sandbox \"dd0e2cc7b4d15e648206f5d24de203cdd6e76091308e5c20a26313cbc2f775f7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 22:55:04.713266 containerd[1497]: time="2024-11-12T22:55:04.713103404Z" level=info msg="CreateContainer within sandbox \"dd0e2cc7b4d15e648206f5d24de203cdd6e76091308e5c20a26313cbc2f775f7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7eb0b2758965dfbcfa3d2c4e7ea58bf5dc0b9aa928e5430d47ce1a69355a9a9d\"" Nov 12 22:55:04.718240 containerd[1497]: time="2024-11-12T22:55:04.716837030Z" level=info msg="StartContainer for \"7eb0b2758965dfbcfa3d2c4e7ea58bf5dc0b9aa928e5430d47ce1a69355a9a9d\"" Nov 12 22:55:04.743652 systemd[1]: Started cri-containerd-7eb0b2758965dfbcfa3d2c4e7ea58bf5dc0b9aa928e5430d47ce1a69355a9a9d.scope - libcontainer container 7eb0b2758965dfbcfa3d2c4e7ea58bf5dc0b9aa928e5430d47ce1a69355a9a9d. Nov 12 22:55:04.766692 systemd[1]: cri-containerd-7eb0b2758965dfbcfa3d2c4e7ea58bf5dc0b9aa928e5430d47ce1a69355a9a9d.scope: Deactivated successfully. Nov 12 22:55:04.804363 containerd[1497]: time="2024-11-12T22:55:04.804332115Z" level=info msg="StartContainer for \"7eb0b2758965dfbcfa3d2c4e7ea58bf5dc0b9aa928e5430d47ce1a69355a9a9d\" returns successfully" Nov 12 22:55:04.864964 containerd[1497]: time="2024-11-12T22:55:04.864902218Z" level=info msg="shim disconnected" id=7eb0b2758965dfbcfa3d2c4e7ea58bf5dc0b9aa928e5430d47ce1a69355a9a9d namespace=k8s.io Nov 12 22:55:04.864964 containerd[1497]: time="2024-11-12T22:55:04.864951241Z" level=warning msg="cleaning up after shim disconnected" id=7eb0b2758965dfbcfa3d2c4e7ea58bf5dc0b9aa928e5430d47ce1a69355a9a9d namespace=k8s.io Nov 12 22:55:04.864964 containerd[1497]: time="2024-11-12T22:55:04.864959266Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:55:04.924162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7eb0b2758965dfbcfa3d2c4e7ea58bf5dc0b9aa928e5430d47ce1a69355a9a9d-rootfs.mount: Deactivated successfully. 
Nov 12 22:55:04.993887 kubelet[2690]: E1112 22:55:04.993782 2690 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 22:55:05.329932 kubelet[2690]: E1112 22:55:05.329799 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:55:05.332616 containerd[1497]: time="2024-11-12T22:55:05.332569880Z" level=info msg="CreateContainer within sandbox \"dd0e2cc7b4d15e648206f5d24de203cdd6e76091308e5c20a26313cbc2f775f7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 22:55:05.875282 containerd[1497]: time="2024-11-12T22:55:05.875235157Z" level=info msg="CreateContainer within sandbox \"dd0e2cc7b4d15e648206f5d24de203cdd6e76091308e5c20a26313cbc2f775f7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b33ec2193b315a9bc9d0084044d77d34d51e7011abc2566c8efe504df13afafd\"" Nov 12 22:55:05.875759 containerd[1497]: time="2024-11-12T22:55:05.875734756Z" level=info msg="StartContainer for \"b33ec2193b315a9bc9d0084044d77d34d51e7011abc2566c8efe504df13afafd\"" Nov 12 22:55:05.902664 systemd[1]: Started cri-containerd-b33ec2193b315a9bc9d0084044d77d34d51e7011abc2566c8efe504df13afafd.scope - libcontainer container b33ec2193b315a9bc9d0084044d77d34d51e7011abc2566c8efe504df13afafd. Nov 12 22:55:06.047875 containerd[1497]: time="2024-11-12T22:55:06.047809939Z" level=info msg="StartContainer for \"b33ec2193b315a9bc9d0084044d77d34d51e7011abc2566c8efe504df13afafd\" returns successfully" Nov 12 22:55:06.334253 kubelet[2690]: E1112 22:55:06.334218 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:55:06.392572 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 12 22:55:06.844697 kubelet[2690]: I1112 22:55:06.844632 2690 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-12T22:55:06Z","lastTransitionTime":"2024-11-12T22:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 12 22:55:08.060693 kubelet[2690]: E1112 22:55:08.060656 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:55:09.504874 systemd-networkd[1430]: lxc_health: Link UP Nov 12 22:55:09.517844 systemd-networkd[1430]: lxc_health: Gained carrier Nov 12 22:55:09.939714 kubelet[2690]: E1112 22:55:09.939195 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:55:10.061513 kubelet[2690]: E1112 22:55:10.061469 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:55:10.146560 kubelet[2690]: I1112 22:55:10.144169 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ztc8r" podStartSLOduration=9.144133478 podStartE2EDuration="9.144133478s" 
podCreationTimestamp="2024-11-12 22:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:55:06.350196052 +0000 UTC m=+91.495902580" watchObservedRunningTime="2024-11-12 22:55:10.144133478 +0000 UTC m=+95.289840006" Nov 12 22:55:10.353105 kubelet[2690]: E1112 22:55:10.352993 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:55:11.133564 systemd-networkd[1430]: lxc_health: Gained IPv6LL Nov 12 22:55:11.356361 kubelet[2690]: E1112 22:55:11.356317 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:55:16.903692 sshd[4547]: Connection closed by 10.0.0.1 port 43290 Nov 12 22:55:16.904063 sshd-session[4540]: pam_unix(sshd:session): session closed for user core Nov 12 22:55:16.906997 systemd[1]: sshd@29-10.0.0.140:22-10.0.0.1:43290.service: Deactivated successfully. Nov 12 22:55:16.909075 systemd[1]: session-30.scope: Deactivated successfully. Nov 12 22:55:16.910654 systemd-logind[1485]: Session 30 logged out. Waiting for processes to exit. Nov 12 22:55:16.911563 systemd-logind[1485]: Removed session 30.
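Earlier in this window kubelet marked the node not ready ("NetworkPluginNotReady ... cni plugin not initialized") and it recovered once the cilium-agent container started and lxc_health gained carrier. A small client-go sketch for inspecting that same Ready condition is below; kubeconfig loading is illustrative, and the node name "localhost" is taken from the log entry above.

```go
// Sketch only: read the node Ready condition that the "Node became not ready"
// entry above flipped to False while the CNI plugin was still initializing.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed: credentials come from the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
```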