Nov 12 20:55:02.059188 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:55:02.059208 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:55:02.059218 kernel: BIOS-provided physical RAM map:
Nov 12 20:55:02.059225 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 12 20:55:02.059230 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 12 20:55:02.059236 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 12 20:55:02.059244 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 12 20:55:02.059251 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 12 20:55:02.059257 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Nov 12 20:55:02.059262 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Nov 12 20:55:02.059271 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Nov 12 20:55:02.059277 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Nov 12 20:55:02.059282 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Nov 12 20:55:02.059289 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Nov 12 20:55:02.059299 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Nov 12 20:55:02.059308 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 12 20:55:02.059320 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Nov 12 20:55:02.059329 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Nov 12 20:55:02.059338 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 12 20:55:02.059347 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 12 20:55:02.059357 kernel: NX (Execute Disable) protection: active
Nov 12 20:55:02.059366 kernel: APIC: Static calls initialized
Nov 12 20:55:02.059375 kernel: efi: EFI v2.7 by EDK II
Nov 12 20:55:02.059384 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Nov 12 20:55:02.059394 kernel: SMBIOS 2.8 present.
Nov 12 20:55:02.059403 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Nov 12 20:55:02.059412 kernel: Hypervisor detected: KVM
Nov 12 20:55:02.059425 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 20:55:02.059432 kernel: kvm-clock: using sched offset of 5485111097 cycles
Nov 12 20:55:02.059439 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 20:55:02.059446 kernel: tsc: Detected 2794.744 MHz processor
Nov 12 20:55:02.059453 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:55:02.059460 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:55:02.059466 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Nov 12 20:55:02.059473 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 12 20:55:02.059480 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:55:02.059489 kernel: Using GB pages for direct mapping
Nov 12 20:55:02.059498 kernel: Secure boot disabled
Nov 12 20:55:02.059508 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:55:02.059517 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 12 20:55:02.059532 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 12 20:55:02.059542 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:55:02.059553 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:55:02.059563 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 12 20:55:02.059570 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:55:02.059577 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:55:02.059584 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:55:02.059591 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:55:02.059598 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 12 20:55:02.059622 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 12 20:55:02.059637 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Nov 12 20:55:02.059648 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 12 20:55:02.059658 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 12 20:55:02.059667 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 12 20:55:02.059674 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 12 20:55:02.059681 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 12 20:55:02.059688 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 12 20:55:02.059694 kernel: No NUMA configuration found
Nov 12 20:55:02.059701 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Nov 12 20:55:02.059711 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Nov 12 20:55:02.059718 kernel: Zone ranges:
Nov 12 20:55:02.059725 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:55:02.059732 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Nov 12 20:55:02.059739 kernel: Normal empty
Nov 12 20:55:02.059746 kernel: Movable zone start for each node
Nov 12 20:55:02.059752 kernel: Early memory node ranges
Nov 12 20:55:02.059759 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 12 20:55:02.059766 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 12 20:55:02.059773 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 12 20:55:02.059782 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Nov 12 20:55:02.059803 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Nov 12 20:55:02.059810 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Nov 12 20:55:02.059819 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Nov 12 20:55:02.059827 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:55:02.059836 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 12 20:55:02.059846 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 12 20:55:02.059856 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:55:02.059865 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Nov 12 20:55:02.059879 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 12 20:55:02.059890 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Nov 12 20:55:02.059900 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 20:55:02.059907 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 20:55:02.059914 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:55:02.059921 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 20:55:02.059928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 20:55:02.059935 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:55:02.059942 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 20:55:02.059949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 20:55:02.059958 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:55:02.059965 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 20:55:02.059972 kernel: TSC deadline timer available
Nov 12 20:55:02.059979 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 12 20:55:02.059986 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 20:55:02.059992 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 12 20:55:02.059999 kernel: kvm-guest: setup PV sched yield
Nov 12 20:55:02.060006 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 12 20:55:02.060013 kernel: Booting paravirtualized kernel on KVM
Nov 12 20:55:02.060022 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:55:02.060029 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 12 20:55:02.060036 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Nov 12 20:55:02.060043 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Nov 12 20:55:02.060049 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 12 20:55:02.060056 kernel: kvm-guest: PV spinlocks enabled
Nov 12 20:55:02.060063 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 20:55:02.060071 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:55:02.060081 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:55:02.060088 kernel: random: crng init done
Nov 12 20:55:02.060095 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 20:55:02.060102 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 20:55:02.060109 kernel: Fallback order for Node 0: 0
Nov 12 20:55:02.060115 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Nov 12 20:55:02.060122 kernel: Policy zone: DMA32
Nov 12 20:55:02.060129 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:55:02.060136 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 171124K reserved, 0K cma-reserved)
Nov 12 20:55:02.060146 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 20:55:02.060152 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:55:02.060159 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:55:02.060166 kernel: Dynamic Preempt: voluntary
Nov 12 20:55:02.060181 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:55:02.060191 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:55:02.060198 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 20:55:02.060206 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:55:02.060213 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:55:02.060220 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:55:02.060227 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:55:02.060235 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 20:55:02.060245 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 12 20:55:02.060252 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:55:02.060259 kernel: Console: colour dummy device 80x25
Nov 12 20:55:02.060266 kernel: printk: console [ttyS0] enabled
Nov 12 20:55:02.060274 kernel: ACPI: Core revision 20230628
Nov 12 20:55:02.060283 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 20:55:02.060291 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:55:02.060298 kernel: x2apic enabled
Nov 12 20:55:02.060305 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:55:02.060312 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 12 20:55:02.060319 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 12 20:55:02.060327 kernel: kvm-guest: setup PV IPIs
Nov 12 20:55:02.060334 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 20:55:02.060341 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 12 20:55:02.060351 kernel: Calibrating delay loop (skipped) preset value.. 5589.48 BogoMIPS (lpj=2794744)
Nov 12 20:55:02.060358 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 12 20:55:02.060365 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 12 20:55:02.060372 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 12 20:55:02.060379 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:55:02.060387 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:55:02.060394 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:55:02.060401 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:55:02.060408 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 12 20:55:02.060418 kernel: RETBleed: Mitigation: untrained return thunk
Nov 12 20:55:02.060425 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 20:55:02.060432 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 20:55:02.060440 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 12 20:55:02.060447 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 12 20:55:02.060455 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 12 20:55:02.060462 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:55:02.060469 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:55:02.060479 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:55:02.060486 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:55:02.060493 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 12 20:55:02.060500 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:55:02.060508 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:55:02.060515 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:55:02.060522 kernel: landlock: Up and running.
Nov 12 20:55:02.060529 kernel: SELinux: Initializing.
Nov 12 20:55:02.060536 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:55:02.060546 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:55:02.060553 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 12 20:55:02.060560 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:55:02.060567 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:55:02.060575 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:55:02.060582 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 12 20:55:02.060589 kernel: ... version: 0
Nov 12 20:55:02.060596 kernel: ... bit width: 48
Nov 12 20:55:02.060603 kernel: ... generic registers: 6
Nov 12 20:55:02.060625 kernel: ... value mask: 0000ffffffffffff
Nov 12 20:55:02.060633 kernel: ... max period: 00007fffffffffff
Nov 12 20:55:02.060640 kernel: ... fixed-purpose events: 0
Nov 12 20:55:02.060647 kernel: ... event mask: 000000000000003f
Nov 12 20:55:02.060654 kernel: signal: max sigframe size: 1776
Nov 12 20:55:02.060661 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:55:02.060668 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:55:02.060675 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:55:02.060683 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:55:02.060690 kernel: .... node #0, CPUs: #1 #2 #3
Nov 12 20:55:02.060699 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 20:55:02.060706 kernel: smpboot: Max logical packages: 1
Nov 12 20:55:02.060714 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS)
Nov 12 20:55:02.060721 kernel: devtmpfs: initialized
Nov 12 20:55:02.060728 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:55:02.060735 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 12 20:55:02.060742 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 12 20:55:02.060750 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Nov 12 20:55:02.060757 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 12 20:55:02.060767 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 12 20:55:02.060774 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:55:02.060781 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 20:55:02.060801 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:55:02.060810 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:55:02.060817 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:55:02.060825 kernel: audit: type=2000 audit(1731444901.271:1): state=initialized audit_enabled=0 res=1
Nov 12 20:55:02.060832 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:55:02.060841 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:55:02.060848 kernel: cpuidle: using governor menu
Nov 12 20:55:02.060855 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:55:02.060862 kernel: dca service started, version 1.12.1
Nov 12 20:55:02.060870 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 12 20:55:02.060880 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 12 20:55:02.060887 kernel: PCI: Using configuration type 1 for base access
Nov 12 20:55:02.060895 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:55:02.060902 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:55:02.060912 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:55:02.060919 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:55:02.060926 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:55:02.060934 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:55:02.060941 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:55:02.060948 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:55:02.060955 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:55:02.060962 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:55:02.060969 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:55:02.060979 kernel: ACPI: Interpreter enabled
Nov 12 20:55:02.060986 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 12 20:55:02.060993 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:55:02.061001 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:55:02.061008 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 20:55:02.061015 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 12 20:55:02.061023 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 20:55:02.061200 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 20:55:02.061331 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 12 20:55:02.061451 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 12 20:55:02.061461 kernel: PCI host bridge to bus 0000:00
Nov 12 20:55:02.061583 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 20:55:02.061734 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:55:02.061873 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:55:02.061987 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 12 20:55:02.062099 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 12 20:55:02.062209 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Nov 12 20:55:02.062317 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:55:02.062452 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 12 20:55:02.062583 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 12 20:55:02.062719 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Nov 12 20:55:02.062857 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Nov 12 20:55:02.063007 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 12 20:55:02.063135 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Nov 12 20:55:02.063263 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 20:55:02.063398 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 20:55:02.063518 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Nov 12 20:55:02.063658 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Nov 12 20:55:02.063795 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Nov 12 20:55:02.063929 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 12 20:55:02.064051 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Nov 12 20:55:02.064169 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Nov 12 20:55:02.064288 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Nov 12 20:55:02.064413 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:55:02.064533 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Nov 12 20:55:02.064672 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Nov 12 20:55:02.064800 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Nov 12 20:55:02.064923 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Nov 12 20:55:02.065050 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 12 20:55:02.065176 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 12 20:55:02.065302 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 12 20:55:02.065422 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Nov 12 20:55:02.065546 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Nov 12 20:55:02.065688 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 12 20:55:02.065817 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Nov 12 20:55:02.065828 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:55:02.065835 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:55:02.065843 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:55:02.065852 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:55:02.065861 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 12 20:55:02.065875 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 12 20:55:02.065883 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 12 20:55:02.065892 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 12 20:55:02.065901 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 12 20:55:02.065910 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 12 20:55:02.065919 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 12 20:55:02.065928 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 12 20:55:02.065937 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 12 20:55:02.065946 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 12 20:55:02.065958 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 12 20:55:02.065967 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 12 20:55:02.065977 kernel: iommu: Default domain type: Translated
Nov 12 20:55:02.065986 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:55:02.065995 kernel: efivars: Registered efivars operations
Nov 12 20:55:02.066004 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:55:02.066013 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:55:02.066022 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 12 20:55:02.066031 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Nov 12 20:55:02.066043 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Nov 12 20:55:02.066051 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Nov 12 20:55:02.066175 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 12 20:55:02.066293 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 12 20:55:02.066412 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 20:55:02.066422 kernel: vgaarb: loaded
Nov 12 20:55:02.066429 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 20:55:02.066437 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 20:55:02.066448 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:55:02.066455 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:55:02.066462 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:55:02.066469 kernel: pnp: PnP ACPI init
Nov 12 20:55:02.066595 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 12 20:55:02.066605 kernel: pnp: PnP ACPI: found 6 devices
Nov 12 20:55:02.066624 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:55:02.066632 kernel: NET: Registered PF_INET protocol family
Nov 12 20:55:02.066642 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:55:02.066650 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 20:55:02.066657 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:55:02.066664 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 20:55:02.066672 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 20:55:02.066679 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 20:55:02.066686 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:55:02.066694 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:55:02.066701 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:55:02.066710 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:55:02.066863 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Nov 12 20:55:02.066993 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Nov 12 20:55:02.067105 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:55:02.067214 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:55:02.067322 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:55:02.067430 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 12 20:55:02.067538 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 12 20:55:02.067665 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Nov 12 20:55:02.067675 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:55:02.067682 kernel: Initialise system trusted keyrings
Nov 12 20:55:02.067689 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 20:55:02.067696 kernel: Key type asymmetric registered
Nov 12 20:55:02.067704 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:55:02.067711 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:55:02.067718 kernel: io scheduler mq-deadline registered
Nov 12 20:55:02.067725 kernel: io scheduler kyber registered
Nov 12 20:55:02.067737 kernel: io scheduler bfq registered
Nov 12 20:55:02.067744 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:55:02.067751 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 12 20:55:02.067759 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 12 20:55:02.067766 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 12 20:55:02.067773 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:55:02.067780 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:55:02.067796 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:55:02.067803 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:55:02.067813 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:55:02.067942 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 12 20:55:02.067953 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:55:02.068063 kernel: rtc_cmos 00:04: registered as rtc0
Nov 12 20:55:02.068182 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T20:55:01 UTC (1731444901)
Nov 12 20:55:02.068312 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 12 20:55:02.068324 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 12 20:55:02.068331 kernel: efifb: probing for efifb
Nov 12 20:55:02.068342 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Nov 12 20:55:02.068350 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Nov 12 20:55:02.068357 kernel: efifb: scrolling: redraw
Nov 12 20:55:02.068364 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Nov 12 20:55:02.068371 kernel: Console: switching to colour frame buffer device 100x37
Nov 12 20:55:02.068398 kernel: fb0: EFI VGA frame buffer device
Nov 12 20:55:02.068408 kernel: pstore: Using crash dump compression: deflate
Nov 12 20:55:02.068416 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 12 20:55:02.068425 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:55:02.068435 kernel: Segment Routing with IPv6
Nov 12 20:55:02.068443 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:55:02.068451 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:55:02.068459 kernel: Key type dns_resolver registered
Nov 12 20:55:02.068467 kernel: IPI shorthand broadcast: enabled
Nov 12 20:55:02.068475 kernel: sched_clock: Marking stable (752002315, 193994587)->(1097636889, -151639987)
Nov 12 20:55:02.068483 kernel: registered taskstats version 1
Nov 12 20:55:02.068491 kernel: Loading compiled-in X.509 certificates
Nov 12 20:55:02.068499 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:55:02.068510 kernel: Key type .fscrypt registered
Nov 12 20:55:02.068518 kernel: Key type fscrypt-provisioning registered
Nov 12 20:55:02.068526 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:55:02.068534 kernel: ima: Allocated hash algorithm: sha1 Nov 12 20:55:02.068542 kernel: ima: No architecture policies found Nov 12 20:55:02.068550 kernel: clk: Disabling unused clocks Nov 12 20:55:02.068559 kernel: Freeing unused kernel image (initmem) memory: 42828K Nov 12 20:55:02.068567 kernel: Write protecting the kernel read-only data: 36864k Nov 12 20:55:02.068575 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Nov 12 20:55:02.068585 kernel: Run /init as init process Nov 12 20:55:02.068593 kernel: with arguments: Nov 12 20:55:02.068601 kernel: /init Nov 12 20:55:02.068609 kernel: with environment: Nov 12 20:55:02.068629 kernel: HOME=/ Nov 12 20:55:02.068636 kernel: TERM=linux Nov 12 20:55:02.068644 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 20:55:02.068653 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:55:02.068665 systemd[1]: Detected virtualization kvm. Nov 12 20:55:02.068673 systemd[1]: Detected architecture x86-64. Nov 12 20:55:02.068681 systemd[1]: Running in initrd. Nov 12 20:55:02.068691 systemd[1]: No hostname configured, using default hostname. Nov 12 20:55:02.068702 systemd[1]: Hostname set to . Nov 12 20:55:02.068710 systemd[1]: Initializing machine ID from VM UUID. Nov 12 20:55:02.068718 systemd[1]: Queued start job for default target initrd.target. Nov 12 20:55:02.068726 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:55:02.068734 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 12 20:55:02.068742 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 20:55:02.068751 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:55:02.068759 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 20:55:02.068769 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 20:55:02.068779 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 20:55:02.068796 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 20:55:02.068807 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:55:02.068817 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:55:02.068825 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:55:02.068833 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:55:02.068847 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:55:02.068857 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:55:02.068867 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:55:02.068877 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:55:02.068887 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 20:55:02.068896 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 20:55:02.068906 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:55:02.068916 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:55:02.068927 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 12 20:55:02.068939 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:55:02.068949 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 20:55:02.068960 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:55:02.068970 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 20:55:02.068982 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 20:55:02.069004 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:55:02.069019 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:55:02.069032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:55:02.069051 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 20:55:02.069088 systemd-journald[193]: Collecting audit messages is disabled. Nov 12 20:55:02.069106 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:55:02.069114 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 20:55:02.069126 systemd-journald[193]: Journal started Nov 12 20:55:02.069143 systemd-journald[193]: Runtime Journal (/run/log/journal/72b3091a6b2043f19190c2ab306ce633) is 6.0M, max 48.3M, 42.2M free. Nov 12 20:55:02.075759 systemd-modules-load[194]: Inserted module 'overlay' Nov 12 20:55:02.077768 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:55:02.080636 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:55:02.082071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:55:02.084001 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:55:02.093818 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 12 20:55:02.096759 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:55:02.097382 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:55:02.109657 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 20:55:02.110845 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:55:02.114123 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 20:55:02.121130 kernel: Bridge firewalling registered Nov 12 20:55:02.114445 systemd-modules-load[194]: Inserted module 'br_netfilter' Nov 12 20:55:02.114746 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:55:02.116318 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:55:02.118876 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:55:02.121362 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:55:02.131589 dracut-cmdline[219]: dracut-dracut-053 Nov 12 20:55:02.134718 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:55:02.135283 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:55:02.148861 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:55:02.212515 systemd-resolved[245]: Positive Trust Anchors: Nov 12 20:55:02.212537 systemd-resolved[245]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:55:02.212568 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:55:02.215349 systemd-resolved[245]: Defaulting to hostname 'linux'. Nov 12 20:55:02.216622 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:55:02.222780 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:55:02.233629 kernel: SCSI subsystem initialized Nov 12 20:55:02.243645 kernel: Loading iSCSI transport class v2.0-870. Nov 12 20:55:02.254640 kernel: iscsi: registered transport (tcp) Nov 12 20:55:02.274912 kernel: iscsi: registered transport (qla4xxx) Nov 12 20:55:02.274954 kernel: QLogic iSCSI HBA Driver Nov 12 20:55:02.326858 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 20:55:02.338989 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 20:55:02.363850 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 12 20:55:02.363913 kernel: device-mapper: uevent: version 1.0.3 Nov 12 20:55:02.364964 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 20:55:02.408647 kernel: raid6: avx2x4 gen() 29857 MB/s Nov 12 20:55:02.425652 kernel: raid6: avx2x2 gen() 28003 MB/s Nov 12 20:55:02.442987 kernel: raid6: avx2x1 gen() 19935 MB/s Nov 12 20:55:02.443060 kernel: raid6: using algorithm avx2x4 gen() 29857 MB/s Nov 12 20:55:02.461043 kernel: raid6: .... xor() 5623 MB/s, rmw enabled Nov 12 20:55:02.461130 kernel: raid6: using avx2x2 recovery algorithm Nov 12 20:55:02.483656 kernel: xor: automatically using best checksumming function avx Nov 12 20:55:02.643662 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 20:55:02.657799 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:55:02.666818 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:55:02.680075 systemd-udevd[413]: Using default interface naming scheme 'v255'. Nov 12 20:55:02.684698 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:55:02.688407 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 20:55:02.707953 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Nov 12 20:55:02.743307 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:55:02.751957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:55:02.827008 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:55:02.835015 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 20:55:02.845905 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 20:55:02.849262 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 12 20:55:02.852814 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:55:02.855790 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:55:02.863892 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 12 20:55:02.888177 kernel: cryptd: max_cpu_qlen set to 1000 Nov 12 20:55:02.888194 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 12 20:55:02.888340 kernel: AVX2 version of gcm_enc/dec engaged. Nov 12 20:55:02.888359 kernel: AES CTR mode by8 optimization enabled Nov 12 20:55:02.888369 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 12 20:55:02.888379 kernel: GPT:9289727 != 19775487 Nov 12 20:55:02.888390 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 12 20:55:02.888400 kernel: GPT:9289727 != 19775487 Nov 12 20:55:02.888409 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 12 20:55:02.888418 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 20:55:02.864903 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 20:55:02.878128 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:55:02.897938 kernel: libata version 3.00 loaded. 
Nov 12 20:55:02.906055 kernel: ahci 0000:00:1f.2: version 3.0 Nov 12 20:55:02.931030 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 12 20:55:02.931066 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 12 20:55:02.931268 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 12 20:55:02.931452 kernel: scsi host0: ahci Nov 12 20:55:02.931679 kernel: scsi host1: ahci Nov 12 20:55:02.931865 kernel: scsi host2: ahci Nov 12 20:55:02.932051 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (475) Nov 12 20:55:02.932067 kernel: scsi host3: ahci Nov 12 20:55:02.932263 kernel: scsi host4: ahci Nov 12 20:55:02.932469 kernel: scsi host5: ahci Nov 12 20:55:02.932743 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Nov 12 20:55:02.932759 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Nov 12 20:55:02.932781 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Nov 12 20:55:02.932795 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Nov 12 20:55:02.932809 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Nov 12 20:55:02.932827 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Nov 12 20:55:02.922103 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:55:02.938450 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (466) Nov 12 20:55:02.922268 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:55:02.926811 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:55:02.928828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:55:02.929080 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 12 20:55:02.941027 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:55:02.949894 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:55:02.959780 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 12 20:55:02.968498 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 12 20:55:02.970118 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:55:02.982697 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 12 20:55:02.984082 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 12 20:55:02.989713 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 20:55:03.001748 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 20:55:03.004965 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:55:03.012282 disk-uuid[563]: Primary Header is updated. Nov 12 20:55:03.012282 disk-uuid[563]: Secondary Entries is updated. Nov 12 20:55:03.012282 disk-uuid[563]: Secondary Header is updated. Nov 12 20:55:03.016409 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 20:55:03.020642 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 20:55:03.043794 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 20:55:03.240646 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 12 20:55:03.240723 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 12 20:55:03.241641 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 12 20:55:03.242697 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 12 20:55:03.242789 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 12 20:55:03.243641 kernel: ata3.00: applying bridge limits Nov 12 20:55:03.244643 kernel: ata3.00: configured for UDMA/100 Nov 12 20:55:03.246642 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 12 20:55:03.249640 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 12 20:55:03.250635 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 12 20:55:03.298241 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 12 20:55:03.310452 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 12 20:55:03.310471 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 12 20:55:04.024240 disk-uuid[564]: The operation has completed successfully. Nov 12 20:55:04.025525 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 20:55:04.050543 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 20:55:04.050686 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 20:55:04.083858 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 20:55:04.087120 sh[595]: Success Nov 12 20:55:04.099646 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 12 20:55:04.133642 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 20:55:04.141258 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 20:55:04.144147 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 12 20:55:04.155644 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77 Nov 12 20:55:04.155682 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:55:04.157551 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 20:55:04.157573 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 20:55:04.158335 kernel: BTRFS info (device dm-0): using free space tree Nov 12 20:55:04.163120 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 20:55:04.164832 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 20:55:04.172861 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 20:55:04.174596 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 20:55:04.186105 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:55:04.186152 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:55:04.186167 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:55:04.190647 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:55:04.200589 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 12 20:55:04.203651 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:55:04.213478 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 12 20:55:04.220789 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 12 20:55:04.271678 ignition[693]: Ignition 2.19.0 Nov 12 20:55:04.272522 ignition[693]: Stage: fetch-offline Nov 12 20:55:04.272583 ignition[693]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:55:04.272594 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:55:04.272778 ignition[693]: parsed url from cmdline: "" Nov 12 20:55:04.272782 ignition[693]: no config URL provided Nov 12 20:55:04.272788 ignition[693]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 20:55:04.272797 ignition[693]: no config at "/usr/lib/ignition/user.ign" Nov 12 20:55:04.272828 ignition[693]: op(1): [started] loading QEMU firmware config module Nov 12 20:55:04.272834 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 12 20:55:04.281961 ignition[693]: op(1): [finished] loading QEMU firmware config module Nov 12 20:55:04.300216 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:55:04.312769 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:55:04.327116 ignition[693]: parsing config with SHA512: 40c8f1961c682e0a073fd436347372a311fc153f24c6fd00d7cf807349382d272aa3dd3ef05227c18fd8053f1221f0bf9237c3884f42df0c3ba550e65693ac30 Nov 12 20:55:04.332152 unknown[693]: fetched base config from "system" Nov 12 20:55:04.332166 unknown[693]: fetched user config from "qemu" Nov 12 20:55:04.332571 ignition[693]: fetch-offline: fetch-offline passed Nov 12 20:55:04.332647 ignition[693]: Ignition finished successfully Nov 12 20:55:04.334769 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:55:04.336408 systemd-networkd[783]: lo: Link UP Nov 12 20:55:04.336412 systemd-networkd[783]: lo: Gained carrier Nov 12 20:55:04.338031 systemd-networkd[783]: Enumeration completed Nov 12 20:55:04.338122 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Nov 12 20:55:04.338511 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:55:04.338516 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 20:55:04.338836 systemd[1]: Reached target network.target - Network. Nov 12 20:55:04.339997 systemd-networkd[783]: eth0: Link UP Nov 12 20:55:04.340002 systemd-networkd[783]: eth0: Gained carrier Nov 12 20:55:04.340010 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:55:04.340052 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 12 20:55:04.349315 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 12 20:55:04.360725 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 20:55:04.362324 ignition[786]: Ignition 2.19.0 Nov 12 20:55:04.362335 ignition[786]: Stage: kargs Nov 12 20:55:04.362526 ignition[786]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:55:04.362540 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:55:04.366472 ignition[786]: kargs: kargs passed Nov 12 20:55:04.366527 ignition[786]: Ignition finished successfully Nov 12 20:55:04.370911 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 20:55:04.385790 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 12 20:55:04.406922 ignition[795]: Ignition 2.19.0 Nov 12 20:55:04.406935 ignition[795]: Stage: disks Nov 12 20:55:04.407126 ignition[795]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:55:04.407140 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:55:04.408175 ignition[795]: disks: disks passed Nov 12 20:55:04.408237 ignition[795]: Ignition finished successfully Nov 12 20:55:04.413941 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 20:55:04.415271 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 20:55:04.416208 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 20:55:04.418382 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:55:04.420739 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:55:04.421096 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:55:04.436795 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 20:55:04.453822 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 12 20:55:04.461158 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 20:55:04.473761 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 20:55:04.566631 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none. Nov 12 20:55:04.567108 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 20:55:04.569057 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 20:55:04.577709 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:55:04.579576 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 20:55:04.580965 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Nov 12 20:55:04.581005 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 20:55:04.594220 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (813) Nov 12 20:55:04.594252 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:55:04.594267 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:55:04.594281 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:55:04.594296 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:55:04.581028 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:55:04.588413 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 20:55:04.595090 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 12 20:55:04.597931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 20:55:04.631941 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 20:55:04.636249 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Nov 12 20:55:04.640397 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 20:55:04.644545 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 20:55:04.737976 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 20:55:04.751803 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 20:55:04.754917 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 20:55:04.761638 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:55:04.786193 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 12 20:55:04.792502 ignition[929]: INFO : Ignition 2.19.0 Nov 12 20:55:04.792502 ignition[929]: INFO : Stage: mount Nov 12 20:55:04.794640 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:55:04.794640 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:55:04.794640 ignition[929]: INFO : mount: mount passed Nov 12 20:55:04.794640 ignition[929]: INFO : Ignition finished successfully Nov 12 20:55:04.801896 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 20:55:04.811869 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 20:55:05.156116 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 20:55:05.164826 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:55:05.172457 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943) Nov 12 20:55:05.172499 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:55:05.172512 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:55:05.174043 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:55:05.176648 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:55:05.178283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 20:55:05.203061 ignition[960]: INFO : Ignition 2.19.0
Nov 12 20:55:05.203061 ignition[960]: INFO : Stage: files
Nov 12 20:55:05.204920 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:05.204920 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:55:05.207750 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:55:05.209623 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:55:05.209623 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:55:05.213625 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:55:05.215201 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:55:05.216529 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:55:05.215627 unknown[960]: wrote ssh authorized keys file for user: core
Nov 12 20:55:05.219199 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:55:05.219199 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:55:05.262589 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 20:55:05.393835 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:55:05.393835 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 12 20:55:05.398416 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 12 20:55:05.719764 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 12 20:55:05.941083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 12 20:55:05.941083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:55:05.944987 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:55:05.944987 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:55:05.950601 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:55:05.952435 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:55:05.954528 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:55:05.954528 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:55:05.954528 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:55:05.954528 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:55:05.954528 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:55:05.954528 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:55:05.954528 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:55:05.954528 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:55:05.973091 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Nov 12 20:55:06.025876 systemd-networkd[783]: eth0: Gained IPv6LL
Nov 12 20:55:06.226605 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 12 20:55:06.887245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:55:06.887245 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 12 20:55:06.891986 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:55:06.891986 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:55:06.891986 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 12 20:55:06.891986 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 12 20:55:06.891986 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 20:55:06.891986 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 20:55:06.891986 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 12 20:55:06.891986 ignition[960]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 12 20:55:06.917863 ignition[960]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 20:55:06.929854 ignition[960]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 20:55:06.929854 ignition[960]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 12 20:55:06.929854 ignition[960]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:55:06.929854 ignition[960]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:55:06.929854 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:55:06.929854 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:55:06.929854 ignition[960]: INFO : files: files passed
Nov 12 20:55:06.929854 ignition[960]: INFO : Ignition finished successfully
Nov 12 20:55:06.927841 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:55:06.938810 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:55:06.941011 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:55:06.942585 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:55:06.942726 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:55:06.950827 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 12 20:55:06.953627 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:55:06.953627 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:55:06.956873 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:55:06.960077 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:55:06.960305 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:55:06.968797 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:55:06.996476 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:55:06.996598 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:55:07.020351 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:55:07.022404 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:55:07.024397 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:55:07.040947 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:55:07.058277 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:55:07.077781 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:55:07.089504 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:55:07.092010 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:55:07.094529 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:55:07.096489 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:55:07.097721 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:55:07.100685 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:55:07.103284 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:55:07.105581 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:55:07.108320 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:55:07.111185 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:55:07.113466 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:55:07.115601 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:55:07.118138 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:55:07.120276 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:55:07.122337 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:55:07.124007 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:55:07.125046 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:55:07.127401 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:55:07.129638 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:55:07.132115 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:55:07.133106 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:55:07.135890 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:55:07.207852 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:55:07.210459 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 20:55:07.211650 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:55:07.214431 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 20:55:07.216397 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 20:55:07.216638 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:55:07.217797 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 20:55:07.275879 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 20:55:07.277548 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 20:55:07.277693 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:55:07.279262 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 20:55:07.279357 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:55:07.280969 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 20:55:07.281098 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:55:07.284810 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 20:55:07.284923 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 20:55:07.301783 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 20:55:07.301874 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 20:55:07.302000 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:55:07.306598 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 20:55:07.309310 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 20:55:07.310549 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:55:07.341784 ignition[1014]: INFO : Ignition 2.19.0
Nov 12 20:55:07.341784 ignition[1014]: INFO : Stage: umount
Nov 12 20:55:07.341784 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:07.341784 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:55:07.341784 ignition[1014]: INFO : umount: umount passed
Nov 12 20:55:07.341784 ignition[1014]: INFO : Ignition finished successfully
Nov 12 20:55:07.341859 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 20:55:07.342026 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:55:07.353746 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 20:55:07.354907 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 20:55:07.358413 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 20:55:07.361467 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 20:55:07.362503 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 20:55:07.366364 systemd[1]: Stopped target network.target - Network.
Nov 12 20:55:07.368649 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 20:55:07.369757 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 20:55:07.371863 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 20:55:07.371938 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 20:55:07.375032 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 20:55:07.375093 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 20:55:07.415895 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 20:55:07.415986 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 20:55:07.419566 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 20:55:07.422065 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 20:55:07.428683 systemd-networkd[783]: eth0: DHCPv6 lease lost
Nov 12 20:55:07.430915 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 20:55:07.431089 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 20:55:07.449092 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 20:55:07.449144 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:55:07.455720 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 20:55:07.455794 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 20:55:07.455858 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:55:07.459327 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:55:07.461755 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 20:55:07.461879 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 20:55:07.467023 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:55:07.467093 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:55:07.468481 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 20:55:07.468529 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:55:07.469938 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 20:55:07.469987 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:55:07.477475 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 20:55:07.477646 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 20:55:07.479437 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 20:55:07.479681 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:55:07.482313 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 20:55:07.482387 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:55:07.484030 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 20:55:07.484080 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:55:07.486148 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 20:55:07.486207 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:55:07.488488 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 20:55:07.488538 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:55:07.490495 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:55:07.490544 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:55:07.498747 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 20:55:07.500167 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 20:55:07.500220 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:55:07.502893 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:55:07.502940 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:55:07.509605 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 20:55:07.509745 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 20:55:07.907554 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 20:55:07.907731 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 20:55:07.909035 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 20:55:07.910527 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 20:55:07.910582 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 20:55:07.944778 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 20:55:07.952176 systemd[1]: Switching root.
Nov 12 20:55:07.984903 systemd-journald[193]: Journal stopped
Nov 12 20:55:09.378946 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Nov 12 20:55:09.379028 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 20:55:09.379053 kernel: SELinux: policy capability open_perms=1
Nov 12 20:55:09.379066 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 20:55:09.379083 kernel: SELinux: policy capability always_check_network=0
Nov 12 20:55:09.379096 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 20:55:09.379109 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 20:55:09.379122 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 20:55:09.379135 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 20:55:09.379153 kernel: audit: type=1403 audit(1731444908.539:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 20:55:09.379169 systemd[1]: Successfully loaded SELinux policy in 44.871ms.
Nov 12 20:55:09.379191 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.983ms.
Nov 12 20:55:09.379206 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:55:09.379222 systemd[1]: Detected virtualization kvm.
Nov 12 20:55:09.379241 systemd[1]: Detected architecture x86-64.
Nov 12 20:55:09.379255 systemd[1]: Detected first boot.
Nov 12 20:55:09.379268 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:55:09.379281 zram_generator::config[1058]: No configuration found.
Nov 12 20:55:09.379297 systemd[1]: Populated /etc with preset unit settings.
Nov 12 20:55:09.379310 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 20:55:09.379324 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 20:55:09.379340 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 20:55:09.379355 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 20:55:09.379368 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 20:55:09.379382 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 20:55:09.379396 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 20:55:09.379415 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 20:55:09.379428 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 20:55:09.379443 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 20:55:09.379456 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 20:55:09.379473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:55:09.379488 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:55:09.379502 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 20:55:09.379515 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 20:55:09.379529 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 20:55:09.379543 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:55:09.379557 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 20:55:09.379571 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:55:09.379584 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 20:55:09.379600 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 20:55:09.379637 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:55:09.379651 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 20:55:09.379665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:55:09.379679 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:55:09.379693 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:55:09.379707 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:55:09.379721 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 20:55:09.379738 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 20:55:09.379751 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:55:09.379765 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:55:09.379780 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:55:09.379794 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 20:55:09.379809 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 20:55:09.379822 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 20:55:09.379836 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 20:55:09.379850 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:55:09.379866 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 20:55:09.379880 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 20:55:09.379893 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 20:55:09.379908 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 20:55:09.379921 systemd[1]: Reached target machines.target - Containers.
Nov 12 20:55:09.379935 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 20:55:09.379949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:55:09.379963 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:55:09.379979 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 20:55:09.379992 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:55:09.380007 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:55:09.380020 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:55:09.380034 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 20:55:09.380047 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:55:09.380061 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 20:55:09.380075 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 20:55:09.380090 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 20:55:09.380105 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 20:55:09.380119 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 20:55:09.380133 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:55:09.380147 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:55:09.380160 kernel: loop: module loaded
Nov 12 20:55:09.380173 kernel: fuse: init (API version 7.39)
Nov 12 20:55:09.380186 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 20:55:09.380200 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 20:55:09.380216 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:55:09.380230 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 20:55:09.380244 systemd[1]: Stopped verity-setup.service.
Nov 12 20:55:09.380259 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:55:09.380272 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 20:55:09.380285 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 20:55:09.380299 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 20:55:09.380333 systemd-journald[1121]: Collecting audit messages is disabled.
Nov 12 20:55:09.380359 systemd-journald[1121]: Journal started
Nov 12 20:55:09.380384 systemd-journald[1121]: Runtime Journal (/run/log/journal/72b3091a6b2043f19190c2ab306ce633) is 6.0M, max 48.3M, 42.2M free.
Nov 12 20:55:09.112157 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 20:55:09.129078 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 12 20:55:09.129568 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 20:55:09.383126 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:55:09.384273 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 20:55:09.386408 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 20:55:09.387974 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 20:55:09.389661 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:55:09.391736 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 20:55:09.392009 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 20:55:09.393893 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:55:09.394431 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:55:09.396158 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:55:09.396419 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:55:09.398822 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 20:55:09.399218 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 20:55:09.401119 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:55:09.401458 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:55:09.406229 kernel: ACPI: bus type drm_connector registered
Nov 12 20:55:09.404147 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:55:09.407036 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 20:55:09.410023 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:55:09.410342 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:55:09.412267 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 20:55:09.423388 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 20:55:09.438509 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 20:55:09.451845 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 20:55:09.454794 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 20:55:09.456317 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 20:55:09.456358 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:55:09.458856 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 20:55:09.461881 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 20:55:09.464827 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 20:55:09.466205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:55:09.469855 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 20:55:09.474025 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 20:55:09.475312 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:55:09.476448 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 20:55:09.477804 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:55:09.482444 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:55:09.484922 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 20:55:09.488929 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 20:55:09.492949 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 20:55:09.497385 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 20:55:09.505772 systemd-journald[1121]: Time spent on flushing to /var/log/journal/72b3091a6b2043f19190c2ab306ce633 is 54.326ms for 996 entries. Nov 12 20:55:09.505772 systemd-journald[1121]: System Journal (/var/log/journal/72b3091a6b2043f19190c2ab306ce633) is 8.0M, max 195.6M, 187.6M free. Nov 12 20:55:09.590001 systemd-journald[1121]: Received client request to flush runtime journal. Nov 12 20:55:09.590042 kernel: loop0: detected capacity change from 0 to 205544 Nov 12 20:55:09.500720 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 20:55:09.591640 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 20:55:09.507625 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:55:09.574387 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 20:55:09.577988 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 20:55:09.592691 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 20:55:09.596912 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 20:55:09.598876 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 20:55:09.610226 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 12 20:55:09.624836 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 12 20:55:09.626325 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 20:55:09.627795 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 20:55:09.629634 kernel: loop1: detected capacity change from 0 to 140768 Nov 12 20:55:09.634217 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 20:55:09.662364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:55:09.679657 kernel: loop2: detected capacity change from 0 to 142488 Nov 12 20:55:09.701452 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Nov 12 20:55:09.701478 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Nov 12 20:55:09.711959 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:55:09.728652 kernel: loop3: detected capacity change from 0 to 205544 Nov 12 20:55:09.765716 kernel: loop4: detected capacity change from 0 to 140768 Nov 12 20:55:09.783661 kernel: loop5: detected capacity change from 0 to 142488 Nov 12 20:55:09.793053 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 20:55:09.793668 (sd-merge)[1197]: Merged extensions into '/usr'. Nov 12 20:55:09.806185 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 20:55:09.806207 systemd[1]: Reloading... Nov 12 20:55:09.903662 zram_generator::config[1226]: No configuration found. Nov 12 20:55:09.977602 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Nov 12 20:55:10.123093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:55:10.182697 systemd[1]: Reloading finished in 375 ms.
Nov 12 20:55:10.243641 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 20:55:10.245398 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 20:55:10.258831 systemd[1]: Starting ensure-sysext.service...
Nov 12 20:55:10.263294 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:55:10.267331 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Nov 12 20:55:10.267349 systemd[1]: Reloading...
Nov 12 20:55:10.298513 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 20:55:10.298925 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 20:55:10.300146 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 20:55:10.300558 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Nov 12 20:55:10.300688 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Nov 12 20:55:10.328297 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:55:10.328309 systemd-tmpfiles[1261]: Skipping /boot
Nov 12 20:55:10.364749 zram_generator::config[1287]: No configuration found.
Nov 12 20:55:10.370193 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:55:10.370334 systemd-tmpfiles[1261]: Skipping /boot
Nov 12 20:55:10.491053 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:55:10.546962 systemd[1]: Reloading finished in 279 ms.
Nov 12 20:55:10.564713 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:55:10.583851 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:55:10.644781 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 20:55:10.651461 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 20:55:10.655971 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:55:10.670514 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 20:55:10.685203 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 20:55:10.688931 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:55:10.689162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:55:10.691638 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:55:10.698558 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:55:10.734496 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:55:10.736449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:55:10.736643 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:55:10.739587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:55:10.741931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:55:10.749819 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:55:10.750082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:55:10.751830 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:55:10.754080 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:55:10.754228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:55:10.755344 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 20:55:10.757931 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:55:10.758208 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:55:10.768237 augenrules[1351]: No rules
Nov 12 20:55:10.774997 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 20:55:10.777425 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:55:10.779787 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 20:55:10.782353 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:55:10.782555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:55:10.784509 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 20:55:10.797172 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:55:10.797450 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:55:10.799997 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 20:55:10.807749 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:55:10.807948 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:55:10.814884 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:55:10.817387 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:55:10.818880 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:55:10.818980 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:55:10.820336 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:55:10.825801 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 20:55:10.828303 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 20:55:10.828343 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:55:10.829063 systemd[1]: Finished ensure-sysext.service.
Nov 12 20:55:10.830826 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:55:10.831079 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:55:10.842891 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:55:10.843172 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:55:10.847593 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:55:10.859882 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 12 20:55:10.861329 systemd-resolved[1339]: Positive Trust Anchors:
Nov 12 20:55:10.861348 systemd-resolved[1339]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:55:10.861381 systemd-resolved[1339]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:55:10.861519 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 20:55:10.862478 systemd-udevd[1371]: Using default interface naming scheme 'v255'.
Nov 12 20:55:10.865403 systemd-resolved[1339]: Defaulting to hostname 'linux'.
Nov 12 20:55:10.867318 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:55:10.868731 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:55:10.885093 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:55:10.897918 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:55:10.927651 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1379)
Nov 12 20:55:10.939254 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 12 20:55:10.940871 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 20:55:10.947667 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1379)
Nov 12 20:55:10.950871 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 12 20:55:10.968646 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1398)
Nov 12 20:55:10.985019 systemd-networkd[1385]: lo: Link UP
Nov 12 20:55:10.985342 systemd-networkd[1385]: lo: Gained carrier
Nov 12 20:55:10.987788 systemd-networkd[1385]: Enumeration completed
Nov 12 20:55:10.988280 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:55:10.988286 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:55:10.988762 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:55:10.990340 systemd[1]: Reached target network.target - Network.
Nov 12 20:55:10.990556 systemd-networkd[1385]: eth0: Link UP
Nov 12 20:55:10.990645 systemd-networkd[1385]: eth0: Gained carrier
Nov 12 20:55:10.990666 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:55:11.033841 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 20:55:11.033933 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 20:55:11.035042 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection.
Nov 12 20:55:11.041543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:55:11.701516 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 12 20:55:11.701578 systemd-timesyncd[1376]: Initial clock synchronization to Tue 2024-11-12 20:55:11.701394 UTC.
Nov 12 20:55:11.701618 systemd-resolved[1339]: Clock change detected. Flushing caches.
Nov 12 20:55:11.704029 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 20:55:11.708367 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 12 20:55:11.716386 kernel: ACPI: button: Power Button [PWRF]
Nov 12 20:55:11.734163 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Nov 12 20:55:11.734565 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 12 20:55:11.734794 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 12 20:55:11.739224 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 12 20:55:11.735064 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 20:55:11.747390 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Nov 12 20:55:11.765712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:55:11.768349 kernel: mousedev: PS/2 mouse device common for all mice
Nov 12 20:55:11.772193 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:55:11.772535 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:55:11.785663 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:55:11.882673 kernel: kvm_amd: TSC scaling supported
Nov 12 20:55:11.882769 kernel: kvm_amd: Nested Virtualization enabled
Nov 12 20:55:11.882786 kernel: kvm_amd: Nested Paging enabled
Nov 12 20:55:11.883850 kernel: kvm_amd: LBR virtualization supported
Nov 12 20:55:11.883867 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 12 20:55:11.884481 kernel: kvm_amd: Virtual GIF supported
Nov 12 20:55:11.906603 kernel: EDAC MC: Ver: 3.0.0
Nov 12 20:55:11.915131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:55:11.945650 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 20:55:11.961674 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 20:55:11.971352 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:55:12.003639 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 20:55:12.005202 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:55:12.006397 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:55:12.007596 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 20:55:12.008872 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 20:55:12.010484 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 20:55:12.011858 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 20:55:12.013432 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 20:55:12.014833 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 20:55:12.014870 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:55:12.015827 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:55:12.017410 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 20:55:12.020253 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 20:55:12.030145 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 20:55:12.032611 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 20:55:12.034251 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 20:55:12.035466 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:55:12.036537 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:55:12.037561 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:55:12.037588 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:55:12.038612 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 20:55:12.040939 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 20:55:12.045429 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 20:55:12.045800 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:55:12.049559 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 20:55:12.072616 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 20:55:12.076083 jq[1432]: false
Nov 12 20:55:12.076497 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 20:55:12.079701 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 20:55:12.084476 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 20:55:12.089441 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 20:55:12.093203 dbus-daemon[1431]: [system] SELinux support is enabled
Nov 12 20:55:12.097832 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 20:55:12.100451 extend-filesystems[1433]: Found loop3
Nov 12 20:55:12.100451 extend-filesystems[1433]: Found loop4
Nov 12 20:55:12.100451 extend-filesystems[1433]: Found loop5
Nov 12 20:55:12.100451 extend-filesystems[1433]: Found sr0
Nov 12 20:55:12.100451 extend-filesystems[1433]: Found vda
Nov 12 20:55:12.100451 extend-filesystems[1433]: Found vda1
Nov 12 20:55:12.100451 extend-filesystems[1433]: Found vda2
Nov 12 20:55:12.100451 extend-filesystems[1433]: Found vda3
Nov 12 20:55:12.100451 extend-filesystems[1433]: Found usr
Nov 12 20:55:12.100451 extend-filesystems[1433]: Found vda4
Nov 12 20:55:12.100451 extend-filesystems[1433]: Found vda6
Nov 12 20:55:12.100451 extend-filesystems[1433]: Found vda7
Nov 12 20:55:12.100451 extend-filesystems[1433]: Found vda9
Nov 12 20:55:12.100451 extend-filesystems[1433]: Checking size of /dev/vda9
Nov 12 20:55:12.100508 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 20:55:12.102668 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 20:55:12.108716 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 20:55:12.114359 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 20:55:12.117011 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 20:55:12.120466 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 20:55:12.123357 jq[1451]: true
Nov 12 20:55:12.123919 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 20:55:12.124117 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 20:55:12.124614 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 20:55:12.125666 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 20:55:12.127853 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 20:55:12.130113 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 20:55:12.139002 update_engine[1450]: I20241112 20:55:12.138468 1450 main.cc:92] Flatcar Update Engine starting
Nov 12 20:55:12.148096 update_engine[1450]: I20241112 20:55:12.140921 1450 update_check_scheduler.cc:74] Next update check in 9m20s
Nov 12 20:55:12.148148 extend-filesystems[1433]: Resized partition /dev/vda9
Nov 12 20:55:12.168130 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 12 20:55:12.168196 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1387)
Nov 12 20:55:12.153481 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 20:55:12.168689 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024)
Nov 12 20:55:12.181489 jq[1456]: true
Nov 12 20:55:12.158728 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 20:55:12.158760 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 20:55:12.166202 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 20:55:12.166229 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 20:55:12.172526 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 20:55:12.193748 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 12 20:55:12.223052 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 12 20:55:12.223052 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 12 20:55:12.223052 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 12 20:55:12.261092 extend-filesystems[1433]: Resized filesystem in /dev/vda9
Nov 12 20:55:12.226012 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 20:55:12.271645 tar[1454]: linux-amd64/helm
Nov 12 20:55:12.232451 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 20:55:12.232725 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 20:55:12.243182 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 12 20:55:12.243206 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 12 20:55:12.248848 systemd-logind[1444]: New seat seat0.
Nov 12 20:55:12.272953 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 20:55:12.283838 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 20:55:12.435073 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 12 20:55:12.463593 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 12 20:55:12.475814 bash[1485]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:55:12.545368 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 12 20:55:12.547108 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 20:55:12.552381 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 12 20:55:12.555081 systemd[1]: issuegen.service: Deactivated successfully.
Nov 12 20:55:12.555427 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 12 20:55:12.562940 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 12 20:55:12.588088 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 12 20:55:12.609222 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 12 20:55:12.612427 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 12 20:55:12.614060 systemd[1]: Reached target getty.target - Login Prompts.
Nov 12 20:55:12.701750 systemd-networkd[1385]: eth0: Gained IPv6LL
Nov 12 20:55:12.720871 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 20:55:12.722836 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 20:55:12.733737 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 12 20:55:12.737787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:55:12.741410 containerd[1461]: time="2024-11-12T20:55:12.739837083Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 20:55:12.745442 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 20:55:12.778169 containerd[1461]: time="2024-11-12T20:55:12.778105355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:55:12.781356 containerd[1461]: time="2024-11-12T20:55:12.780398609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:55:12.781356 containerd[1461]: time="2024-11-12T20:55:12.780430799Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 20:55:12.781356 containerd[1461]: time="2024-11-12T20:55:12.780446238Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 20:55:12.781356 containerd[1461]: time="2024-11-12T20:55:12.780636154Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 20:55:12.781356 containerd[1461]: time="2024-11-12T20:55:12.780657485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 20:55:12.781356 containerd[1461]: time="2024-11-12T20:55:12.780726254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:55:12.781356 containerd[1461]: time="2024-11-12T20:55:12.780738767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:55:12.781356 containerd[1461]: time="2024-11-12T20:55:12.780955113Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:55:12.781356 containerd[1461]: time="2024-11-12T20:55:12.780973588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 20:55:12.781356 containerd[1461]: time="2024-11-12T20:55:12.780995629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:55:12.781356 containerd[1461]: time="2024-11-12T20:55:12.781008233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 20:55:12.781686 containerd[1461]: time="2024-11-12T20:55:12.781109042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:55:12.781686 containerd[1461]: time="2024-11-12T20:55:12.781357869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:55:12.781686 containerd[1461]: time="2024-11-12T20:55:12.781496579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:55:12.781686 containerd[1461]: time="2024-11-12T20:55:12.781514603Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 20:55:12.781686 containerd[1461]: time="2024-11-12T20:55:12.781641060Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 20:55:12.781814 containerd[1461]: time="2024-11-12T20:55:12.781711413Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 20:55:12.784292 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 12 20:55:12.784640 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 12 20:55:12.786642 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 20:55:12.792488 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 12 20:55:12.797768 containerd[1461]: time="2024-11-12T20:55:12.797718214Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 20:55:12.798233 containerd[1461]: time="2024-11-12T20:55:12.797966871Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 20:55:12.798233 containerd[1461]: time="2024-11-12T20:55:12.798118996Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 20:55:12.798233 containerd[1461]: time="2024-11-12T20:55:12.798168038Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 20:55:12.798233 containerd[1461]: time="2024-11-12T20:55:12.798186332Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 20:55:12.799469 containerd[1461]: time="2024-11-12T20:55:12.798719142Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 20:55:12.799469 containerd[1461]: time="2024-11-12T20:55:12.799087674Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 20:55:12.799469 containerd[1461]: time="2024-11-12T20:55:12.799268443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 20:55:12.799469 containerd[1461]: time="2024-11-12T20:55:12.799289964Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 20:55:12.799469 containerd[1461]: time="2024-11-12T20:55:12.799316824Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 20:55:12.799469 containerd[1461]: time="2024-11-12T20:55:12.799366758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 20:55:12.799469 containerd[1461]: time="2024-11-12T20:55:12.799399569Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 20:55:12.799469 containerd[1461]: time="2024-11-12T20:55:12.799419206Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 20:55:12.799469 containerd[1461]: time="2024-11-12T20:55:12.799433623Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 20:55:12.799469 containerd[1461]: time="2024-11-12T20:55:12.799448601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 20:55:12.799469 containerd[1461]: time="2024-11-12T20:55:12.799460784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 20:55:12.799469 containerd[1461]: time="2024-11-12T20:55:12.799473739Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 20:55:12.799469 containerd[1461]: time="2024-11-12T20:55:12.799496271Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799532208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799632356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799684504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799723768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799754385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799769083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799781065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799799530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799826360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799854082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799867177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799879831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799895410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799925737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 20:55:12.800174 containerd[1461]: time="2024-11-12T20:55:12.799985339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800805 containerd[1461]: time="2024-11-12T20:55:12.800011217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 20:55:12.800805 containerd[1461]: time="2024-11-12T20:55:12.800037947Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 20:55:12.800805 containerd[1461]: time="2024-11-12T20:55:12.800153234Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 12 20:55:12.800805 containerd[1461]: time="2024-11-12T20:55:12.800188911Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 12 20:55:12.800805 containerd[1461]: time="2024-11-12T20:55:12.800211694Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 12 20:55:12.800805 containerd[1461]: time="2024-11-12T20:55:12.800240267Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:55:12.800805 containerd[1461]: time="2024-11-12T20:55:12.800276946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:55:12.800805 containerd[1461]: time="2024-11-12T20:55:12.800303376Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:55:12.800805 containerd[1461]: time="2024-11-12T20:55:12.800375331Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:55:12.800805 containerd[1461]: time="2024-11-12T20:55:12.800404005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 20:55:12.801273 containerd[1461]: time="2024-11-12T20:55:12.801018117Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:55:12.801273 containerd[1461]: time="2024-11-12T20:55:12.801154693Z" level=info msg="Connect containerd service" Nov 12 20:55:12.801273 containerd[1461]: time="2024-11-12T20:55:12.801290649Z" level=info msg="using legacy CRI server" Nov 12 20:55:12.801694 containerd[1461]: time="2024-11-12T20:55:12.801309905Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:55:12.801694 containerd[1461]: time="2024-11-12T20:55:12.801629254Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:55:12.803252 containerd[1461]: 
time="2024-11-12T20:55:12.803198519Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:55:12.803514 containerd[1461]: time="2024-11-12T20:55:12.803418161Z" level=info msg="Start subscribing containerd event" Nov 12 20:55:12.804081 containerd[1461]: time="2024-11-12T20:55:12.803555990Z" level=info msg="Start recovering state" Nov 12 20:55:12.804081 containerd[1461]: time="2024-11-12T20:55:12.803641531Z" level=info msg="Start event monitor" Nov 12 20:55:12.804081 containerd[1461]: time="2024-11-12T20:55:12.803654455Z" level=info msg="Start snapshots syncer" Nov 12 20:55:12.804081 containerd[1461]: time="2024-11-12T20:55:12.803683289Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:55:12.804081 containerd[1461]: time="2024-11-12T20:55:12.803691805Z" level=info msg="Start streaming server" Nov 12 20:55:12.804864 containerd[1461]: time="2024-11-12T20:55:12.804477840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:55:12.804864 containerd[1461]: time="2024-11-12T20:55:12.804601592Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:55:12.804864 containerd[1461]: time="2024-11-12T20:55:12.804708623Z" level=info msg="containerd successfully booted in 0.067372s" Nov 12 20:55:12.805273 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:55:12.827608 tar[1454]: linux-amd64/LICENSE Nov 12 20:55:12.827807 tar[1454]: linux-amd64/README.md Nov 12 20:55:12.846710 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 20:55:14.043634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:14.060833 systemd[1]: Reached target multi-user.target - Multi-User System. 
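The `failed to load cni during init` error above is benign at this stage: the containerd CRI plugin scans `/etc/cni/net.d` for network configs at startup, and none exist until a CNI plugin or network add-on installs one. A minimal sketch of what such a config could look like follows; the network name, bridge device, and subnet are illustrative assumptions, not values taken from this system, and the file is written to a temporary path rather than the real `/etc/cni/net.d`:

```shell
# Hedged sketch: write an illustrative CNI conflist of the kind containerd's
# CRI plugin looks for in /etc/cni/net.d. The name "containerd-net", bridge
# "cni0", and subnet 10.244.0.0/24 are hypothetical examples.
mkdir -p /tmp/cni-net.d   # on a real node this would be /etc/cni/net.d
cat > /tmp/cni-net.d/10-containerd-net.conflist <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.244.0.0/24" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
# Sanity-check that the file is well-formed JSON before containerd reloads it.
python3 -m json.tool /tmp/cni-net.d/10-containerd-net.conflist > /dev/null && echo "conflist OK"
```

Once a valid conflist is present, the "Start cni network conf syncer" loop seen later in the log picks it up without a containerd restart.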
Nov 12 20:55:14.061536 (kubelet)[1544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:14.063085 systemd[1]: Startup finished in 971ms (kernel) + 6.673s (initrd) + 4.907s (userspace) = 12.552s. Nov 12 20:55:14.549401 kubelet[1544]: E1112 20:55:14.549327 1544 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:14.553600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:14.553809 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:14.554163 systemd[1]: kubelet.service: Consumed 1.616s CPU time. Nov 12 20:55:15.653876 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:55:15.655155 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:49266.service - OpenSSH per-connection server daemon (10.0.0.1:49266). Nov 12 20:55:15.698615 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 49266 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:55:15.700907 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:15.710374 systemd-logind[1444]: New session 1 of user core. Nov 12 20:55:15.711801 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:55:15.720570 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:55:15.733402 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:55:15.744572 systemd[1]: Starting user@500.service - User Manager for UID 500... 
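The kubelet failure above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is the expected crash loop on a node that has not yet been joined to a cluster: that file is normally generated by `kubeadm init` or `kubeadm join`, and systemd keeps scheduling restarts until it appears (the same error recurs later in this log). A hedged sketch of a pre-flight check for that condition, using the path exactly as it appears in the log:

```shell
# Hedged sketch: detect the "kubelet config not yet written" state seen in
# the log. The check itself is illustrative; kubeadm is what actually
# creates this file during init/join.
KUBELET_CONFIG=/var/lib/kubelet/config.yaml
if [ ! -f "$KUBELET_CONFIG" ]; then
    echo "kubelet not yet configured: $KUBELET_CONFIG missing (run kubeadm init or kubeadm join)"
else
    echo "kubelet config present: $KUBELET_CONFIG"
fi
```

This matches the restart pattern visible later in the log, where `kubelet.service: Scheduled restart job` entries repeat with the identical error until the file exists.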
Nov 12 20:55:15.747596 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:55:15.853192 systemd[1562]: Queued start job for default target default.target. Nov 12 20:55:15.864861 systemd[1562]: Created slice app.slice - User Application Slice. Nov 12 20:55:15.864892 systemd[1562]: Reached target paths.target - Paths. Nov 12 20:55:15.864906 systemd[1562]: Reached target timers.target - Timers. Nov 12 20:55:15.866557 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:55:15.880988 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:55:15.881124 systemd[1562]: Reached target sockets.target - Sockets. Nov 12 20:55:15.881142 systemd[1562]: Reached target basic.target - Basic System. Nov 12 20:55:15.881179 systemd[1562]: Reached target default.target - Main User Target. Nov 12 20:55:15.881235 systemd[1562]: Startup finished in 126ms. Nov 12 20:55:15.881744 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:55:15.883472 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:55:15.948772 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:49274.service - OpenSSH per-connection server daemon (10.0.0.1:49274). Nov 12 20:55:15.987445 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 49274 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:55:15.989449 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:15.994927 systemd-logind[1444]: New session 2 of user core. Nov 12 20:55:16.005652 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:55:16.065415 sshd[1573]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:16.080058 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:49274.service: Deactivated successfully. Nov 12 20:55:16.082856 systemd[1]: session-2.scope: Deactivated successfully. 
Nov 12 20:55:16.085252 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Nov 12 20:55:16.095853 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:49286.service - OpenSSH per-connection server daemon (10.0.0.1:49286). Nov 12 20:55:16.097122 systemd-logind[1444]: Removed session 2. Nov 12 20:55:16.125362 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 49286 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:55:16.127202 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:16.132083 systemd-logind[1444]: New session 3 of user core. Nov 12 20:55:16.141486 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:55:16.192843 sshd[1580]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:16.204617 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:49286.service: Deactivated successfully. Nov 12 20:55:16.206842 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 20:55:16.208363 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Nov 12 20:55:16.222805 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:49296.service - OpenSSH per-connection server daemon (10.0.0.1:49296). Nov 12 20:55:16.223986 systemd-logind[1444]: Removed session 3. Nov 12 20:55:16.250226 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 49296 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:55:16.251864 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:16.255878 systemd-logind[1444]: New session 4 of user core. Nov 12 20:55:16.269531 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 20:55:16.323885 sshd[1587]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:16.341458 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:49296.service: Deactivated successfully. Nov 12 20:55:16.343212 systemd[1]: session-4.scope: Deactivated successfully. 
Nov 12 20:55:16.344776 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:55:16.346035 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:49300.service - OpenSSH per-connection server daemon (10.0.0.1:49300). Nov 12 20:55:16.346788 systemd-logind[1444]: Removed session 4. Nov 12 20:55:16.379574 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 49300 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:55:16.381787 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:16.386721 systemd-logind[1444]: New session 5 of user core. Nov 12 20:55:16.396660 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:55:16.460125 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:55:16.460695 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:16.483947 sudo[1597]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:16.486616 sshd[1594]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:16.507572 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:49300.service: Deactivated successfully. Nov 12 20:55:16.510081 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:55:16.512295 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:55:16.522795 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:49306.service - OpenSSH per-connection server daemon (10.0.0.1:49306). Nov 12 20:55:16.524078 systemd-logind[1444]: Removed session 5. Nov 12 20:55:16.553912 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 49306 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:55:16.556570 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:16.561859 systemd-logind[1444]: New session 6 of user core. 
Nov 12 20:55:16.572713 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:55:16.630171 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:55:16.630603 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:16.634105 sudo[1606]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:16.640752 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:55:16.641098 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:16.661562 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:55:16.663180 auditctl[1609]: No rules Nov 12 20:55:16.664504 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:55:16.664779 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:55:16.666555 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:55:16.696847 augenrules[1627]: No rules Nov 12 20:55:16.698750 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:55:16.700063 sudo[1605]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:16.702281 sshd[1602]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:16.713106 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:49306.service: Deactivated successfully. Nov 12 20:55:16.715157 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:55:16.716844 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:55:16.725756 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:49318.service - OpenSSH per-connection server daemon (10.0.0.1:49318). Nov 12 20:55:16.726698 systemd-logind[1444]: Removed session 6. 
Nov 12 20:55:16.753000 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 49318 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:55:16.754602 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:16.758590 systemd-logind[1444]: New session 7 of user core. Nov 12 20:55:16.769528 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:55:16.823654 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:55:16.823977 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:17.407558 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:55:17.407755 (dockerd)[1657]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:55:19.104055 dockerd[1657]: time="2024-11-12T20:55:19.103944919Z" level=info msg="Starting up" Nov 12 20:55:19.361226 dockerd[1657]: time="2024-11-12T20:55:19.360999916Z" level=info msg="Loading containers: start." Nov 12 20:55:19.526374 kernel: Initializing XFRM netlink socket Nov 12 20:55:19.626804 systemd-networkd[1385]: docker0: Link UP Nov 12 20:55:19.652701 dockerd[1657]: time="2024-11-12T20:55:19.652622318Z" level=info msg="Loading containers: done." Nov 12 20:55:19.679276 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3559205804-merged.mount: Deactivated successfully. 
Nov 12 20:55:19.685907 dockerd[1657]: time="2024-11-12T20:55:19.685808772Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:55:19.686103 dockerd[1657]: time="2024-11-12T20:55:19.686007846Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:55:19.686258 dockerd[1657]: time="2024-11-12T20:55:19.686222819Z" level=info msg="Daemon has completed initialization" Nov 12 20:55:19.749115 dockerd[1657]: time="2024-11-12T20:55:19.748984660Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:55:19.749290 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:55:20.411287 containerd[1461]: time="2024-11-12T20:55:20.411246543Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\"" Nov 12 20:55:24.804197 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:55:24.814705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:25.048600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:55:25.054348 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:25.139076 kubelet[1814]: E1112 20:55:25.139008 1814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:25.145722 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:25.145932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:26.794141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974793921.mount: Deactivated successfully. Nov 12 20:55:28.176324 containerd[1461]: time="2024-11-12T20:55:28.176231905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:28.178396 containerd[1461]: time="2024-11-12T20:55:28.178279408Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.2: active requests=0, bytes read=27975588" Nov 12 20:55:28.179506 containerd[1461]: time="2024-11-12T20:55:28.179466676Z" level=info msg="ImageCreate event name:\"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:28.183019 containerd[1461]: time="2024-11-12T20:55:28.182950234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:28.184391 containerd[1461]: time="2024-11-12T20:55:28.184287654Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.2\" with image id 
\"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\", size \"27972388\" in 7.772988903s" Nov 12 20:55:28.184391 containerd[1461]: time="2024-11-12T20:55:28.184353327Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\" returns image reference \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\"" Nov 12 20:55:28.186641 containerd[1461]: time="2024-11-12T20:55:28.186582070Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\"" Nov 12 20:55:33.047108 containerd[1461]: time="2024-11-12T20:55:33.047035649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:33.047768 containerd[1461]: time="2024-11-12T20:55:33.047704274Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.2: active requests=0, bytes read=24701922" Nov 12 20:55:33.048966 containerd[1461]: time="2024-11-12T20:55:33.048927980Z" level=info msg="ImageCreate event name:\"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:33.052072 containerd[1461]: time="2024-11-12T20:55:33.052007039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:33.053169 containerd[1461]: time="2024-11-12T20:55:33.053118114Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.2\" with image id \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.2\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\", size \"26147288\" in 4.866476001s" Nov 12 20:55:33.053169 containerd[1461]: time="2024-11-12T20:55:33.053164852Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\" returns image reference \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\"" Nov 12 20:55:33.053829 containerd[1461]: time="2024-11-12T20:55:33.053783894Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\"" Nov 12 20:55:34.597929 containerd[1461]: time="2024-11-12T20:55:34.597852329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:34.598885 containerd[1461]: time="2024-11-12T20:55:34.598833671Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.2: active requests=0, bytes read=18657606" Nov 12 20:55:34.601425 containerd[1461]: time="2024-11-12T20:55:34.601316791Z" level=info msg="ImageCreate event name:\"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:34.605818 containerd[1461]: time="2024-11-12T20:55:34.605762785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:34.607093 containerd[1461]: time="2024-11-12T20:55:34.606889430Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.2\" with image id \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\", size \"20102990\" in 1.553061423s" Nov 12 20:55:34.607093 
containerd[1461]: time="2024-11-12T20:55:34.606932841Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\" returns image reference \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\"" Nov 12 20:55:34.607560 containerd[1461]: time="2024-11-12T20:55:34.607532938Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\"" Nov 12 20:55:35.396202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:55:35.406545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:35.567641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:35.578314 (kubelet)[1893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:36.376115 kubelet[1893]: E1112 20:55:36.376047 1893 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:36.380855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:36.381119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:36.767839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1056248805.mount: Deactivated successfully. 
Nov 12 20:55:38.687883 containerd[1461]: time="2024-11-12T20:55:38.687791898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:38.773286 containerd[1461]: time="2024-11-12T20:55:38.773198649Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.2: active requests=0, bytes read=30226814" Nov 12 20:55:38.851617 containerd[1461]: time="2024-11-12T20:55:38.851545619Z" level=info msg="ImageCreate event name:\"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:38.877802 containerd[1461]: time="2024-11-12T20:55:38.877744148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:38.878527 containerd[1461]: time="2024-11-12T20:55:38.878495668Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.2\" with image id \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\", repo tag \"registry.k8s.io/kube-proxy:v1.31.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\", size \"30225833\" in 4.270932615s" Nov 12 20:55:38.878586 containerd[1461]: time="2024-11-12T20:55:38.878528731Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\" returns image reference \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\"" Nov 12 20:55:38.879030 containerd[1461]: time="2024-11-12T20:55:38.879011797Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:55:39.953268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3469678785.mount: Deactivated successfully. 
Nov 12 20:55:41.188938 containerd[1461]: time="2024-11-12T20:55:41.188869324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:41.191500 containerd[1461]: time="2024-11-12T20:55:41.191404442Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 20:55:41.194883 containerd[1461]: time="2024-11-12T20:55:41.194832416Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:41.198405 containerd[1461]: time="2024-11-12T20:55:41.198330270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:41.199646 containerd[1461]: time="2024-11-12T20:55:41.199597960Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.320560184s" Nov 12 20:55:41.199646 containerd[1461]: time="2024-11-12T20:55:41.199632374Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:55:41.200234 containerd[1461]: time="2024-11-12T20:55:41.200206271Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 12 20:55:41.722708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3576651019.mount: Deactivated successfully. 
Nov 12 20:55:41.730998 containerd[1461]: time="2024-11-12T20:55:41.730935486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:41.731831 containerd[1461]: time="2024-11-12T20:55:41.731777136Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 12 20:55:41.733322 containerd[1461]: time="2024-11-12T20:55:41.733281880Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:41.738401 containerd[1461]: time="2024-11-12T20:55:41.736173066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:41.739675 containerd[1461]: time="2024-11-12T20:55:41.739644261Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 539.402283ms"
Nov 12 20:55:41.739741 containerd[1461]: time="2024-11-12T20:55:41.739683134Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 12 20:55:41.740368 containerd[1461]: time="2024-11-12T20:55:41.740304279Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Nov 12 20:55:42.409676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3613334764.mount: Deactivated successfully.
Nov 12 20:55:46.631575 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 12 20:55:46.648626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:55:46.797902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:55:46.802253 (kubelet)[1986]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:55:46.877083 kubelet[1986]: E1112 20:55:46.877021 1986 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:55:46.881426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:55:46.881658 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:55:50.756358 containerd[1461]: time="2024-11-12T20:55:50.756290305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:50.757400 containerd[1461]: time="2024-11-12T20:55:50.757320241Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779650"
Nov 12 20:55:50.758816 containerd[1461]: time="2024-11-12T20:55:50.758760732Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:50.762455 containerd[1461]: time="2024-11-12T20:55:50.762397677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:50.764046 containerd[1461]: time="2024-11-12T20:55:50.763967745Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 9.023597402s"
Nov 12 20:55:50.764107 containerd[1461]: time="2024-11-12T20:55:50.764047217Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Nov 12 20:55:52.801663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:55:52.810617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:55:52.853106 systemd[1]: Reloading requested from client PID 2057 ('systemctl') (unit session-7.scope)...
Nov 12 20:55:52.853123 systemd[1]: Reloading...
Nov 12 20:55:52.938361 zram_generator::config[2099]: No configuration found.
Nov 12 20:55:54.638826 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:55:54.716566 systemd[1]: Reloading finished in 1863 ms.
Nov 12 20:55:54.764874 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:55:54.768031 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 20:55:54.768277 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:55:54.777686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:55:54.926026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:55:54.931133 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 20:55:55.003983 kubelet[2146]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:55:55.003983 kubelet[2146]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 20:55:55.003983 kubelet[2146]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:55:55.004492 kubelet[2146]: I1112 20:55:55.004021 2146 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 20:55:55.468743 kubelet[2146]: I1112 20:55:55.468692 2146 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Nov 12 20:55:55.468743 kubelet[2146]: I1112 20:55:55.468727 2146 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 20:55:55.468999 kubelet[2146]: I1112 20:55:55.468980 2146 server.go:929] "Client rotation is on, will bootstrap in background"
Nov 12 20:55:55.549216 kubelet[2146]: E1112 20:55:55.549166 2146 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:55:55.558423 kubelet[2146]: I1112 20:55:55.558371 2146 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 20:55:55.566465 kubelet[2146]: E1112 20:55:55.566435 2146 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 12 20:55:55.566539 kubelet[2146]: I1112 20:55:55.566474 2146 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 12 20:55:55.574860 kubelet[2146]: I1112 20:55:55.574831 2146 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 20:55:55.576905 kubelet[2146]: I1112 20:55:55.576881 2146 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 12 20:55:55.577073 kubelet[2146]: I1112 20:55:55.577033 2146 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 20:55:55.577235 kubelet[2146]: I1112 20:55:55.577071 2146 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 12 20:55:55.577308 kubelet[2146]: I1112 20:55:55.577244 2146 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 20:55:55.577308 kubelet[2146]: I1112 20:55:55.577254 2146 container_manager_linux.go:300] "Creating device plugin manager"
Nov 12 20:55:55.577398 kubelet[2146]: I1112 20:55:55.577382 2146 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:55:55.580634 kubelet[2146]: W1112 20:55:55.580569 2146 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
Nov 12 20:55:55.580686 kubelet[2146]: E1112 20:55:55.580650 2146 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:55:55.581593 kubelet[2146]: I1112 20:55:55.581568 2146 kubelet.go:408] "Attempting to sync node with API server"
Nov 12 20:55:55.581632 kubelet[2146]: I1112 20:55:55.581595 2146 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 20:55:55.581660 kubelet[2146]: I1112 20:55:55.581641 2146 kubelet.go:314] "Adding apiserver pod source"
Nov 12 20:55:55.581684 kubelet[2146]: I1112 20:55:55.581661 2146 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 20:55:55.594129 kubelet[2146]: W1112 20:55:55.594087 2146 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
Nov 12 20:55:55.594186 kubelet[2146]: E1112 20:55:55.594145 2146 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:55:55.597226 kubelet[2146]: I1112 20:55:55.597123 2146 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 12 20:55:55.605531 kubelet[2146]: I1112 20:55:55.605501 2146 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 20:55:55.606547 kubelet[2146]: W1112 20:55:55.606518 2146 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 12 20:55:55.607465 kubelet[2146]: I1112 20:55:55.607448 2146 server.go:1269] "Started kubelet"
Nov 12 20:55:55.607633 kubelet[2146]: I1112 20:55:55.607557 2146 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 20:55:55.607775 kubelet[2146]: I1112 20:55:55.607674 2146 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 20:55:55.608027 kubelet[2146]: I1112 20:55:55.608005 2146 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 20:55:55.608861 kubelet[2146]: I1112 20:55:55.608652 2146 server.go:460] "Adding debug handlers to kubelet server"
Nov 12 20:55:55.609840 kubelet[2146]: I1112 20:55:55.609819 2146 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 20:55:55.610183 kubelet[2146]: I1112 20:55:55.610161 2146 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 12 20:55:55.610244 kubelet[2146]: I1112 20:55:55.610229 2146 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 12 20:55:55.611168 kubelet[2146]: E1112 20:55:55.611137 2146 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 20:55:55.611953 kubelet[2146]: W1112 20:55:55.611444 2146 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
Nov 12 20:55:55.611953 kubelet[2146]: E1112 20:55:55.611489 2146 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:55:55.611953 kubelet[2146]: I1112 20:55:55.611589 2146 reconciler.go:26] "Reconciler: start to sync state"
Nov 12 20:55:55.611953 kubelet[2146]: I1112 20:55:55.611609 2146 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 12 20:55:55.611953 kubelet[2146]: E1112 20:55:55.611659 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:55.612135 kubelet[2146]: I1112 20:55:55.612110 2146 factory.go:221] Registration of the systemd container factory successfully
Nov 12 20:55:55.612199 kubelet[2146]: I1112 20:55:55.612184 2146 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 20:55:55.613015 kubelet[2146]: I1112 20:55:55.613001 2146 factory.go:221] Registration of the containerd container factory successfully
Nov 12 20:55:55.624106 kubelet[2146]: E1112 20:55:55.624061 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms"
Nov 12 20:55:55.625159 kubelet[2146]: I1112 20:55:55.625112 2146 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 20:55:55.626252 kubelet[2146]: I1112 20:55:55.626233 2146 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 20:55:55.626298 kubelet[2146]: I1112 20:55:55.626262 2146 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 20:55:55.626298 kubelet[2146]: I1112 20:55:55.626288 2146 kubelet.go:2321] "Starting kubelet main sync loop"
Nov 12 20:55:55.626375 kubelet[2146]: E1112 20:55:55.626325 2146 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 20:55:55.631633 kubelet[2146]: W1112 20:55:55.631570 2146 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
Nov 12 20:55:55.631633 kubelet[2146]: E1112 20:55:55.631625 2146 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:55:55.634816 kubelet[2146]: E1112 20:55:55.632884 2146 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18075403771f66f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:55:55.607426802 +0000 UTC m=+0.672420771,LastTimestamp:2024-11-12 20:55:55.607426802 +0000 UTC m=+0.672420771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 12 20:55:55.638374 kubelet[2146]: I1112 20:55:55.638350 2146 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 20:55:55.638374 kubelet[2146]: I1112 20:55:55.638369 2146 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 20:55:55.638472 kubelet[2146]: I1112 20:55:55.638387 2146 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:55:55.711746 kubelet[2146]: E1112 20:55:55.711709 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:55.727027 kubelet[2146]: E1112 20:55:55.726921 2146 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 20:55:55.812173 kubelet[2146]: E1112 20:55:55.812119 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:55.824841 kubelet[2146]: E1112 20:55:55.824776 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms"
Nov 12 20:55:55.913076 kubelet[2146]: E1112 20:55:55.913021 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:55.927261 kubelet[2146]: E1112 20:55:55.927223 2146 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 20:55:56.013968 kubelet[2146]: E1112 20:55:56.013843 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:56.114926 kubelet[2146]: E1112 20:55:56.114878 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:56.215407 kubelet[2146]: E1112 20:55:56.215367 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:56.225969 kubelet[2146]: E1112 20:55:56.225929 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms"
Nov 12 20:55:56.316557 kubelet[2146]: E1112 20:55:56.316427 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:56.327663 kubelet[2146]: E1112 20:55:56.327619 2146 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 20:55:56.417088 kubelet[2146]: E1112 20:55:56.417037 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:56.436693 kubelet[2146]: W1112 20:55:56.436616 2146 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
Nov 12 20:55:56.436742 kubelet[2146]: E1112 20:55:56.436697 2146 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:55:56.517288 kubelet[2146]: E1112 20:55:56.517239 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:56.618054 kubelet[2146]: E1112 20:55:56.617905 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:56.663581 kubelet[2146]: W1112 20:55:56.663539 2146 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
Nov 12 20:55:56.663581 kubelet[2146]: E1112 20:55:56.663574 2146 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:55:56.718087 kubelet[2146]: E1112 20:55:56.718018 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:56.818628 kubelet[2146]: E1112 20:55:56.818529 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:56.901613 kubelet[2146]: I1112 20:55:56.901482 2146 policy_none.go:49] "None policy: Start"
Nov 12 20:55:56.902424 kubelet[2146]: I1112 20:55:56.902387 2146 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 20:55:56.902424 kubelet[2146]: I1112 20:55:56.902425 2146 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 20:55:56.914638 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 12 20:55:56.919437 kubelet[2146]: E1112 20:55:56.919399 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:55:56.928098 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 12 20:55:56.931514 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 12 20:55:56.947287 kubelet[2146]: I1112 20:55:56.947211 2146 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 20:55:56.947801 kubelet[2146]: I1112 20:55:56.947455 2146 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 12 20:55:56.947801 kubelet[2146]: I1112 20:55:56.947465 2146 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 12 20:55:56.947801 kubelet[2146]: I1112 20:55:56.947740 2146 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 20:55:56.949261 kubelet[2146]: E1112 20:55:56.949025 2146 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 12 20:55:56.996201 kubelet[2146]: W1112 20:55:56.996144 2146 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
Nov 12 20:55:56.996313 kubelet[2146]: E1112 20:55:56.996208 2146 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:55:57.026699 kubelet[2146]: E1112 20:55:57.026647 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="1.6s"
Nov 12 20:55:57.049133 kubelet[2146]: I1112 20:55:57.049071 2146 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 20:55:57.049546 kubelet[2146]: E1112 20:55:57.049516 2146 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost"
Nov 12 20:55:57.133504 kubelet[2146]: W1112 20:55:57.133448 2146 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
Nov 12 20:55:57.133504 kubelet[2146]: E1112 20:55:57.133508 2146 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:55:57.138447 systemd[1]: Created slice kubepods-burstable-pod55c3b228e757dd561d937280defd812b.slice - libcontainer container kubepods-burstable-pod55c3b228e757dd561d937280defd812b.slice.
Nov 12 20:55:57.165959 systemd[1]: Created slice kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice - libcontainer container kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice.
Nov 12 20:55:57.188791 systemd[1]: Created slice kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice - libcontainer container kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice.
Nov 12 20:55:57.221825 kubelet[2146]: I1112 20:55:57.221757 2146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55c3b228e757dd561d937280defd812b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"55c3b228e757dd561d937280defd812b\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:55:57.221825 kubelet[2146]: I1112 20:55:57.221811 2146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55c3b228e757dd561d937280defd812b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"55c3b228e757dd561d937280defd812b\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:55:57.221825 kubelet[2146]: I1112 20:55:57.221835 2146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:55:57.222033 kubelet[2146]: I1112 20:55:57.221852 2146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:55:57.222033 kubelet[2146]: I1112 20:55:57.221866 2146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost"
Nov 12 20:55:57.222033 kubelet[2146]: I1112 20:55:57.221880 2146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55c3b228e757dd561d937280defd812b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"55c3b228e757dd561d937280defd812b\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:55:57.222033 kubelet[2146]: I1112 20:55:57.221894 2146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:55:57.222033 kubelet[2146]: I1112 20:55:57.221907 2146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:55:57.222158 kubelet[2146]: I1112 20:55:57.221920 2146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:55:57.251028 kubelet[2146]: I1112 20:55:57.250991 2146 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 20:55:57.251468 kubelet[2146]: E1112 20:55:57.251422 2146 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost"
Nov 12 20:55:57.465375 kubelet[2146]: E1112 20:55:57.465210 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:57.466104 containerd[1461]: time="2024-11-12T20:55:57.466064399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:55c3b228e757dd561d937280defd812b,Namespace:kube-system,Attempt:0,}"
Nov 12 20:55:57.487407 kubelet[2146]: E1112 20:55:57.487362 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:57.487880 containerd[1461]: time="2024-11-12T20:55:57.487844362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,}"
Nov 12 20:55:57.491163 kubelet[2146]: E1112 20:55:57.491138 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:57.491569 containerd[1461]: time="2024-11-12T20:55:57.491538031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,}"
Nov 12 20:55:57.511123 update_engine[1450]: I20241112 20:55:57.511022 1450 update_attempter.cc:509] Updating boot flags...
Nov 12 20:55:57.553410 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2185) Nov 12 20:55:57.623582 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2188) Nov 12 20:55:57.632220 kubelet[2146]: E1112 20:55:57.632129 2146 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:57.638410 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2188) Nov 12 20:55:57.656243 kubelet[2146]: I1112 20:55:57.655831 2146 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:55:57.656243 kubelet[2146]: E1112 20:55:57.656221 2146 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Nov 12 20:55:58.048954 kubelet[2146]: W1112 20:55:58.048871 2146 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Nov 12 20:55:58.049390 kubelet[2146]: E1112 20:55:58.048955 2146 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:58.402104 kubelet[2146]: W1112 
20:55:58.401956 2146 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Nov 12 20:55:58.402104 kubelet[2146]: E1112 20:55:58.402046 2146 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:58.458096 kubelet[2146]: I1112 20:55:58.458066 2146 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:55:58.458438 kubelet[2146]: E1112 20:55:58.458404 2146 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Nov 12 20:55:58.628052 kubelet[2146]: E1112 20:55:58.627986 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="3.2s" Nov 12 20:55:59.158650 kubelet[2146]: W1112 20:55:59.158556 2146 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Nov 12 20:55:59.158650 kubelet[2146]: E1112 20:55:59.158637 2146 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:55:59.542966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3379100489.mount: Deactivated successfully. Nov 12 20:55:59.793893 containerd[1461]: time="2024-11-12T20:55:59.793708054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:59.796312 containerd[1461]: time="2024-11-12T20:55:59.796239120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:55:59.799607 containerd[1461]: time="2024-11-12T20:55:59.798669025Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:59.806852 containerd[1461]: time="2024-11-12T20:55:59.806743087Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:55:59.810179 containerd[1461]: time="2024-11-12T20:55:59.810126690Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:59.814763 containerd[1461]: time="2024-11-12T20:55:59.814698704Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:59.816571 containerd[1461]: time="2024-11-12T20:55:59.816495619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:55:59.818655 containerd[1461]: 
time="2024-11-12T20:55:59.818555655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:59.819976 containerd[1461]: time="2024-11-12T20:55:59.819647303Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.353494004s" Nov 12 20:55:59.821696 containerd[1461]: time="2024-11-12T20:55:59.821433929Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.333507291s" Nov 12 20:55:59.946915 containerd[1461]: time="2024-11-12T20:55:59.946856276Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.455259765s" Nov 12 20:56:00.060670 kubelet[2146]: I1112 20:56:00.060612 2146 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:56:00.061191 kubelet[2146]: E1112 20:56:00.061133 2146 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Nov 12 20:56:00.145600 kubelet[2146]: W1112 20:56:00.145550 2146 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Nov 12 20:56:00.145716 kubelet[2146]: E1112 20:56:00.145606 2146 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:56:00.310796 containerd[1461]: time="2024-11-12T20:56:00.310455745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:00.310796 containerd[1461]: time="2024-11-12T20:56:00.310532090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:00.310796 containerd[1461]: time="2024-11-12T20:56:00.310508656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:00.310796 containerd[1461]: time="2024-11-12T20:56:00.310558719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:00.310796 containerd[1461]: time="2024-11-12T20:56:00.310558078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:00.310796 containerd[1461]: time="2024-11-12T20:56:00.310568709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:00.310796 containerd[1461]: time="2024-11-12T20:56:00.310655213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:00.310796 containerd[1461]: time="2024-11-12T20:56:00.310692143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:00.315189 containerd[1461]: time="2024-11-12T20:56:00.313718404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:00.315189 containerd[1461]: time="2024-11-12T20:56:00.313758801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:00.315189 containerd[1461]: time="2024-11-12T20:56:00.313768670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:00.315189 containerd[1461]: time="2024-11-12T20:56:00.313843311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:00.334519 systemd[1]: Started cri-containerd-99a46a33d5edaf6745bd2d0c94281576e7d70f34f071db0199c4aa11c7aafa3a.scope - libcontainer container 99a46a33d5edaf6745bd2d0c94281576e7d70f34f071db0199c4aa11c7aafa3a. Nov 12 20:56:00.339557 systemd[1]: Started cri-containerd-5080e844bc8eda5627c5d39ac75f684e7dbccd7c069c1f726cda184c899f778a.scope - libcontainer container 5080e844bc8eda5627c5d39ac75f684e7dbccd7c069c1f726cda184c899f778a. Nov 12 20:56:00.341553 systemd[1]: Started cri-containerd-7b92e5f7ed4e2d58be7a322088f15e5abb6040305ad5bc5b5fd0f444f621d5b4.scope - libcontainer container 7b92e5f7ed4e2d58be7a322088f15e5abb6040305ad5bc5b5fd0f444f621d5b4. 
Nov 12 20:56:00.386509 containerd[1461]: time="2024-11-12T20:56:00.384158420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,} returns sandbox id \"99a46a33d5edaf6745bd2d0c94281576e7d70f34f071db0199c4aa11c7aafa3a\"" Nov 12 20:56:00.386673 containerd[1461]: time="2024-11-12T20:56:00.386587941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:55c3b228e757dd561d937280defd812b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5080e844bc8eda5627c5d39ac75f684e7dbccd7c069c1f726cda184c899f778a\"" Nov 12 20:56:00.389034 kubelet[2146]: E1112 20:56:00.388797 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:00.389034 kubelet[2146]: E1112 20:56:00.388797 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:00.391906 containerd[1461]: time="2024-11-12T20:56:00.391871929Z" level=info msg="CreateContainer within sandbox \"99a46a33d5edaf6745bd2d0c94281576e7d70f34f071db0199c4aa11c7aafa3a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:56:00.391995 containerd[1461]: time="2024-11-12T20:56:00.391933716Z" level=info msg="CreateContainer within sandbox \"5080e844bc8eda5627c5d39ac75f684e7dbccd7c069c1f726cda184c899f778a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:56:00.396294 containerd[1461]: time="2024-11-12T20:56:00.396235634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b92e5f7ed4e2d58be7a322088f15e5abb6040305ad5bc5b5fd0f444f621d5b4\"" Nov 12 
20:56:00.397073 kubelet[2146]: E1112 20:56:00.397047 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:00.399000 containerd[1461]: time="2024-11-12T20:56:00.398870705Z" level=info msg="CreateContainer within sandbox \"7b92e5f7ed4e2d58be7a322088f15e5abb6040305ad5bc5b5fd0f444f621d5b4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:56:00.419944 containerd[1461]: time="2024-11-12T20:56:00.419862965Z" level=info msg="CreateContainer within sandbox \"5080e844bc8eda5627c5d39ac75f684e7dbccd7c069c1f726cda184c899f778a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"74241295c2bc172aebc791fb3bf30af64d84287eeee4c855ce58331fdb445d84\"" Nov 12 20:56:00.420750 containerd[1461]: time="2024-11-12T20:56:00.420714327Z" level=info msg="StartContainer for \"74241295c2bc172aebc791fb3bf30af64d84287eeee4c855ce58331fdb445d84\"" Nov 12 20:56:00.429798 containerd[1461]: time="2024-11-12T20:56:00.429738670Z" level=info msg="CreateContainer within sandbox \"99a46a33d5edaf6745bd2d0c94281576e7d70f34f071db0199c4aa11c7aafa3a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bfed472bd43d25ef50f1bec2e8335ec1ab76d2bb4b3387f9be50264ce6655cb6\"" Nov 12 20:56:00.431012 containerd[1461]: time="2024-11-12T20:56:00.430977546Z" level=info msg="StartContainer for \"bfed472bd43d25ef50f1bec2e8335ec1ab76d2bb4b3387f9be50264ce6655cb6\"" Nov 12 20:56:00.433214 containerd[1461]: time="2024-11-12T20:56:00.433169438Z" level=info msg="CreateContainer within sandbox \"7b92e5f7ed4e2d58be7a322088f15e5abb6040305ad5bc5b5fd0f444f621d5b4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a4d24809d95a80a72cdf5022106dcba11bd201e9b26d51ad13fd4d17d403b788\"" Nov 12 20:56:00.433673 containerd[1461]: time="2024-11-12T20:56:00.433624710Z" level=info 
msg="StartContainer for \"a4d24809d95a80a72cdf5022106dcba11bd201e9b26d51ad13fd4d17d403b788\"" Nov 12 20:56:00.451533 systemd[1]: Started cri-containerd-74241295c2bc172aebc791fb3bf30af64d84287eeee4c855ce58331fdb445d84.scope - libcontainer container 74241295c2bc172aebc791fb3bf30af64d84287eeee4c855ce58331fdb445d84. Nov 12 20:56:00.462615 systemd[1]: Started cri-containerd-a4d24809d95a80a72cdf5022106dcba11bd201e9b26d51ad13fd4d17d403b788.scope - libcontainer container a4d24809d95a80a72cdf5022106dcba11bd201e9b26d51ad13fd4d17d403b788. Nov 12 20:56:00.468443 systemd[1]: Started cri-containerd-bfed472bd43d25ef50f1bec2e8335ec1ab76d2bb4b3387f9be50264ce6655cb6.scope - libcontainer container bfed472bd43d25ef50f1bec2e8335ec1ab76d2bb4b3387f9be50264ce6655cb6. Nov 12 20:56:00.529638 containerd[1461]: time="2024-11-12T20:56:00.529461848Z" level=info msg="StartContainer for \"bfed472bd43d25ef50f1bec2e8335ec1ab76d2bb4b3387f9be50264ce6655cb6\" returns successfully" Nov 12 20:56:00.529638 containerd[1461]: time="2024-11-12T20:56:00.529498057Z" level=info msg="StartContainer for \"74241295c2bc172aebc791fb3bf30af64d84287eeee4c855ce58331fdb445d84\" returns successfully" Nov 12 20:56:00.529638 containerd[1461]: time="2024-11-12T20:56:00.529485322Z" level=info msg="StartContainer for \"a4d24809d95a80a72cdf5022106dcba11bd201e9b26d51ad13fd4d17d403b788\" returns successfully" Nov 12 20:56:00.647892 kubelet[2146]: E1112 20:56:00.647762 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:00.652992 kubelet[2146]: E1112 20:56:00.652835 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:00.655725 kubelet[2146]: E1112 20:56:00.655646 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:01.657300 kubelet[2146]: E1112 20:56:01.657270 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:01.874638 kubelet[2146]: E1112 20:56:01.874586 2146 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 20:56:02.091703 kubelet[2146]: E1112 20:56:02.091651 2146 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:56:02.436712 kubelet[2146]: E1112 20:56:02.436554 2146 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:56:02.781844 kubelet[2146]: E1112 20:56:02.781730 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:02.951183 kubelet[2146]: E1112 20:56:02.951119 2146 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:56:03.263070 kubelet[2146]: I1112 20:56:03.262999 2146 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:56:03.271752 kubelet[2146]: I1112 20:56:03.271694 2146 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Nov 12 20:56:03.271752 kubelet[2146]: E1112 20:56:03.271741 2146 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 12 20:56:03.280011 kubelet[2146]: E1112 20:56:03.279972 2146 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:56:03.380647 kubelet[2146]: E1112 20:56:03.380586 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:56:03.481346 kubelet[2146]: E1112 20:56:03.481302 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:56:03.582257 kubelet[2146]: E1112 20:56:03.582205 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:56:03.683276 kubelet[2146]: E1112 20:56:03.683213 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:56:03.783906 kubelet[2146]: E1112 20:56:03.783842 2146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:56:04.368359 systemd[1]: Reloading requested from client PID 2439 ('systemctl') (unit session-7.scope)... Nov 12 20:56:04.368375 systemd[1]: Reloading... Nov 12 20:56:04.475467 zram_generator::config[2481]: No configuration found. Nov 12 20:56:04.588396 kubelet[2146]: I1112 20:56:04.588325 2146 apiserver.go:52] "Watching apiserver" Nov 12 20:56:04.612270 kubelet[2146]: I1112 20:56:04.612223 2146 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 12 20:56:04.745729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:56:04.842486 systemd[1]: Reloading finished in 473 ms. Nov 12 20:56:04.885204 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:04.905937 systemd[1]: kubelet.service: Deactivated successfully. 
Nov 12 20:56:04.906213 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:04.906269 systemd[1]: kubelet.service: Consumed 1.068s CPU time, 118.0M memory peak, 0B memory swap peak. Nov 12 20:56:04.917572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:05.073309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:05.078375 (kubelet)[2523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:56:05.115551 kubelet[2523]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:56:05.115551 kubelet[2523]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:56:05.115551 kubelet[2523]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:56:05.115965 kubelet[2523]: I1112 20:56:05.115595 2523 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:56:05.122725 kubelet[2523]: I1112 20:56:05.122672 2523 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 20:56:05.122725 kubelet[2523]: I1112 20:56:05.122703 2523 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:56:05.122977 kubelet[2523]: I1112 20:56:05.122947 2523 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 20:56:05.124438 kubelet[2523]: I1112 20:56:05.124408 2523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:56:05.126588 kubelet[2523]: I1112 20:56:05.126517 2523 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:56:05.130262 kubelet[2523]: E1112 20:56:05.130211 2523 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 20:56:05.130262 kubelet[2523]: I1112 20:56:05.130249 2523 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 12 20:56:05.135781 kubelet[2523]: I1112 20:56:05.135735 2523 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:56:05.135937 kubelet[2523]: I1112 20:56:05.135870 2523 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 20:56:05.136063 kubelet[2523]: I1112 20:56:05.136016 2523 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:56:05.136220 kubelet[2523]: I1112 20:56:05.136053 2523 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Nov 12 20:56:05.136220 kubelet[2523]: I1112 20:56:05.136219 2523 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:56:05.136394 kubelet[2523]: I1112 20:56:05.136229 2523 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 20:56:05.136394 kubelet[2523]: I1112 20:56:05.136259 2523 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:56:05.136394 kubelet[2523]: I1112 20:56:05.136384 2523 kubelet.go:408] "Attempting to sync node with API server" Nov 12 20:56:05.136394 kubelet[2523]: I1112 20:56:05.136395 2523 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:56:05.136510 kubelet[2523]: I1112 20:56:05.136428 2523 kubelet.go:314] "Adding apiserver pod source" Nov 12 20:56:05.136510 kubelet[2523]: I1112 20:56:05.136443 2523 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:56:05.137303 kubelet[2523]: I1112 20:56:05.137281 2523 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:56:05.137900 kubelet[2523]: I1112 20:56:05.137720 2523 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:56:05.138197 kubelet[2523]: I1112 20:56:05.138169 2523 server.go:1269] "Started kubelet" Nov 12 20:56:05.140424 kubelet[2523]: I1112 20:56:05.138416 2523 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:56:05.140424 kubelet[2523]: I1112 20:56:05.138544 2523 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:56:05.140424 kubelet[2523]: I1112 20:56:05.138860 2523 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:56:05.140424 kubelet[2523]: I1112 20:56:05.139380 2523 server.go:460] "Adding debug handlers to kubelet server" Nov 12 20:56:05.140424 
kubelet[2523]: I1112 20:56:05.140257 2523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 20:56:05.140594 kubelet[2523]: I1112 20:56:05.140482 2523 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 12 20:56:05.145601 kubelet[2523]: I1112 20:56:05.145560 2523 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 12 20:56:05.145708 kubelet[2523]: I1112 20:56:05.145697 2523 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 12 20:56:05.145902 kubelet[2523]: I1112 20:56:05.145879 2523 reconciler.go:26] "Reconciler: start to sync state"
Nov 12 20:56:05.148994 kubelet[2523]: E1112 20:56:05.148959 2523 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:56:05.152242 kubelet[2523]: I1112 20:56:05.151977 2523 factory.go:221] Registration of the systemd container factory successfully
Nov 12 20:56:05.152470 kubelet[2523]: I1112 20:56:05.152451 2523 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 20:56:05.153989 kubelet[2523]: I1112 20:56:05.153957 2523 factory.go:221] Registration of the containerd container factory successfully
Nov 12 20:56:05.155740 kubelet[2523]: E1112 20:56:05.155719 2523 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 20:56:05.156441 kubelet[2523]: I1112 20:56:05.156388 2523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 20:56:05.159581 kubelet[2523]: I1112 20:56:05.159412 2523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 20:56:05.159581 kubelet[2523]: I1112 20:56:05.159468 2523 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 20:56:05.159581 kubelet[2523]: I1112 20:56:05.159485 2523 kubelet.go:2321] "Starting kubelet main sync loop"
Nov 12 20:56:05.159581 kubelet[2523]: E1112 20:56:05.159557 2523 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 20:56:05.196396 kubelet[2523]: I1112 20:56:05.196363 2523 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 20:56:05.196396 kubelet[2523]: I1112 20:56:05.196386 2523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 20:56:05.196586 kubelet[2523]: I1112 20:56:05.196415 2523 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:56:05.196655 kubelet[2523]: I1112 20:56:05.196635 2523 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 12 20:56:05.196677 kubelet[2523]: I1112 20:56:05.196654 2523 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 12 20:56:05.196702 kubelet[2523]: I1112 20:56:05.196677 2523 policy_none.go:49] "None policy: Start"
Nov 12 20:56:05.197552 kubelet[2523]: I1112 20:56:05.197522 2523 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 20:56:05.197552 kubelet[2523]: I1112 20:56:05.197555 2523 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 20:56:05.197702 kubelet[2523]: I1112 20:56:05.197688 2523 state_mem.go:75] "Updated machine memory state"
Nov 12 20:56:05.202497 kubelet[2523]: I1112 20:56:05.202432 2523 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 20:56:05.202689 kubelet[2523]: I1112 20:56:05.202668 2523 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 12 20:56:05.202806 kubelet[2523]: I1112 20:56:05.202684 2523 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 12 20:56:05.203381 kubelet[2523]: I1112 20:56:05.203194 2523 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 20:56:05.307758 kubelet[2523]: I1112 20:56:05.307711 2523 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 20:56:05.313920 kubelet[2523]: I1112 20:56:05.313886 2523 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Nov 12 20:56:05.314049 kubelet[2523]: I1112 20:56:05.313979 2523 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Nov 12 20:56:05.329021 sudo[2562]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Nov 12 20:56:05.329488 sudo[2562]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Nov 12 20:56:05.347278 kubelet[2523]: I1112 20:56:05.347200 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55c3b228e757dd561d937280defd812b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"55c3b228e757dd561d937280defd812b\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:56:05.347278 kubelet[2523]: I1112 20:56:05.347263 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:56:05.347278 kubelet[2523]: I1112 20:56:05.347291 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:56:05.347538 kubelet[2523]: I1112 20:56:05.347311 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:56:05.347538 kubelet[2523]: I1112 20:56:05.347351 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:56:05.347538 kubelet[2523]: I1112 20:56:05.347371 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55c3b228e757dd561d937280defd812b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"55c3b228e757dd561d937280defd812b\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:56:05.347538 kubelet[2523]: I1112 20:56:05.347390 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55c3b228e757dd561d937280defd812b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"55c3b228e757dd561d937280defd812b\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:56:05.347538 kubelet[2523]: I1112 20:56:05.347431 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:56:05.347689 kubelet[2523]: I1112 20:56:05.347455 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost"
Nov 12 20:56:05.570844 kubelet[2523]: E1112 20:56:05.570804 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:05.575982 kubelet[2523]: E1112 20:56:05.575883 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:05.575982 kubelet[2523]: E1112 20:56:05.575908 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:05.810878 sudo[2562]: pam_unix(sudo:session): session closed for user root
Nov 12 20:56:06.137182 kubelet[2523]: I1112 20:56:06.137057 2523 apiserver.go:52] "Watching apiserver"
Nov 12 20:56:06.148707 kubelet[2523]: I1112 20:56:06.148622 2523 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 12 20:56:06.173969 kubelet[2523]: E1112 20:56:06.173918 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:06.175300 kubelet[2523]: E1112 20:56:06.175171 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:06.653720 kubelet[2523]: E1112 20:56:06.653612 2523 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 12 20:56:06.653889 kubelet[2523]: E1112 20:56:06.653814 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:06.898012 kubelet[2523]: I1112 20:56:06.897928 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.897909471 podStartE2EDuration="1.897909471s" podCreationTimestamp="2024-11-12 20:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:06.897025121 +0000 UTC m=+1.814774962" watchObservedRunningTime="2024-11-12 20:56:06.897909471 +0000 UTC m=+1.815659312"
Nov 12 20:56:07.174706 kubelet[2523]: E1112 20:56:07.174677 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:07.175296 kubelet[2523]: E1112 20:56:07.174756 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:07.570003 kubelet[2523]: I1112 20:56:07.569946 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.569929557 podStartE2EDuration="2.569929557s" podCreationTimestamp="2024-11-12 20:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:07.569776479 +0000 UTC m=+2.487526320" watchObservedRunningTime="2024-11-12 20:56:07.569929557 +0000 UTC m=+2.487679398"
Nov 12 20:56:07.910239 kubelet[2523]: I1112 20:56:07.909951 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.909928362 podStartE2EDuration="2.909928362s" podCreationTimestamp="2024-11-12 20:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:07.8970547 +0000 UTC m=+2.814804541" watchObservedRunningTime="2024-11-12 20:56:07.909928362 +0000 UTC m=+2.827678203"
Nov 12 20:56:09.043703 sudo[1639]: pam_unix(sudo:session): session closed for user root
Nov 12 20:56:09.050535 sshd[1635]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:09.063797 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:49318.service: Deactivated successfully.
Nov 12 20:56:09.065901 systemd[1]: session-7.scope: Deactivated successfully.
Nov 12 20:56:09.066086 systemd[1]: session-7.scope: Consumed 4.818s CPU time, 157.9M memory peak, 0B memory swap peak.
Nov 12 20:56:09.066678 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit.
Nov 12 20:56:09.067723 systemd-logind[1444]: Removed session 7.
Nov 12 20:56:09.931146 kubelet[2523]: I1112 20:56:09.931095 2523 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 12 20:56:09.931650 containerd[1461]: time="2024-11-12T20:56:09.931604115Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 12 20:56:09.931986 kubelet[2523]: I1112 20:56:09.931963 2523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 12 20:56:10.846790 systemd[1]: Created slice kubepods-besteffort-podcacc2360_fbd0_4cd9_9156_48deb0eb33f3.slice - libcontainer container kubepods-besteffort-podcacc2360_fbd0_4cd9_9156_48deb0eb33f3.slice.
Nov 12 20:56:10.862245 systemd[1]: Created slice kubepods-burstable-podbe7b042f_12a1_49f2_bc59_5317b3dc38ab.slice - libcontainer container kubepods-burstable-podbe7b042f_12a1_49f2_bc59_5317b3dc38ab.slice.
Nov 12 20:56:10.981248 kubelet[2523]: I1112 20:56:10.981174 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbb2w\" (UniqueName: \"kubernetes.io/projected/cacc2360-fbd0-4cd9-9156-48deb0eb33f3-kube-api-access-pbb2w\") pod \"kube-proxy-wgd54\" (UID: \"cacc2360-fbd0-4cd9-9156-48deb0eb33f3\") " pod="kube-system/kube-proxy-wgd54"
Nov 12 20:56:10.981248 kubelet[2523]: I1112 20:56:10.981244 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-lib-modules\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.981799 kubelet[2523]: I1112 20:56:10.981270 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cacc2360-fbd0-4cd9-9156-48deb0eb33f3-lib-modules\") pod \"kube-proxy-wgd54\" (UID: \"cacc2360-fbd0-4cd9-9156-48deb0eb33f3\") " pod="kube-system/kube-proxy-wgd54"
Nov 12 20:56:10.981799 kubelet[2523]: I1112 20:56:10.981290 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-bpf-maps\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.981799 kubelet[2523]: I1112 20:56:10.981309 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-host-proc-sys-kernel\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.981799 kubelet[2523]: I1112 20:56:10.981329 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be7b042f-12a1-49f2-bc59-5317b3dc38ab-hubble-tls\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.981799 kubelet[2523]: I1112 20:56:10.981376 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-host-proc-sys-net\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.981799 kubelet[2523]: I1112 20:56:10.981400 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-hostproc\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.981947 kubelet[2523]: I1112 20:56:10.981422 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cni-path\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.981947 kubelet[2523]: I1112 20:56:10.981459 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cilium-config-path\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.981947 kubelet[2523]: I1112 20:56:10.981480 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h64w2\" (UniqueName: \"kubernetes.io/projected/be7b042f-12a1-49f2-bc59-5317b3dc38ab-kube-api-access-h64w2\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.981947 kubelet[2523]: I1112 20:56:10.981502 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be7b042f-12a1-49f2-bc59-5317b3dc38ab-clustermesh-secrets\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.981947 kubelet[2523]: I1112 20:56:10.981524 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cacc2360-fbd0-4cd9-9156-48deb0eb33f3-kube-proxy\") pod \"kube-proxy-wgd54\" (UID: \"cacc2360-fbd0-4cd9-9156-48deb0eb33f3\") " pod="kube-system/kube-proxy-wgd54"
Nov 12 20:56:10.981947 kubelet[2523]: I1112 20:56:10.981545 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cilium-run\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.982109 kubelet[2523]: I1112 20:56:10.981565 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cilium-cgroup\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.982109 kubelet[2523]: I1112 20:56:10.981583 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-etc-cni-netd\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.982109 kubelet[2523]: I1112 20:56:10.981621 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-xtables-lock\") pod \"cilium-44pcc\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " pod="kube-system/cilium-44pcc"
Nov 12 20:56:10.982109 kubelet[2523]: I1112 20:56:10.981646 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cacc2360-fbd0-4cd9-9156-48deb0eb33f3-xtables-lock\") pod \"kube-proxy-wgd54\" (UID: \"cacc2360-fbd0-4cd9-9156-48deb0eb33f3\") " pod="kube-system/kube-proxy-wgd54"
Nov 12 20:56:10.994904 systemd[1]: Created slice kubepods-besteffort-pod1052b17c_b8b0_4bc2_a2e4_496ea70c4ec2.slice - libcontainer container kubepods-besteffort-pod1052b17c_b8b0_4bc2_a2e4_496ea70c4ec2.slice.
Nov 12 20:56:11.158939 kubelet[2523]: E1112 20:56:11.158781 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:11.159687 containerd[1461]: time="2024-11-12T20:56:11.159629559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wgd54,Uid:cacc2360-fbd0-4cd9-9156-48deb0eb33f3,Namespace:kube-system,Attempt:0,}"
Nov 12 20:56:11.168314 kubelet[2523]: E1112 20:56:11.168258 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:11.169024 containerd[1461]: time="2024-11-12T20:56:11.168979426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-44pcc,Uid:be7b042f-12a1-49f2-bc59-5317b3dc38ab,Namespace:kube-system,Attempt:0,}"
Nov 12 20:56:11.183241 kubelet[2523]: I1112 20:56:11.183179 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2-cilium-config-path\") pod \"cilium-operator-5d85765b45-2zh85\" (UID: \"1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2\") " pod="kube-system/cilium-operator-5d85765b45-2zh85"
Nov 12 20:56:11.183241 kubelet[2523]: I1112 20:56:11.183233 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsxtw\" (UniqueName: \"kubernetes.io/projected/1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2-kube-api-access-gsxtw\") pod \"cilium-operator-5d85765b45-2zh85\" (UID: \"1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2\") " pod="kube-system/cilium-operator-5d85765b45-2zh85"
Nov 12 20:56:11.196639 containerd[1461]: time="2024-11-12T20:56:11.195767228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:56:11.196639 containerd[1461]: time="2024-11-12T20:56:11.196482196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:56:11.196639 containerd[1461]: time="2024-11-12T20:56:11.196497094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:11.196639 containerd[1461]: time="2024-11-12T20:56:11.196594728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:11.203737 containerd[1461]: time="2024-11-12T20:56:11.203635993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:56:11.204088 containerd[1461]: time="2024-11-12T20:56:11.203721594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:56:11.204088 containerd[1461]: time="2024-11-12T20:56:11.203867740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:11.204088 containerd[1461]: time="2024-11-12T20:56:11.203999608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:11.218578 systemd[1]: Started cri-containerd-601f25aefdc8d942da7f41ce19fea04339e20a219de910dc2cd3727fc6ae20b6.scope - libcontainer container 601f25aefdc8d942da7f41ce19fea04339e20a219de910dc2cd3727fc6ae20b6.
Nov 12 20:56:11.222327 systemd[1]: Started cri-containerd-c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e.scope - libcontainer container c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e.
Nov 12 20:56:11.244821 containerd[1461]: time="2024-11-12T20:56:11.244778597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wgd54,Uid:cacc2360-fbd0-4cd9-9156-48deb0eb33f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"601f25aefdc8d942da7f41ce19fea04339e20a219de910dc2cd3727fc6ae20b6\""
Nov 12 20:56:11.246056 kubelet[2523]: E1112 20:56:11.245980 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:11.249521 containerd[1461]: time="2024-11-12T20:56:11.249090567Z" level=info msg="CreateContainer within sandbox \"601f25aefdc8d942da7f41ce19fea04339e20a219de910dc2cd3727fc6ae20b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 12 20:56:11.251831 containerd[1461]: time="2024-11-12T20:56:11.251775930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-44pcc,Uid:be7b042f-12a1-49f2-bc59-5317b3dc38ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\""
Nov 12 20:56:11.252545 kubelet[2523]: E1112 20:56:11.252520 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:11.253985 containerd[1461]: time="2024-11-12T20:56:11.253952152Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Nov 12 20:56:11.278103 kubelet[2523]: E1112 20:56:11.278063 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:11.599632 kubelet[2523]: E1112 20:56:11.599584 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:11.600169 containerd[1461]: time="2024-11-12T20:56:11.600127473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2zh85,Uid:1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2,Namespace:kube-system,Attempt:0,}"
Nov 12 20:56:12.184665 kubelet[2523]: E1112 20:56:12.184628 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:13.186106 kubelet[2523]: E1112 20:56:13.186074 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:13.198038 containerd[1461]: time="2024-11-12T20:56:13.197963629Z" level=info msg="CreateContainer within sandbox \"601f25aefdc8d942da7f41ce19fea04339e20a219de910dc2cd3727fc6ae20b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e3cd4b5b2de176ed8ca8f11a032c783cb80d76bc61bfa3465400816e9159c498\""
Nov 12 20:56:13.201423 containerd[1461]: time="2024-11-12T20:56:13.199083939Z" level=info msg="StartContainer for \"e3cd4b5b2de176ed8ca8f11a032c783cb80d76bc61bfa3465400816e9159c498\""
Nov 12 20:56:13.215609 containerd[1461]: time="2024-11-12T20:56:13.214293558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:56:13.215609 containerd[1461]: time="2024-11-12T20:56:13.214395921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:56:13.215609 containerd[1461]: time="2024-11-12T20:56:13.214423173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:13.215609 containerd[1461]: time="2024-11-12T20:56:13.214554801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:13.235414 systemd[1]: run-containerd-runc-k8s.io-4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5-runc.rktCjA.mount: Deactivated successfully.
Nov 12 20:56:13.248545 systemd[1]: Started cri-containerd-4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5.scope - libcontainer container 4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5.
Nov 12 20:56:13.250322 systemd[1]: Started cri-containerd-e3cd4b5b2de176ed8ca8f11a032c783cb80d76bc61bfa3465400816e9159c498.scope - libcontainer container e3cd4b5b2de176ed8ca8f11a032c783cb80d76bc61bfa3465400816e9159c498.
Nov 12 20:56:13.322884 containerd[1461]: time="2024-11-12T20:56:13.322829335Z" level=info msg="StartContainer for \"e3cd4b5b2de176ed8ca8f11a032c783cb80d76bc61bfa3465400816e9159c498\" returns successfully"
Nov 12 20:56:13.323021 containerd[1461]: time="2024-11-12T20:56:13.322906291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2zh85,Uid:1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5\""
Nov 12 20:56:13.324072 kubelet[2523]: E1112 20:56:13.323966 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:13.539359 kubelet[2523]: E1112 20:56:13.539206 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:14.189043 kubelet[2523]: E1112 20:56:14.189009 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:14.190529 kubelet[2523]: E1112 20:56:14.190499 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:14.198065 kubelet[2523]: I1112 20:56:14.198004 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wgd54" podStartSLOduration=4.19798427 podStartE2EDuration="4.19798427s" podCreationTimestamp="2024-11-12 20:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:14.197905993 +0000 UTC m=+9.115655834" watchObservedRunningTime="2024-11-12 20:56:14.19798427 +0000 UTC m=+9.115734111"
Nov 12 20:56:15.191758 kubelet[2523]: E1112 20:56:15.191716 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:15.871981 kubelet[2523]: E1112 20:56:15.871937 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:18.199904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143554694.mount: Deactivated successfully.
Nov 12 20:56:22.439360 containerd[1461]: time="2024-11-12T20:56:22.439218383Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:22.442369 containerd[1461]: time="2024-11-12T20:56:22.442286183Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735287"
Nov 12 20:56:22.443960 containerd[1461]: time="2024-11-12T20:56:22.443890681Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:22.446011 containerd[1461]: time="2024-11-12T20:56:22.445961717Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.191961765s"
Nov 12 20:56:22.446090 containerd[1461]: time="2024-11-12T20:56:22.446016209Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Nov 12 20:56:22.447310 containerd[1461]: time="2024-11-12T20:56:22.447272432Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Nov 12 20:56:22.448637 containerd[1461]: time="2024-11-12T20:56:22.448599268Z" level=info msg="CreateContainer within sandbox \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 12 20:56:22.468726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount351557156.mount: Deactivated successfully.
Nov 12 20:56:22.471642 containerd[1461]: time="2024-11-12T20:56:22.471586857Z" level=info msg="CreateContainer within sandbox \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21\""
Nov 12 20:56:22.472557 containerd[1461]: time="2024-11-12T20:56:22.472513799Z" level=info msg="StartContainer for \"e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21\""
Nov 12 20:56:22.506483 systemd[1]: run-containerd-runc-k8s.io-e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21-runc.PWxKkx.mount: Deactivated successfully.
Nov 12 20:56:22.527615 systemd[1]: Started cri-containerd-e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21.scope - libcontainer container e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21.
Nov 12 20:56:22.559199 containerd[1461]: time="2024-11-12T20:56:22.559147433Z" level=info msg="StartContainer for \"e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21\" returns successfully"
Nov 12 20:56:22.572709 systemd[1]: cri-containerd-e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21.scope: Deactivated successfully.
Nov 12 20:56:23.209198 kubelet[2523]: E1112 20:56:23.209156 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:23.466431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21-rootfs.mount: Deactivated successfully.
Nov 12 20:56:23.707883 containerd[1461]: time="2024-11-12T20:56:23.707798358Z" level=info msg="shim disconnected" id=e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21 namespace=k8s.io Nov 12 20:56:23.707883 containerd[1461]: time="2024-11-12T20:56:23.707879450Z" level=warning msg="cleaning up after shim disconnected" id=e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21 namespace=k8s.io Nov 12 20:56:23.707883 containerd[1461]: time="2024-11-12T20:56:23.707892795Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:56:24.213266 kubelet[2523]: E1112 20:56:24.213177 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:24.216052 containerd[1461]: time="2024-11-12T20:56:24.215995327Z" level=info msg="CreateContainer within sandbox \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 20:56:24.235111 containerd[1461]: time="2024-11-12T20:56:24.235046372Z" level=info msg="CreateContainer within sandbox \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e\"" Nov 12 20:56:24.235757 containerd[1461]: time="2024-11-12T20:56:24.235498663Z" level=info msg="StartContainer for \"426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e\"" Nov 12 20:56:24.269570 systemd[1]: Started cri-containerd-426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e.scope - libcontainer container 426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e. 
Nov 12 20:56:24.306611 containerd[1461]: time="2024-11-12T20:56:24.306541495Z" level=info msg="StartContainer for \"426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e\" returns successfully" Nov 12 20:56:24.323015 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:56:24.323418 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:56:24.323520 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:56:24.328821 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:56:24.329090 systemd[1]: cri-containerd-426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e.scope: Deactivated successfully. Nov 12 20:56:24.355529 containerd[1461]: time="2024-11-12T20:56:24.355445025Z" level=info msg="shim disconnected" id=426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e namespace=k8s.io Nov 12 20:56:24.355529 containerd[1461]: time="2024-11-12T20:56:24.355515778Z" level=warning msg="cleaning up after shim disconnected" id=426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e namespace=k8s.io Nov 12 20:56:24.355529 containerd[1461]: time="2024-11-12T20:56:24.355527260Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:56:24.363738 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:56:24.466608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e-rootfs.mount: Deactivated successfully. Nov 12 20:56:25.198443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1515872987.mount: Deactivated successfully. 
Nov 12 20:56:25.216870 kubelet[2523]: E1112 20:56:25.216831 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:25.219036 containerd[1461]: time="2024-11-12T20:56:25.218997772Z" level=info msg="CreateContainer within sandbox \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 20:56:25.244495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173744693.mount: Deactivated successfully. Nov 12 20:56:25.246611 containerd[1461]: time="2024-11-12T20:56:25.246561548Z" level=info msg="CreateContainer within sandbox \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef\"" Nov 12 20:56:25.247591 containerd[1461]: time="2024-11-12T20:56:25.247549896Z" level=info msg="StartContainer for \"9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef\"" Nov 12 20:56:25.280468 systemd[1]: Started cri-containerd-9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef.scope - libcontainer container 9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef. Nov 12 20:56:25.314429 systemd[1]: cri-containerd-9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef.scope: Deactivated successfully. 
Nov 12 20:56:25.315741 containerd[1461]: time="2024-11-12T20:56:25.315702608Z" level=info msg="StartContainer for \"9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef\" returns successfully" Nov 12 20:56:25.342929 containerd[1461]: time="2024-11-12T20:56:25.342834752Z" level=info msg="shim disconnected" id=9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef namespace=k8s.io Nov 12 20:56:25.342929 containerd[1461]: time="2024-11-12T20:56:25.342923038Z" level=warning msg="cleaning up after shim disconnected" id=9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef namespace=k8s.io Nov 12 20:56:25.342929 containerd[1461]: time="2024-11-12T20:56:25.342936383Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:56:26.223778 kubelet[2523]: E1112 20:56:26.223529 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:26.226281 containerd[1461]: time="2024-11-12T20:56:26.226021008Z" level=info msg="CreateContainer within sandbox \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 20:56:26.249664 containerd[1461]: time="2024-11-12T20:56:26.249614544Z" level=info msg="CreateContainer within sandbox \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143\"" Nov 12 20:56:26.251854 containerd[1461]: time="2024-11-12T20:56:26.251045033Z" level=info msg="StartContainer for \"86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143\"" Nov 12 20:56:26.252013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933002303.mount: Deactivated successfully. 
Nov 12 20:56:26.293646 systemd[1]: Started cri-containerd-86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143.scope - libcontainer container 86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143. Nov 12 20:56:26.323770 systemd[1]: cri-containerd-86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143.scope: Deactivated successfully. Nov 12 20:56:26.325884 containerd[1461]: time="2024-11-12T20:56:26.325819951Z" level=info msg="StartContainer for \"86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143\" returns successfully" Nov 12 20:56:26.466138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143-rootfs.mount: Deactivated successfully. Nov 12 20:56:26.608413 containerd[1461]: time="2024-11-12T20:56:26.608312788Z" level=info msg="shim disconnected" id=86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143 namespace=k8s.io Nov 12 20:56:26.608413 containerd[1461]: time="2024-11-12T20:56:26.608404310Z" level=warning msg="cleaning up after shim disconnected" id=86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143 namespace=k8s.io Nov 12 20:56:26.608413 containerd[1461]: time="2024-11-12T20:56:26.608420010Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:56:26.634754 containerd[1461]: time="2024-11-12T20:56:26.634686671Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:26.638638 containerd[1461]: time="2024-11-12T20:56:26.638548151Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907225" Nov 12 20:56:26.640157 containerd[1461]: time="2024-11-12T20:56:26.640112272Z" level=info msg="ImageCreate event 
name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:26.641582 containerd[1461]: time="2024-11-12T20:56:26.641486836Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.194157196s" Nov 12 20:56:26.641582 containerd[1461]: time="2024-11-12T20:56:26.641586593Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 12 20:56:26.643745 containerd[1461]: time="2024-11-12T20:56:26.643716888Z" level=info msg="CreateContainer within sandbox \"4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 12 20:56:26.658695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1695854987.mount: Deactivated successfully. 
Nov 12 20:56:26.660482 containerd[1461]: time="2024-11-12T20:56:26.660443725Z" level=info msg="CreateContainer within sandbox \"4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\"" Nov 12 20:56:26.660976 containerd[1461]: time="2024-11-12T20:56:26.660930580Z" level=info msg="StartContainer for \"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\"" Nov 12 20:56:26.688507 systemd[1]: Started cri-containerd-690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320.scope - libcontainer container 690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320. Nov 12 20:56:26.727025 containerd[1461]: time="2024-11-12T20:56:26.726876829Z" level=info msg="StartContainer for \"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\" returns successfully" Nov 12 20:56:27.228383 kubelet[2523]: E1112 20:56:27.228278 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:27.234970 kubelet[2523]: E1112 20:56:27.234120 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:27.236637 containerd[1461]: time="2024-11-12T20:56:27.236476651Z" level=info msg="CreateContainer within sandbox \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 20:56:27.248490 kubelet[2523]: I1112 20:56:27.248229 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-2zh85" podStartSLOduration=3.930484869 podStartE2EDuration="17.248200781s" podCreationTimestamp="2024-11-12 20:56:10 +0000 UTC" 
firstStartedPulling="2024-11-12 20:56:13.324714908 +0000 UTC m=+8.242464759" lastFinishedPulling="2024-11-12 20:56:26.64243083 +0000 UTC m=+21.560180671" observedRunningTime="2024-11-12 20:56:27.246774921 +0000 UTC m=+22.164524772" watchObservedRunningTime="2024-11-12 20:56:27.248200781 +0000 UTC m=+22.165950652" Nov 12 20:56:27.267022 containerd[1461]: time="2024-11-12T20:56:27.266900883Z" level=info msg="CreateContainer within sandbox \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\"" Nov 12 20:56:27.270543 containerd[1461]: time="2024-11-12T20:56:27.267916663Z" level=info msg="StartContainer for \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\"" Nov 12 20:56:27.344834 systemd[1]: Started cri-containerd-1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e.scope - libcontainer container 1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e. Nov 12 20:56:27.389366 containerd[1461]: time="2024-11-12T20:56:27.388666840Z" level=info msg="StartContainer for \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\" returns successfully" Nov 12 20:56:27.610216 kubelet[2523]: I1112 20:56:27.610180 2523 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Nov 12 20:56:27.660805 systemd[1]: Created slice kubepods-burstable-podb2632412_81a2_450b_8280_6ec8a1d5a5c0.slice - libcontainer container kubepods-burstable-podb2632412_81a2_450b_8280_6ec8a1d5a5c0.slice. Nov 12 20:56:27.668541 systemd[1]: Created slice kubepods-burstable-pod54ba0e43_368f_4b41_9b4f_6009634aff4a.slice - libcontainer container kubepods-burstable-pod54ba0e43_368f_4b41_9b4f_6009634aff4a.slice. 
Nov 12 20:56:27.787569 kubelet[2523]: I1112 20:56:27.787508 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2632412-81a2-450b-8280-6ec8a1d5a5c0-config-volume\") pod \"coredns-6f6b679f8f-5srw4\" (UID: \"b2632412-81a2-450b-8280-6ec8a1d5a5c0\") " pod="kube-system/coredns-6f6b679f8f-5srw4" Nov 12 20:56:27.787569 kubelet[2523]: I1112 20:56:27.787567 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54ba0e43-368f-4b41-9b4f-6009634aff4a-config-volume\") pod \"coredns-6f6b679f8f-lk2zn\" (UID: \"54ba0e43-368f-4b41-9b4f-6009634aff4a\") " pod="kube-system/coredns-6f6b679f8f-lk2zn" Nov 12 20:56:27.787766 kubelet[2523]: I1112 20:56:27.787592 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-899jb\" (UniqueName: \"kubernetes.io/projected/b2632412-81a2-450b-8280-6ec8a1d5a5c0-kube-api-access-899jb\") pod \"coredns-6f6b679f8f-5srw4\" (UID: \"b2632412-81a2-450b-8280-6ec8a1d5a5c0\") " pod="kube-system/coredns-6f6b679f8f-5srw4" Nov 12 20:56:27.787766 kubelet[2523]: I1112 20:56:27.787616 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m76jc\" (UniqueName: \"kubernetes.io/projected/54ba0e43-368f-4b41-9b4f-6009634aff4a-kube-api-access-m76jc\") pod \"coredns-6f6b679f8f-lk2zn\" (UID: \"54ba0e43-368f-4b41-9b4f-6009634aff4a\") " pod="kube-system/coredns-6f6b679f8f-lk2zn" Nov 12 20:56:27.968427 kubelet[2523]: E1112 20:56:27.968239 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:27.969618 containerd[1461]: time="2024-11-12T20:56:27.969550917Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-5srw4,Uid:b2632412-81a2-450b-8280-6ec8a1d5a5c0,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:27.971731 kubelet[2523]: E1112 20:56:27.971650 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:27.972350 containerd[1461]: time="2024-11-12T20:56:27.972283384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lk2zn,Uid:54ba0e43-368f-4b41-9b4f-6009634aff4a,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:28.238665 kubelet[2523]: E1112 20:56:28.238549 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:28.239065 kubelet[2523]: E1112 20:56:28.238842 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:28.356576 kubelet[2523]: I1112 20:56:28.356514 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-44pcc" podStartSLOduration=7.162842312 podStartE2EDuration="18.356494789s" podCreationTimestamp="2024-11-12 20:56:10 +0000 UTC" firstStartedPulling="2024-11-12 20:56:11.253400752 +0000 UTC m=+6.171150593" lastFinishedPulling="2024-11-12 20:56:22.447053229 +0000 UTC m=+17.364803070" observedRunningTime="2024-11-12 20:56:28.355872259 +0000 UTC m=+23.273622110" watchObservedRunningTime="2024-11-12 20:56:28.356494789 +0000 UTC m=+23.274244630" Nov 12 20:56:29.240866 kubelet[2523]: E1112 20:56:29.240829 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:30.243077 kubelet[2523]: E1112 20:56:30.243029 2523 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:30.476228 systemd-networkd[1385]: cilium_host: Link UP Nov 12 20:56:30.476465 systemd-networkd[1385]: cilium_net: Link UP Nov 12 20:56:30.476469 systemd-networkd[1385]: cilium_net: Gained carrier Nov 12 20:56:30.476741 systemd-networkd[1385]: cilium_host: Gained carrier Nov 12 20:56:30.477731 systemd-networkd[1385]: cilium_host: Gained IPv6LL Nov 12 20:56:30.590074 systemd-networkd[1385]: cilium_vxlan: Link UP Nov 12 20:56:30.590284 systemd-networkd[1385]: cilium_vxlan: Gained carrier Nov 12 20:56:30.838395 kernel: NET: Registered PF_ALG protocol family Nov 12 20:56:31.421520 systemd-networkd[1385]: cilium_net: Gained IPv6LL Nov 12 20:56:31.560169 systemd-networkd[1385]: lxc_health: Link UP Nov 12 20:56:31.566266 systemd-networkd[1385]: lxc_health: Gained carrier Nov 12 20:56:32.018241 systemd-networkd[1385]: lxcf0bb0b7d9522: Link UP Nov 12 20:56:32.078698 kernel: eth0: renamed from tmp475cb Nov 12 20:56:32.092957 systemd-networkd[1385]: lxc25f9343cd2ef: Link UP Nov 12 20:56:32.095104 kernel: eth0: renamed from tmp4efce Nov 12 20:56:32.101548 systemd-networkd[1385]: lxcf0bb0b7d9522: Gained carrier Nov 12 20:56:32.101989 systemd-networkd[1385]: lxc25f9343cd2ef: Gained carrier Nov 12 20:56:32.189549 systemd-networkd[1385]: cilium_vxlan: Gained IPv6LL Nov 12 20:56:33.149575 systemd-networkd[1385]: lxc_health: Gained IPv6LL Nov 12 20:56:33.170173 kubelet[2523]: E1112 20:56:33.170133 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:33.250780 kubelet[2523]: E1112 20:56:33.250731 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 
20:56:33.341518 systemd-networkd[1385]: lxcf0bb0b7d9522: Gained IPv6LL Nov 12 20:56:33.789928 systemd-networkd[1385]: lxc25f9343cd2ef: Gained IPv6LL Nov 12 20:56:34.252222 kubelet[2523]: E1112 20:56:34.252184 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:35.141694 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:50606.service - OpenSSH per-connection server daemon (10.0.0.1:50606). Nov 12 20:56:35.186032 sshd[3736]: Accepted publickey for core from 10.0.0.1 port 50606 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:56:35.187790 sshd[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:35.192582 systemd-logind[1444]: New session 8 of user core. Nov 12 20:56:35.199521 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 20:56:35.359418 sshd[3736]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:35.362419 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:50606.service: Deactivated successfully. Nov 12 20:56:35.364653 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:56:35.366265 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:56:35.367693 systemd-logind[1444]: Removed session 8. Nov 12 20:56:36.128368 containerd[1461]: time="2024-11-12T20:56:36.127550887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:36.128368 containerd[1461]: time="2024-11-12T20:56:36.127637190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:36.128368 containerd[1461]: time="2024-11-12T20:56:36.127654042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:36.128368 containerd[1461]: time="2024-11-12T20:56:36.127800317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:36.128368 containerd[1461]: time="2024-11-12T20:56:36.128071375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:36.128368 containerd[1461]: time="2024-11-12T20:56:36.128123072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:36.128913 containerd[1461]: time="2024-11-12T20:56:36.128164480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:36.128913 containerd[1461]: time="2024-11-12T20:56:36.128274608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:36.158657 systemd[1]: Started cri-containerd-475cbace746243b46e376a0a8da375b3f0042737dc35d0e531e6273630351b9b.scope - libcontainer container 475cbace746243b46e376a0a8da375b3f0042737dc35d0e531e6273630351b9b. Nov 12 20:56:36.161001 systemd[1]: Started cri-containerd-4efce41ee9929716610cc6867915b3bdf7362d9bd68f638561a7f7c82bb82bae.scope - libcontainer container 4efce41ee9929716610cc6867915b3bdf7362d9bd68f638561a7f7c82bb82bae. 
Nov 12 20:56:36.174262 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:56:36.176650 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:56:36.202837 containerd[1461]: time="2024-11-12T20:56:36.202768052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5srw4,Uid:b2632412-81a2-450b-8280-6ec8a1d5a5c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4efce41ee9929716610cc6867915b3bdf7362d9bd68f638561a7f7c82bb82bae\"" Nov 12 20:56:36.208222 containerd[1461]: time="2024-11-12T20:56:36.208180138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lk2zn,Uid:54ba0e43-368f-4b41-9b4f-6009634aff4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"475cbace746243b46e376a0a8da375b3f0042737dc35d0e531e6273630351b9b\"" Nov 12 20:56:36.210832 kubelet[2523]: E1112 20:56:36.210789 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:36.211283 kubelet[2523]: E1112 20:56:36.210792 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:36.215063 containerd[1461]: time="2024-11-12T20:56:36.215028913Z" level=info msg="CreateContainer within sandbox \"4efce41ee9929716610cc6867915b3bdf7362d9bd68f638561a7f7c82bb82bae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:56:36.215146 containerd[1461]: time="2024-11-12T20:56:36.215101559Z" level=info msg="CreateContainer within sandbox \"475cbace746243b46e376a0a8da375b3f0042737dc35d0e531e6273630351b9b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:56:36.447998 containerd[1461]: 
time="2024-11-12T20:56:36.447485726Z" level=info msg="CreateContainer within sandbox \"4efce41ee9929716610cc6867915b3bdf7362d9bd68f638561a7f7c82bb82bae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9cdac8c43dfc048dabb3271962247d4706092d3bcd8e97a862fe6e17ac7b2ae9\"" Nov 12 20:56:36.448833 containerd[1461]: time="2024-11-12T20:56:36.448479934Z" level=info msg="StartContainer for \"9cdac8c43dfc048dabb3271962247d4706092d3bcd8e97a862fe6e17ac7b2ae9\"" Nov 12 20:56:36.448833 containerd[1461]: time="2024-11-12T20:56:36.448632130Z" level=info msg="CreateContainer within sandbox \"475cbace746243b46e376a0a8da375b3f0042737dc35d0e531e6273630351b9b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2df2b82d267e35ea5ca6aa68e757f74b7893eee2ab9848960680fa7048f3ed41\"" Nov 12 20:56:36.449256 containerd[1461]: time="2024-11-12T20:56:36.449230754Z" level=info msg="StartContainer for \"2df2b82d267e35ea5ca6aa68e757f74b7893eee2ab9848960680fa7048f3ed41\"" Nov 12 20:56:36.478547 systemd[1]: Started cri-containerd-2df2b82d267e35ea5ca6aa68e757f74b7893eee2ab9848960680fa7048f3ed41.scope - libcontainer container 2df2b82d267e35ea5ca6aa68e757f74b7893eee2ab9848960680fa7048f3ed41. Nov 12 20:56:36.482594 systemd[1]: Started cri-containerd-9cdac8c43dfc048dabb3271962247d4706092d3bcd8e97a862fe6e17ac7b2ae9.scope - libcontainer container 9cdac8c43dfc048dabb3271962247d4706092d3bcd8e97a862fe6e17ac7b2ae9. 
Nov 12 20:56:36.521513 containerd[1461]: time="2024-11-12T20:56:36.521435619Z" level=info msg="StartContainer for \"2df2b82d267e35ea5ca6aa68e757f74b7893eee2ab9848960680fa7048f3ed41\" returns successfully" Nov 12 20:56:36.521688 containerd[1461]: time="2024-11-12T20:56:36.521439517Z" level=info msg="StartContainer for \"9cdac8c43dfc048dabb3271962247d4706092d3bcd8e97a862fe6e17ac7b2ae9\" returns successfully" Nov 12 20:56:37.268107 kubelet[2523]: E1112 20:56:37.268056 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:37.269258 kubelet[2523]: E1112 20:56:37.268257 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:37.384301 kubelet[2523]: I1112 20:56:37.384222 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5srw4" podStartSLOduration=27.384191874 podStartE2EDuration="27.384191874s" podCreationTimestamp="2024-11-12 20:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:37.36978381 +0000 UTC m=+32.287533661" watchObservedRunningTime="2024-11-12 20:56:37.384191874 +0000 UTC m=+32.301941715" Nov 12 20:56:37.396508 kubelet[2523]: I1112 20:56:37.396392 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lk2zn" podStartSLOduration=27.396366833 podStartE2EDuration="27.396366833s" podCreationTimestamp="2024-11-12 20:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:37.384737379 +0000 UTC m=+32.302487220" watchObservedRunningTime="2024-11-12 20:56:37.396366833 +0000 UTC 
m=+32.314116674" Nov 12 20:56:38.270586 kubelet[2523]: E1112 20:56:38.270537 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:38.270586 kubelet[2523]: E1112 20:56:38.270537 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:39.271998 kubelet[2523]: E1112 20:56:39.271958 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:39.271998 kubelet[2523]: E1112 20:56:39.271974 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:56:40.377532 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:39810.service - OpenSSH per-connection server daemon (10.0.0.1:39810). Nov 12 20:56:40.421307 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 39810 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:56:40.423077 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:40.427844 systemd-logind[1444]: New session 9 of user core. Nov 12 20:56:40.440589 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:56:40.571804 sshd[3927]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:40.576093 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:39810.service: Deactivated successfully. Nov 12 20:56:40.578410 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:56:40.579134 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:56:40.580101 systemd-logind[1444]: Removed session 9. 
Nov 12 20:56:45.584697 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:47062.service - OpenSSH per-connection server daemon (10.0.0.1:47062). Nov 12 20:56:45.618764 sshd[3946]: Accepted publickey for core from 10.0.0.1 port 47062 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:56:45.620540 sshd[3946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:45.625093 systemd-logind[1444]: New session 10 of user core. Nov 12 20:56:45.640500 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:56:45.759672 sshd[3946]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:45.763921 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:47062.service: Deactivated successfully. Nov 12 20:56:45.765827 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:56:45.766495 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:56:45.767416 systemd-logind[1444]: Removed session 10. Nov 12 20:56:50.776641 systemd[1]: Started sshd@10-10.0.0.133:22-10.0.0.1:47078.service - OpenSSH per-connection server daemon (10.0.0.1:47078). Nov 12 20:56:50.809796 sshd[3961]: Accepted publickey for core from 10.0.0.1 port 47078 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:56:50.811989 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:50.817642 systemd-logind[1444]: New session 11 of user core. Nov 12 20:56:50.827663 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:56:50.947477 sshd[3961]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:50.952648 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:47078.service: Deactivated successfully. Nov 12 20:56:50.955372 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:56:50.956149 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. 
Nov 12 20:56:50.957563 systemd-logind[1444]: Removed session 11. Nov 12 20:56:55.964444 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:53264.service - OpenSSH per-connection server daemon (10.0.0.1:53264). Nov 12 20:56:56.000837 sshd[3976]: Accepted publickey for core from 10.0.0.1 port 53264 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:56:56.002505 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:56.006468 systemd-logind[1444]: New session 12 of user core. Nov 12 20:56:56.016574 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:56:56.132907 sshd[3976]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:56.143315 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:53264.service: Deactivated successfully. Nov 12 20:56:56.145419 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:56:56.147402 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:56:56.156702 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:53276.service - OpenSSH per-connection server daemon (10.0.0.1:53276). Nov 12 20:56:56.157839 systemd-logind[1444]: Removed session 12. Nov 12 20:56:56.184412 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 53276 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:56:56.186112 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:56.190377 systemd-logind[1444]: New session 13 of user core. Nov 12 20:56:56.197506 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:56:56.355115 sshd[3992]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:56.368212 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:53276.service: Deactivated successfully. Nov 12 20:56:56.372482 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:56:56.377766 systemd-logind[1444]: Session 13 logged out. 
Waiting for processes to exit. Nov 12 20:56:56.384307 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:53292.service - OpenSSH per-connection server daemon (10.0.0.1:53292). Nov 12 20:56:56.386152 systemd-logind[1444]: Removed session 13. Nov 12 20:56:56.428749 sshd[4005]: Accepted publickey for core from 10.0.0.1 port 53292 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:56:56.430663 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:56.436007 systemd-logind[1444]: New session 14 of user core. Nov 12 20:56:56.449639 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 20:56:56.588544 sshd[4005]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:56.593617 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:53292.service: Deactivated successfully. Nov 12 20:56:56.595959 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:56:56.596743 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:56:56.597742 systemd-logind[1444]: Removed session 14. Nov 12 20:57:01.600536 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:53304.service - OpenSSH per-connection server daemon (10.0.0.1:53304). Nov 12 20:57:01.632552 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 53304 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:01.634218 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:01.638089 systemd-logind[1444]: New session 15 of user core. Nov 12 20:57:01.647475 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 20:57:01.768994 sshd[4019]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:01.773295 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:53304.service: Deactivated successfully. Nov 12 20:57:01.775771 systemd[1]: session-15.scope: Deactivated successfully. 
Nov 12 20:57:01.776521 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:57:01.777892 systemd-logind[1444]: Removed session 15. Nov 12 20:57:06.780109 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:52412.service - OpenSSH per-connection server daemon (10.0.0.1:52412). Nov 12 20:57:06.810863 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 52412 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:06.812244 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:06.815919 systemd-logind[1444]: New session 16 of user core. Nov 12 20:57:06.827466 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 20:57:06.926011 sshd[4035]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:06.935013 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:52412.service: Deactivated successfully. Nov 12 20:57:06.936771 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 20:57:06.938253 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Nov 12 20:57:06.945625 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:52414.service - OpenSSH per-connection server daemon (10.0.0.1:52414). Nov 12 20:57:06.946666 systemd-logind[1444]: Removed session 16. Nov 12 20:57:06.971683 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 52414 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:06.973701 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:06.978769 systemd-logind[1444]: New session 17 of user core. Nov 12 20:57:06.990057 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 20:57:07.399609 sshd[4049]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:07.411084 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:52414.service: Deactivated successfully. 
Nov 12 20:57:07.412927 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 20:57:07.415294 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Nov 12 20:57:07.426730 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:52430.service - OpenSSH per-connection server daemon (10.0.0.1:52430). Nov 12 20:57:07.427796 systemd-logind[1444]: Removed session 17. Nov 12 20:57:07.456388 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 52430 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:07.457973 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:07.462216 systemd-logind[1444]: New session 18 of user core. Nov 12 20:57:07.470467 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 20:57:09.165279 sshd[4061]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:09.176118 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:52430.service: Deactivated successfully. Nov 12 20:57:09.178196 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 20:57:09.180300 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Nov 12 20:57:09.189866 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:52442.service - OpenSSH per-connection server daemon (10.0.0.1:52442). Nov 12 20:57:09.191032 systemd-logind[1444]: Removed session 18. Nov 12 20:57:09.220052 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 52442 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:09.221818 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:09.226186 systemd-logind[1444]: New session 19 of user core. Nov 12 20:57:09.236573 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 12 20:57:09.462029 sshd[4081]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:09.474952 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:52442.service: Deactivated successfully. Nov 12 20:57:09.477535 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 20:57:09.479096 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Nov 12 20:57:09.491585 systemd[1]: Started sshd@19-10.0.0.133:22-10.0.0.1:52448.service - OpenSSH per-connection server daemon (10.0.0.1:52448). Nov 12 20:57:09.492568 systemd-logind[1444]: Removed session 19. Nov 12 20:57:09.519535 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 52448 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:09.521220 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:09.525164 systemd-logind[1444]: New session 20 of user core. Nov 12 20:57:09.532492 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 20:57:09.638459 sshd[4094]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:09.642598 systemd[1]: sshd@19-10.0.0.133:22-10.0.0.1:52448.service: Deactivated successfully. Nov 12 20:57:09.644697 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 20:57:09.645348 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Nov 12 20:57:09.646305 systemd-logind[1444]: Removed session 20. Nov 12 20:57:14.651066 systemd[1]: Started sshd@20-10.0.0.133:22-10.0.0.1:52460.service - OpenSSH per-connection server daemon (10.0.0.1:52460). Nov 12 20:57:14.686601 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 52460 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:14.688256 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:14.692211 systemd-logind[1444]: New session 21 of user core. 
Nov 12 20:57:14.703496 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 20:57:14.807534 sshd[4111]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:14.811057 systemd[1]: sshd@20-10.0.0.133:22-10.0.0.1:52460.service: Deactivated successfully. Nov 12 20:57:14.812896 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 20:57:14.813766 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Nov 12 20:57:14.814637 systemd-logind[1444]: Removed session 21. Nov 12 20:57:17.161178 kubelet[2523]: E1112 20:57:17.161128 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:19.819680 systemd[1]: Started sshd@21-10.0.0.133:22-10.0.0.1:55798.service - OpenSSH per-connection server daemon (10.0.0.1:55798). Nov 12 20:57:19.852239 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 55798 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:19.853975 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:19.858390 systemd-logind[1444]: New session 22 of user core. Nov 12 20:57:19.868496 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 20:57:19.980321 sshd[4128]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:19.984730 systemd[1]: sshd@21-10.0.0.133:22-10.0.0.1:55798.service: Deactivated successfully. Nov 12 20:57:19.986839 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 20:57:19.987457 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Nov 12 20:57:19.988273 systemd-logind[1444]: Removed session 22. 
Nov 12 20:57:23.160887 kubelet[2523]: E1112 20:57:23.160840 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:24.991357 systemd[1]: Started sshd@22-10.0.0.133:22-10.0.0.1:55806.service - OpenSSH per-connection server daemon (10.0.0.1:55806). Nov 12 20:57:25.026275 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 55806 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:25.027972 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:25.031822 systemd-logind[1444]: New session 23 of user core. Nov 12 20:57:25.045466 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 12 20:57:25.150577 sshd[4142]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:25.154259 systemd[1]: sshd@22-10.0.0.133:22-10.0.0.1:55806.service: Deactivated successfully. Nov 12 20:57:25.156423 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 20:57:25.157010 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit. Nov 12 20:57:25.157830 systemd-logind[1444]: Removed session 23. Nov 12 20:57:29.160937 kubelet[2523]: E1112 20:57:29.160832 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:30.162856 systemd[1]: Started sshd@23-10.0.0.133:22-10.0.0.1:55548.service - OpenSSH per-connection server daemon (10.0.0.1:55548). Nov 12 20:57:30.197251 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 55548 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:30.199035 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:30.203417 systemd-logind[1444]: New session 24 of user core. 
Nov 12 20:57:30.215582 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 20:57:30.331246 sshd[4156]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:30.352698 systemd[1]: sshd@23-10.0.0.133:22-10.0.0.1:55548.service: Deactivated successfully. Nov 12 20:57:30.355917 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 20:57:30.358886 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit. Nov 12 20:57:30.369871 systemd[1]: Started sshd@24-10.0.0.133:22-10.0.0.1:55564.service - OpenSSH per-connection server daemon (10.0.0.1:55564). Nov 12 20:57:30.372396 systemd-logind[1444]: Removed session 24. Nov 12 20:57:30.402309 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 55564 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:30.404234 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:30.408840 systemd-logind[1444]: New session 25 of user core. Nov 12 20:57:30.417538 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 12 20:57:31.889468 containerd[1461]: time="2024-11-12T20:57:31.889282007Z" level=info msg="StopContainer for \"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\" with timeout 30 (s)" Nov 12 20:57:31.890137 containerd[1461]: time="2024-11-12T20:57:31.889848189Z" level=info msg="Stop container \"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\" with signal terminated" Nov 12 20:57:31.906399 systemd[1]: cri-containerd-690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320.scope: Deactivated successfully. 
Nov 12 20:57:31.918310 containerd[1461]: time="2024-11-12T20:57:31.918241928Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:57:31.929711 containerd[1461]: time="2024-11-12T20:57:31.929662329Z" level=info msg="StopContainer for \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\" with timeout 2 (s)" Nov 12 20:57:31.930055 containerd[1461]: time="2024-11-12T20:57:31.930021389Z" level=info msg="Stop container \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\" with signal terminated" Nov 12 20:57:31.934063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320-rootfs.mount: Deactivated successfully. Nov 12 20:57:31.937695 systemd-networkd[1385]: lxc_health: Link DOWN Nov 12 20:57:31.938035 systemd-networkd[1385]: lxc_health: Lost carrier Nov 12 20:57:31.946750 containerd[1461]: time="2024-11-12T20:57:31.946653400Z" level=info msg="shim disconnected" id=690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320 namespace=k8s.io Nov 12 20:57:31.946750 containerd[1461]: time="2024-11-12T20:57:31.946748991Z" level=warning msg="cleaning up after shim disconnected" id=690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320 namespace=k8s.io Nov 12 20:57:31.946975 containerd[1461]: time="2024-11-12T20:57:31.946762947Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:57:31.966982 systemd[1]: cri-containerd-1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e.scope: Deactivated successfully. Nov 12 20:57:31.967694 systemd[1]: cri-containerd-1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e.scope: Consumed 7.639s CPU time. 
Nov 12 20:57:31.970447 containerd[1461]: time="2024-11-12T20:57:31.969900861Z" level=info msg="StopContainer for \"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\" returns successfully" Nov 12 20:57:31.974347 containerd[1461]: time="2024-11-12T20:57:31.974291756Z" level=info msg="StopPodSandbox for \"4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5\"" Nov 12 20:57:31.975075 containerd[1461]: time="2024-11-12T20:57:31.974374462Z" level=info msg="Container to stop \"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:57:31.977035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5-shm.mount: Deactivated successfully. Nov 12 20:57:31.982831 systemd[1]: cri-containerd-4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5.scope: Deactivated successfully. Nov 12 20:57:31.994635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e-rootfs.mount: Deactivated successfully. 
Nov 12 20:57:32.004734 containerd[1461]: time="2024-11-12T20:57:32.004643464Z" level=info msg="shim disconnected" id=1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e namespace=k8s.io Nov 12 20:57:32.005078 containerd[1461]: time="2024-11-12T20:57:32.005052229Z" level=warning msg="cleaning up after shim disconnected" id=1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e namespace=k8s.io Nov 12 20:57:32.005078 containerd[1461]: time="2024-11-12T20:57:32.005072347Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:57:32.006377 containerd[1461]: time="2024-11-12T20:57:32.005836335Z" level=info msg="shim disconnected" id=4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5 namespace=k8s.io Nov 12 20:57:32.006377 containerd[1461]: time="2024-11-12T20:57:32.006236533Z" level=warning msg="cleaning up after shim disconnected" id=4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5 namespace=k8s.io Nov 12 20:57:32.006377 containerd[1461]: time="2024-11-12T20:57:32.006250580Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:57:32.007037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5-rootfs.mount: Deactivated successfully. 
Nov 12 20:57:32.027956 containerd[1461]: time="2024-11-12T20:57:32.027884494Z" level=info msg="StopContainer for \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\" returns successfully" Nov 12 20:57:32.028678 containerd[1461]: time="2024-11-12T20:57:32.028641238Z" level=info msg="StopPodSandbox for \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\"" Nov 12 20:57:32.028756 containerd[1461]: time="2024-11-12T20:57:32.028693016Z" level=info msg="Container to stop \"426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:57:32.028756 containerd[1461]: time="2024-11-12T20:57:32.028708777Z" level=info msg="Container to stop \"9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:57:32.028756 containerd[1461]: time="2024-11-12T20:57:32.028720800Z" level=info msg="Container to stop \"86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:57:32.028756 containerd[1461]: time="2024-11-12T20:57:32.028732802Z" level=info msg="Container to stop \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:57:32.028756 containerd[1461]: time="2024-11-12T20:57:32.028745235Z" level=info msg="Container to stop \"e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:57:32.035278 containerd[1461]: time="2024-11-12T20:57:32.035202696Z" level=info msg="TearDown network for sandbox \"4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5\" successfully" Nov 12 20:57:32.035278 containerd[1461]: time="2024-11-12T20:57:32.035260946Z" level=info msg="StopPodSandbox for 
\"4fe3b148585259278944fdc9d8ad1caf61ad87a6945f729053417a18c6c018c5\" returns successfully" Nov 12 20:57:32.036208 systemd[1]: cri-containerd-c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e.scope: Deactivated successfully. Nov 12 20:57:32.070063 containerd[1461]: time="2024-11-12T20:57:32.069947349Z" level=info msg="shim disconnected" id=c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e namespace=k8s.io Nov 12 20:57:32.070063 containerd[1461]: time="2024-11-12T20:57:32.070040525Z" level=warning msg="cleaning up after shim disconnected" id=c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e namespace=k8s.io Nov 12 20:57:32.070063 containerd[1461]: time="2024-11-12T20:57:32.070057337Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:57:32.089860 containerd[1461]: time="2024-11-12T20:57:32.089781432Z" level=info msg="TearDown network for sandbox \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\" successfully" Nov 12 20:57:32.089860 containerd[1461]: time="2024-11-12T20:57:32.089843560Z" level=info msg="StopPodSandbox for \"c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e\" returns successfully" Nov 12 20:57:32.186774 kubelet[2523]: I1112 20:57:32.186587 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be7b042f-12a1-49f2-bc59-5317b3dc38ab-hubble-tls\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.186774 kubelet[2523]: I1112 20:57:32.186637 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-bpf-maps\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.186774 kubelet[2523]: I1112 20:57:32.186656 2523 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-host-proc-sys-kernel\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.186774 kubelet[2523]: I1112 20:57:32.186673 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cni-path\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.186774 kubelet[2523]: I1112 20:57:32.186687 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cilium-run\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.186774 kubelet[2523]: I1112 20:57:32.186707 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be7b042f-12a1-49f2-bc59-5317b3dc38ab-clustermesh-secrets\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.187665 kubelet[2523]: I1112 20:57:32.186726 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2-cilium-config-path\") pod \"1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2\" (UID: \"1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2\") " Nov 12 20:57:32.187665 kubelet[2523]: I1112 20:57:32.186740 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cilium-cgroup\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: 
\"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.187665 kubelet[2523]: I1112 20:57:32.186755 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-lib-modules\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.187665 kubelet[2523]: I1112 20:57:32.186769 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-hostproc\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.187665 kubelet[2523]: I1112 20:57:32.186784 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cilium-config-path\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.187665 kubelet[2523]: I1112 20:57:32.186765 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:57:32.188081 kubelet[2523]: I1112 20:57:32.186800 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h64w2\" (UniqueName: \"kubernetes.io/projected/be7b042f-12a1-49f2-bc59-5317b3dc38ab-kube-api-access-h64w2\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.188081 kubelet[2523]: I1112 20:57:32.186874 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-etc-cni-netd\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.188081 kubelet[2523]: I1112 20:57:32.186897 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsxtw\" (UniqueName: \"kubernetes.io/projected/1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2-kube-api-access-gsxtw\") pod \"1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2\" (UID: \"1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2\") " Nov 12 20:57:32.188081 kubelet[2523]: I1112 20:57:32.186917 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-host-proc-sys-net\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.188081 kubelet[2523]: I1112 20:57:32.186934 2523 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-xtables-lock\") pod \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\" (UID: \"be7b042f-12a1-49f2-bc59-5317b3dc38ab\") " Nov 12 20:57:32.188081 kubelet[2523]: I1112 20:57:32.187001 2523 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.188280 kubelet[2523]: I1112 20:57:32.187530 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:57:32.188280 kubelet[2523]: I1112 20:57:32.187575 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cni-path" (OuterVolumeSpecName: "cni-path") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:57:32.188280 kubelet[2523]: I1112 20:57:32.187594 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:57:32.191561 kubelet[2523]: I1112 20:57:32.191504 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:57:32.192108 kubelet[2523]: I1112 20:57:32.192059 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be7b042f-12a1-49f2-bc59-5317b3dc38ab-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 20:57:32.192222 kubelet[2523]: I1112 20:57:32.192073 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2" (UID: "1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 20:57:32.192395 kubelet[2523]: I1112 20:57:32.192361 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:57:32.192395 kubelet[2523]: I1112 20:57:32.192401 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:57:32.192551 kubelet[2523]: I1112 20:57:32.192440 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-hostproc" (OuterVolumeSpecName: "hostproc") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:57:32.192551 kubelet[2523]: I1112 20:57:32.192469 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:57:32.195600 kubelet[2523]: I1112 20:57:32.195545 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7b042f-12a1-49f2-bc59-5317b3dc38ab-kube-api-access-h64w2" (OuterVolumeSpecName: "kube-api-access-h64w2") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "kube-api-access-h64w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 20:57:32.195700 kubelet[2523]: I1112 20:57:32.195645 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:57:32.197501 kubelet[2523]: I1112 20:57:32.195685 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2-kube-api-access-gsxtw" (OuterVolumeSpecName: "kube-api-access-gsxtw") pod "1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2" (UID: "1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2"). InnerVolumeSpecName "kube-api-access-gsxtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 20:57:32.197501 kubelet[2523]: I1112 20:57:32.195834 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7b042f-12a1-49f2-bc59-5317b3dc38ab-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 20:57:32.197501 kubelet[2523]: I1112 20:57:32.197075 2523 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "be7b042f-12a1-49f2-bc59-5317b3dc38ab" (UID: "be7b042f-12a1-49f2-bc59-5317b3dc38ab"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 20:57:32.287704 kubelet[2523]: I1112 20:57:32.287619 2523 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.287704 kubelet[2523]: I1112 20:57:32.287682 2523 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.287704 kubelet[2523]: I1112 20:57:32.287693 2523 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be7b042f-12a1-49f2-bc59-5317b3dc38ab-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.287704 kubelet[2523]: I1112 20:57:32.287708 2523 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.287704 kubelet[2523]: I1112 20:57:32.287720 2523 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.287704 kubelet[2523]: I1112 20:57:32.287732 2523 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.288089 kubelet[2523]: I1112 20:57:32.287744 2523 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be7b042f-12a1-49f2-bc59-5317b3dc38ab-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.288089 kubelet[2523]: I1112 
20:57:32.287756 2523 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.288089 kubelet[2523]: I1112 20:57:32.287767 2523 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.288089 kubelet[2523]: I1112 20:57:32.287777 2523 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.288089 kubelet[2523]: I1112 20:57:32.287787 2523 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.288089 kubelet[2523]: I1112 20:57:32.287799 2523 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.288089 kubelet[2523]: I1112 20:57:32.287810 2523 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be7b042f-12a1-49f2-bc59-5317b3dc38ab-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.288089 kubelet[2523]: I1112 20:57:32.287820 2523 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gsxtw\" (UniqueName: \"kubernetes.io/projected/1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2-kube-api-access-gsxtw\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.288377 kubelet[2523]: I1112 20:57:32.287833 2523 reconciler_common.go:288] "Volume detached for volume 
\"kube-api-access-h64w2\" (UniqueName: \"kubernetes.io/projected/be7b042f-12a1-49f2-bc59-5317b3dc38ab-kube-api-access-h64w2\") on node \"localhost\" DevicePath \"\"" Nov 12 20:57:32.420574 kubelet[2523]: I1112 20:57:32.420537 2523 scope.go:117] "RemoveContainer" containerID="1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e" Nov 12 20:57:32.424401 containerd[1461]: time="2024-11-12T20:57:32.424371974Z" level=info msg="RemoveContainer for \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\"" Nov 12 20:57:32.427238 systemd[1]: Removed slice kubepods-burstable-podbe7b042f_12a1_49f2_bc59_5317b3dc38ab.slice - libcontainer container kubepods-burstable-podbe7b042f_12a1_49f2_bc59_5317b3dc38ab.slice. Nov 12 20:57:32.427464 systemd[1]: kubepods-burstable-podbe7b042f_12a1_49f2_bc59_5317b3dc38ab.slice: Consumed 7.760s CPU time. Nov 12 20:57:32.430543 systemd[1]: Removed slice kubepods-besteffort-pod1052b17c_b8b0_4bc2_a2e4_496ea70c4ec2.slice - libcontainer container kubepods-besteffort-pod1052b17c_b8b0_4bc2_a2e4_496ea70c4ec2.slice. 
Nov 12 20:57:32.437014 containerd[1461]: time="2024-11-12T20:57:32.436836779Z" level=info msg="RemoveContainer for \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\" returns successfully" Nov 12 20:57:32.437553 kubelet[2523]: I1112 20:57:32.437516 2523 scope.go:117] "RemoveContainer" containerID="86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143" Nov 12 20:57:32.439041 containerd[1461]: time="2024-11-12T20:57:32.438998066Z" level=info msg="RemoveContainer for \"86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143\"" Nov 12 20:57:32.443607 containerd[1461]: time="2024-11-12T20:57:32.443556856Z" level=info msg="RemoveContainer for \"86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143\" returns successfully" Nov 12 20:57:32.443864 kubelet[2523]: I1112 20:57:32.443821 2523 scope.go:117] "RemoveContainer" containerID="9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef" Nov 12 20:57:32.445799 containerd[1461]: time="2024-11-12T20:57:32.445724525Z" level=info msg="RemoveContainer for \"9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef\"" Nov 12 20:57:32.450448 containerd[1461]: time="2024-11-12T20:57:32.450397311Z" level=info msg="RemoveContainer for \"9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef\" returns successfully" Nov 12 20:57:32.450751 kubelet[2523]: I1112 20:57:32.450713 2523 scope.go:117] "RemoveContainer" containerID="426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e" Nov 12 20:57:32.452274 containerd[1461]: time="2024-11-12T20:57:32.452221128Z" level=info msg="RemoveContainer for \"426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e\"" Nov 12 20:57:32.456705 containerd[1461]: time="2024-11-12T20:57:32.456659441Z" level=info msg="RemoveContainer for \"426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e\" returns successfully" Nov 12 20:57:32.456951 kubelet[2523]: I1112 20:57:32.456915 2523 scope.go:117] 
"RemoveContainer" containerID="e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21" Nov 12 20:57:32.458481 containerd[1461]: time="2024-11-12T20:57:32.458433673Z" level=info msg="RemoveContainer for \"e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21\"" Nov 12 20:57:32.461796 containerd[1461]: time="2024-11-12T20:57:32.461761120Z" level=info msg="RemoveContainer for \"e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21\" returns successfully" Nov 12 20:57:32.462005 kubelet[2523]: I1112 20:57:32.461953 2523 scope.go:117] "RemoveContainer" containerID="1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e" Nov 12 20:57:32.465783 containerd[1461]: time="2024-11-12T20:57:32.465717860Z" level=error msg="ContainerStatus for \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\": not found" Nov 12 20:57:32.477884 kubelet[2523]: E1112 20:57:32.477811 2523 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\": not found" containerID="1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e" Nov 12 20:57:32.478119 kubelet[2523]: I1112 20:57:32.477873 2523 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e"} err="failed to get container status \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a4176d6b3fe63d2a1e67fb8d3145676874f42dad9d54e3c5879817f948e8f2e\": not found" Nov 12 20:57:32.478119 kubelet[2523]: I1112 20:57:32.478011 2523 scope.go:117] "RemoveContainer" 
containerID="86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143" Nov 12 20:57:32.478451 containerd[1461]: time="2024-11-12T20:57:32.478398825Z" level=error msg="ContainerStatus for \"86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143\": not found" Nov 12 20:57:32.478608 kubelet[2523]: E1112 20:57:32.478576 2523 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143\": not found" containerID="86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143" Nov 12 20:57:32.478655 kubelet[2523]: I1112 20:57:32.478615 2523 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143"} err="failed to get container status \"86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143\": rpc error: code = NotFound desc = an error occurred when try to find container \"86c8e3747a1b57066ab4a0bc17d850e8207e7edc066b3ba5cb2b15a49614e143\": not found" Nov 12 20:57:32.478655 kubelet[2523]: I1112 20:57:32.478641 2523 scope.go:117] "RemoveContainer" containerID="9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef" Nov 12 20:57:32.478883 containerd[1461]: time="2024-11-12T20:57:32.478842866Z" level=error msg="ContainerStatus for \"9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef\": not found" Nov 12 20:57:32.479081 kubelet[2523]: E1112 20:57:32.479047 2523 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef\": not found" containerID="9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef" Nov 12 20:57:32.479136 kubelet[2523]: I1112 20:57:32.479099 2523 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef"} err="failed to get container status \"9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"9efc050597fbc4963bcd02386809e84581bc1604c76dd4a1c3e438237abc73ef\": not found" Nov 12 20:57:32.479161 kubelet[2523]: I1112 20:57:32.479139 2523 scope.go:117] "RemoveContainer" containerID="426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e" Nov 12 20:57:32.479582 containerd[1461]: time="2024-11-12T20:57:32.479527384Z" level=error msg="ContainerStatus for \"426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e\": not found" Nov 12 20:57:32.479714 kubelet[2523]: E1112 20:57:32.479687 2523 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e\": not found" containerID="426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e" Nov 12 20:57:32.479714 kubelet[2523]: I1112 20:57:32.479710 2523 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e"} err="failed to get container status \"426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"426f68cd62be5f37d5568118cae9e5262f66f8ca49aff79b9085734d94147a8e\": not found" Nov 12 20:57:32.479835 kubelet[2523]: I1112 20:57:32.479725 2523 scope.go:117] "RemoveContainer" containerID="e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21" Nov 12 20:57:32.479932 containerd[1461]: time="2024-11-12T20:57:32.479899259Z" level=error msg="ContainerStatus for \"e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21\": not found" Nov 12 20:57:32.480084 kubelet[2523]: E1112 20:57:32.480056 2523 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21\": not found" containerID="e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21" Nov 12 20:57:32.480126 kubelet[2523]: I1112 20:57:32.480088 2523 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21"} err="failed to get container status \"e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21\": rpc error: code = NotFound desc = an error occurred when try to find container \"e63c81a3a7a9282c6f8278f7ffeb15e721a3ce472a9b7d5f3fe78551ac017e21\": not found" Nov 12 20:57:32.480126 kubelet[2523]: I1112 20:57:32.480111 2523 scope.go:117] "RemoveContainer" containerID="690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320" Nov 12 20:57:32.481584 containerd[1461]: time="2024-11-12T20:57:32.481543485Z" level=info msg="RemoveContainer for \"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\"" Nov 12 20:57:32.485871 containerd[1461]: time="2024-11-12T20:57:32.485836051Z" level=info msg="RemoveContainer for 
\"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\" returns successfully" Nov 12 20:57:32.486057 kubelet[2523]: I1112 20:57:32.486039 2523 scope.go:117] "RemoveContainer" containerID="690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320" Nov 12 20:57:32.486392 containerd[1461]: time="2024-11-12T20:57:32.486351729Z" level=error msg="ContainerStatus for \"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\": not found" Nov 12 20:57:32.486614 kubelet[2523]: E1112 20:57:32.486579 2523 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\": not found" containerID="690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320" Nov 12 20:57:32.486665 kubelet[2523]: I1112 20:57:32.486630 2523 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320"} err="failed to get container status \"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\": rpc error: code = NotFound desc = an error occurred when try to find container \"690f22b99f7edc63c8ee068f8166f09fcd72078eddd88762ec8357c5d8b20320\": not found" Nov 12 20:57:32.897954 systemd[1]: var-lib-kubelet-pods-1052b17c\x2db8b0\x2d4bc2\x2da2e4\x2d496ea70c4ec2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgsxtw.mount: Deactivated successfully. Nov 12 20:57:32.898124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e-rootfs.mount: Deactivated successfully. 
Nov 12 20:57:32.898221 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c42d0dfa18219b4b62fb3f5c4c932661b0cf02204bfa146b7aa047fd30a6991e-shm.mount: Deactivated successfully. Nov 12 20:57:32.898348 systemd[1]: var-lib-kubelet-pods-be7b042f\x2d12a1\x2d49f2\x2dbc59\x2d5317b3dc38ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh64w2.mount: Deactivated successfully. Nov 12 20:57:32.898455 systemd[1]: var-lib-kubelet-pods-be7b042f\x2d12a1\x2d49f2\x2dbc59\x2d5317b3dc38ab-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 12 20:57:32.898561 systemd[1]: var-lib-kubelet-pods-be7b042f\x2d12a1\x2d49f2\x2dbc59\x2d5317b3dc38ab-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 12 20:57:33.162734 kubelet[2523]: I1112 20:57:33.162622 2523 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2" path="/var/lib/kubelet/pods/1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2/volumes" Nov 12 20:57:33.163300 kubelet[2523]: I1112 20:57:33.163272 2523 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be7b042f-12a1-49f2-bc59-5317b3dc38ab" path="/var/lib/kubelet/pods/be7b042f-12a1-49f2-bc59-5317b3dc38ab/volumes" Nov 12 20:57:33.725240 sshd[4171]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:33.733683 systemd[1]: sshd@24-10.0.0.133:22-10.0.0.1:55564.service: Deactivated successfully. Nov 12 20:57:33.735828 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 20:57:33.737573 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit. Nov 12 20:57:33.742618 systemd[1]: Started sshd@25-10.0.0.133:22-10.0.0.1:55568.service - OpenSSH per-connection server daemon (10.0.0.1:55568). Nov 12 20:57:33.743778 systemd-logind[1444]: Removed session 25. 
Nov 12 20:57:33.783170 sshd[4332]: Accepted publickey for core from 10.0.0.1 port 55568 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:33.785265 sshd[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:33.790038 systemd-logind[1444]: New session 26 of user core. Nov 12 20:57:33.799627 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 12 20:57:34.528106 sshd[4332]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:34.538484 kubelet[2523]: E1112 20:57:34.538424 2523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be7b042f-12a1-49f2-bc59-5317b3dc38ab" containerName="cilium-agent" Nov 12 20:57:34.539488 kubelet[2523]: E1112 20:57:34.538513 2523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be7b042f-12a1-49f2-bc59-5317b3dc38ab" containerName="apply-sysctl-overwrites" Nov 12 20:57:34.539488 kubelet[2523]: E1112 20:57:34.538546 2523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be7b042f-12a1-49f2-bc59-5317b3dc38ab" containerName="mount-bpf-fs" Nov 12 20:57:34.539488 kubelet[2523]: E1112 20:57:34.538553 2523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be7b042f-12a1-49f2-bc59-5317b3dc38ab" containerName="clean-cilium-state" Nov 12 20:57:34.539488 kubelet[2523]: E1112 20:57:34.538558 2523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2" containerName="cilium-operator" Nov 12 20:57:34.539488 kubelet[2523]: E1112 20:57:34.538565 2523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be7b042f-12a1-49f2-bc59-5317b3dc38ab" containerName="mount-cgroup" Nov 12 20:57:34.539488 kubelet[2523]: I1112 20:57:34.538603 2523 memory_manager.go:354] "RemoveStaleState removing state" podUID="1052b17c-b8b0-4bc2-a2e4-496ea70c4ec2" containerName="cilium-operator" Nov 12 20:57:34.539488 kubelet[2523]: I1112 20:57:34.538610 2523 
memory_manager.go:354] "RemoveStaleState removing state" podUID="be7b042f-12a1-49f2-bc59-5317b3dc38ab" containerName="cilium-agent" Nov 12 20:57:34.542281 systemd[1]: sshd@25-10.0.0.133:22-10.0.0.1:55568.service: Deactivated successfully. Nov 12 20:57:34.547403 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 20:57:34.549715 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit. Nov 12 20:57:34.561707 systemd[1]: Started sshd@26-10.0.0.133:22-10.0.0.1:55576.service - OpenSSH per-connection server daemon (10.0.0.1:55576). Nov 12 20:57:34.562943 systemd-logind[1444]: Removed session 26. Nov 12 20:57:34.569691 systemd[1]: Created slice kubepods-burstable-podcf02ab13_a08e_4f16_8f14_de55d8b8ea6e.slice - libcontainer container kubepods-burstable-podcf02ab13_a08e_4f16_8f14_de55d8b8ea6e.slice. Nov 12 20:57:34.595130 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 55576 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:34.596697 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:34.600596 systemd-logind[1444]: New session 27 of user core. Nov 12 20:57:34.617461 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 12 20:57:34.669501 sshd[4345]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:34.679774 systemd[1]: sshd@26-10.0.0.133:22-10.0.0.1:55576.service: Deactivated successfully. Nov 12 20:57:34.681923 systemd[1]: session-27.scope: Deactivated successfully. Nov 12 20:57:34.683827 systemd-logind[1444]: Session 27 logged out. Waiting for processes to exit. Nov 12 20:57:34.696648 systemd[1]: Started sshd@27-10.0.0.133:22-10.0.0.1:55580.service - OpenSSH per-connection server daemon (10.0.0.1:55580). Nov 12 20:57:34.697762 systemd-logind[1444]: Removed session 27. 
Nov 12 20:57:34.702615 kubelet[2523]: I1112 20:57:34.702559 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-bpf-maps\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.702615 kubelet[2523]: I1112 20:57:34.702605 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-cni-path\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.702762 kubelet[2523]: I1112 20:57:34.702632 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-cilium-run\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.702762 kubelet[2523]: I1112 20:57:34.702651 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-hostproc\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.702762 kubelet[2523]: I1112 20:57:34.702668 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-cilium-cgroup\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.702762 kubelet[2523]: I1112 20:57:34.702684 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-lib-modules\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.702762 kubelet[2523]: I1112 20:57:34.702699 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-host-proc-sys-kernel\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.702762 kubelet[2523]: I1112 20:57:34.702717 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-clustermesh-secrets\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.702997 kubelet[2523]: I1112 20:57:34.702732 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-host-proc-sys-net\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.702997 kubelet[2523]: I1112 20:57:34.702746 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-etc-cni-netd\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.702997 kubelet[2523]: I1112 20:57:34.702762 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-xtables-lock\") 
pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.702997 kubelet[2523]: I1112 20:57:34.702779 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-hubble-tls\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.702997 kubelet[2523]: I1112 20:57:34.702795 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-cilium-config-path\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.702997 kubelet[2523]: I1112 20:57:34.702819 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-cilium-ipsec-secrets\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.703182 kubelet[2523]: I1112 20:57:34.702847 2523 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq9jx\" (UniqueName: \"kubernetes.io/projected/cf02ab13-a08e-4f16-8f14-de55d8b8ea6e-kube-api-access-tq9jx\") pod \"cilium-k8gnl\" (UID: \"cf02ab13-a08e-4f16-8f14-de55d8b8ea6e\") " pod="kube-system/cilium-k8gnl" Nov 12 20:57:34.726354 sshd[4354]: Accepted publickey for core from 10.0.0.1 port 55580 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:57:34.728014 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:34.733345 systemd-logind[1444]: New session 28 of user core. 
Nov 12 20:57:34.742589 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 12 20:57:34.872885 kubelet[2523]: E1112 20:57:34.872842 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:34.874092 containerd[1461]: time="2024-11-12T20:57:34.873515553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8gnl,Uid:cf02ab13-a08e-4f16-8f14-de55d8b8ea6e,Namespace:kube-system,Attempt:0,}" Nov 12 20:57:34.908315 containerd[1461]: time="2024-11-12T20:57:34.908060090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:34.908315 containerd[1461]: time="2024-11-12T20:57:34.908157255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:34.908315 containerd[1461]: time="2024-11-12T20:57:34.908173996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:34.908630 containerd[1461]: time="2024-11-12T20:57:34.908430983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:34.934774 systemd[1]: Started cri-containerd-1dc2fa0644d5eb75a45ed1e14dfd32d30a154adb64df07d728f583815ceb4baf.scope - libcontainer container 1dc2fa0644d5eb75a45ed1e14dfd32d30a154adb64df07d728f583815ceb4baf. 
Nov 12 20:57:34.964437 containerd[1461]: time="2024-11-12T20:57:34.964375044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8gnl,Uid:cf02ab13-a08e-4f16-8f14-de55d8b8ea6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dc2fa0644d5eb75a45ed1e14dfd32d30a154adb64df07d728f583815ceb4baf\""
Nov 12 20:57:34.965550 kubelet[2523]: E1112 20:57:34.965515 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:57:34.969023 containerd[1461]: time="2024-11-12T20:57:34.968951796Z" level=info msg="CreateContainer within sandbox \"1dc2fa0644d5eb75a45ed1e14dfd32d30a154adb64df07d728f583815ceb4baf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 12 20:57:35.225394 kubelet[2523]: E1112 20:57:35.225249 2523 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 12 20:57:35.481934 containerd[1461]: time="2024-11-12T20:57:35.481782931Z" level=info msg="CreateContainer within sandbox \"1dc2fa0644d5eb75a45ed1e14dfd32d30a154adb64df07d728f583815ceb4baf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1b01e620dc589f8ebeaaf1c07d3bee64aba93fa62d7411fb55e775b21e49af39\""
Nov 12 20:57:35.482721 containerd[1461]: time="2024-11-12T20:57:35.482501051Z" level=info msg="StartContainer for \"1b01e620dc589f8ebeaaf1c07d3bee64aba93fa62d7411fb55e775b21e49af39\""
Nov 12 20:57:35.509521 systemd[1]: Started cri-containerd-1b01e620dc589f8ebeaaf1c07d3bee64aba93fa62d7411fb55e775b21e49af39.scope - libcontainer container 1b01e620dc589f8ebeaaf1c07d3bee64aba93fa62d7411fb55e775b21e49af39.
Nov 12 20:57:35.547491 systemd[1]: cri-containerd-1b01e620dc589f8ebeaaf1c07d3bee64aba93fa62d7411fb55e775b21e49af39.scope: Deactivated successfully.
Nov 12 20:57:35.606330 containerd[1461]: time="2024-11-12T20:57:35.606244389Z" level=info msg="StartContainer for \"1b01e620dc589f8ebeaaf1c07d3bee64aba93fa62d7411fb55e775b21e49af39\" returns successfully"
Nov 12 20:57:35.723540 containerd[1461]: time="2024-11-12T20:57:35.723459032Z" level=info msg="shim disconnected" id=1b01e620dc589f8ebeaaf1c07d3bee64aba93fa62d7411fb55e775b21e49af39 namespace=k8s.io
Nov 12 20:57:35.723540 containerd[1461]: time="2024-11-12T20:57:35.723521571Z" level=warning msg="cleaning up after shim disconnected" id=1b01e620dc589f8ebeaaf1c07d3bee64aba93fa62d7411fb55e775b21e49af39 namespace=k8s.io
Nov 12 20:57:35.723540 containerd[1461]: time="2024-11-12T20:57:35.723529706Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:57:36.435717 kubelet[2523]: E1112 20:57:36.435640 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:57:36.439321 containerd[1461]: time="2024-11-12T20:57:36.439266634Z" level=info msg="CreateContainer within sandbox \"1dc2fa0644d5eb75a45ed1e14dfd32d30a154adb64df07d728f583815ceb4baf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 12 20:57:36.454213 containerd[1461]: time="2024-11-12T20:57:36.454160301Z" level=info msg="CreateContainer within sandbox \"1dc2fa0644d5eb75a45ed1e14dfd32d30a154adb64df07d728f583815ceb4baf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09c2402a72dab2c486896bc3c4b139d87cd45d5522895b7a41596c7e9b74e016\""
Nov 12 20:57:36.455392 containerd[1461]: time="2024-11-12T20:57:36.454754556Z" level=info msg="StartContainer for \"09c2402a72dab2c486896bc3c4b139d87cd45d5522895b7a41596c7e9b74e016\""
Nov 12 20:57:36.496550 systemd[1]: Started cri-containerd-09c2402a72dab2c486896bc3c4b139d87cd45d5522895b7a41596c7e9b74e016.scope - libcontainer container 09c2402a72dab2c486896bc3c4b139d87cd45d5522895b7a41596c7e9b74e016.
Nov 12 20:57:36.525181 containerd[1461]: time="2024-11-12T20:57:36.525128416Z" level=info msg="StartContainer for \"09c2402a72dab2c486896bc3c4b139d87cd45d5522895b7a41596c7e9b74e016\" returns successfully"
Nov 12 20:57:36.530519 systemd[1]: cri-containerd-09c2402a72dab2c486896bc3c4b139d87cd45d5522895b7a41596c7e9b74e016.scope: Deactivated successfully.
Nov 12 20:57:36.554721 containerd[1461]: time="2024-11-12T20:57:36.554646240Z" level=info msg="shim disconnected" id=09c2402a72dab2c486896bc3c4b139d87cd45d5522895b7a41596c7e9b74e016 namespace=k8s.io
Nov 12 20:57:36.554721 containerd[1461]: time="2024-11-12T20:57:36.554711523Z" level=warning msg="cleaning up after shim disconnected" id=09c2402a72dab2c486896bc3c4b139d87cd45d5522895b7a41596c7e9b74e016 namespace=k8s.io
Nov 12 20:57:36.554721 containerd[1461]: time="2024-11-12T20:57:36.554721171Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:57:36.810924 systemd[1]: run-containerd-runc-k8s.io-09c2402a72dab2c486896bc3c4b139d87cd45d5522895b7a41596c7e9b74e016-runc.QXIFYt.mount: Deactivated successfully.
Nov 12 20:57:36.811082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09c2402a72dab2c486896bc3c4b139d87cd45d5522895b7a41596c7e9b74e016-rootfs.mount: Deactivated successfully.
Nov 12 20:57:37.285091 kubelet[2523]: I1112 20:57:37.284903 2523 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-12T20:57:37Z","lastTransitionTime":"2024-11-12T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 12 20:57:37.438917 kubelet[2523]: E1112 20:57:37.438871 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:57:37.441649 containerd[1461]: time="2024-11-12T20:57:37.441583360Z" level=info msg="CreateContainer within sandbox \"1dc2fa0644d5eb75a45ed1e14dfd32d30a154adb64df07d728f583815ceb4baf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 12 20:57:37.505999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2641492472.mount: Deactivated successfully.
Nov 12 20:57:37.520814 containerd[1461]: time="2024-11-12T20:57:37.520760505Z" level=info msg="CreateContainer within sandbox \"1dc2fa0644d5eb75a45ed1e14dfd32d30a154adb64df07d728f583815ceb4baf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e012438fba239c08016030dad5a750ecaf14617f16a583ca75133290c7ac7039\""
Nov 12 20:57:37.521561 containerd[1461]: time="2024-11-12T20:57:37.521486510Z" level=info msg="StartContainer for \"e012438fba239c08016030dad5a750ecaf14617f16a583ca75133290c7ac7039\""
Nov 12 20:57:37.561497 systemd[1]: Started cri-containerd-e012438fba239c08016030dad5a750ecaf14617f16a583ca75133290c7ac7039.scope - libcontainer container e012438fba239c08016030dad5a750ecaf14617f16a583ca75133290c7ac7039.
Nov 12 20:57:37.595048 containerd[1461]: time="2024-11-12T20:57:37.594885423Z" level=info msg="StartContainer for \"e012438fba239c08016030dad5a750ecaf14617f16a583ca75133290c7ac7039\" returns successfully"
Nov 12 20:57:37.597183 systemd[1]: cri-containerd-e012438fba239c08016030dad5a750ecaf14617f16a583ca75133290c7ac7039.scope: Deactivated successfully.
Nov 12 20:57:37.627882 containerd[1461]: time="2024-11-12T20:57:37.627788015Z" level=info msg="shim disconnected" id=e012438fba239c08016030dad5a750ecaf14617f16a583ca75133290c7ac7039 namespace=k8s.io
Nov 12 20:57:37.627882 containerd[1461]: time="2024-11-12T20:57:37.627855513Z" level=warning msg="cleaning up after shim disconnected" id=e012438fba239c08016030dad5a750ecaf14617f16a583ca75133290c7ac7039 namespace=k8s.io
Nov 12 20:57:37.627882 containerd[1461]: time="2024-11-12T20:57:37.627864790Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:57:37.811284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e012438fba239c08016030dad5a750ecaf14617f16a583ca75133290c7ac7039-rootfs.mount: Deactivated successfully.
Nov 12 20:57:38.161200 kubelet[2523]: E1112 20:57:38.161137 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:57:38.443954 kubelet[2523]: E1112 20:57:38.443804 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:57:38.447847 containerd[1461]: time="2024-11-12T20:57:38.447794088Z" level=info msg="CreateContainer within sandbox \"1dc2fa0644d5eb75a45ed1e14dfd32d30a154adb64df07d728f583815ceb4baf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 12 20:57:38.470871 containerd[1461]: time="2024-11-12T20:57:38.470798901Z" level=info msg="CreateContainer within sandbox \"1dc2fa0644d5eb75a45ed1e14dfd32d30a154adb64df07d728f583815ceb4baf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8ccd12248e461c90b4314be34a28726f92aed9f3d9ce701ec88691844744cd1f\""
Nov 12 20:57:38.471633 containerd[1461]: time="2024-11-12T20:57:38.471590569Z" level=info msg="StartContainer for \"8ccd12248e461c90b4314be34a28726f92aed9f3d9ce701ec88691844744cd1f\""
Nov 12 20:57:38.507575 systemd[1]: Started cri-containerd-8ccd12248e461c90b4314be34a28726f92aed9f3d9ce701ec88691844744cd1f.scope - libcontainer container 8ccd12248e461c90b4314be34a28726f92aed9f3d9ce701ec88691844744cd1f.
Nov 12 20:57:38.535212 systemd[1]: cri-containerd-8ccd12248e461c90b4314be34a28726f92aed9f3d9ce701ec88691844744cd1f.scope: Deactivated successfully.
Nov 12 20:57:38.537599 containerd[1461]: time="2024-11-12T20:57:38.537523845Z" level=info msg="StartContainer for \"8ccd12248e461c90b4314be34a28726f92aed9f3d9ce701ec88691844744cd1f\" returns successfully"
Nov 12 20:57:38.563357 containerd[1461]: time="2024-11-12T20:57:38.563246122Z" level=info msg="shim disconnected" id=8ccd12248e461c90b4314be34a28726f92aed9f3d9ce701ec88691844744cd1f namespace=k8s.io
Nov 12 20:57:38.563605 containerd[1461]: time="2024-11-12T20:57:38.563331444Z" level=warning msg="cleaning up after shim disconnected" id=8ccd12248e461c90b4314be34a28726f92aed9f3d9ce701ec88691844744cd1f namespace=k8s.io
Nov 12 20:57:38.563605 containerd[1461]: time="2024-11-12T20:57:38.563429429Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:57:38.810368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ccd12248e461c90b4314be34a28726f92aed9f3d9ce701ec88691844744cd1f-rootfs.mount: Deactivated successfully.
Nov 12 20:57:39.448458 kubelet[2523]: E1112 20:57:39.448423 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:57:39.450291 containerd[1461]: time="2024-11-12T20:57:39.450257070Z" level=info msg="CreateContainer within sandbox \"1dc2fa0644d5eb75a45ed1e14dfd32d30a154adb64df07d728f583815ceb4baf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 12 20:57:39.673376 containerd[1461]: time="2024-11-12T20:57:39.673291167Z" level=info msg="CreateContainer within sandbox \"1dc2fa0644d5eb75a45ed1e14dfd32d30a154adb64df07d728f583815ceb4baf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8338aaee24cd1fad2ff228c2239c2fab9bf0c3ce3844f3a1aebe1a6baef7a99e\""
Nov 12 20:57:39.674071 containerd[1461]: time="2024-11-12T20:57:39.674024585Z" level=info msg="StartContainer for \"8338aaee24cd1fad2ff228c2239c2fab9bf0c3ce3844f3a1aebe1a6baef7a99e\""
Nov 12 20:57:39.705585 systemd[1]: Started cri-containerd-8338aaee24cd1fad2ff228c2239c2fab9bf0c3ce3844f3a1aebe1a6baef7a99e.scope - libcontainer container 8338aaee24cd1fad2ff228c2239c2fab9bf0c3ce3844f3a1aebe1a6baef7a99e.
Nov 12 20:57:39.916825 containerd[1461]: time="2024-11-12T20:57:39.916752141Z" level=info msg="StartContainer for \"8338aaee24cd1fad2ff228c2239c2fab9bf0c3ce3844f3a1aebe1a6baef7a99e\" returns successfully"
Nov 12 20:57:40.310380 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 12 20:57:40.453514 kubelet[2523]: E1112 20:57:40.453477 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:57:40.646025 kubelet[2523]: I1112 20:57:40.645403 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k8gnl" podStartSLOduration=6.645384814 podStartE2EDuration="6.645384814s" podCreationTimestamp="2024-11-12 20:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:57:40.644828431 +0000 UTC m=+95.562578272" watchObservedRunningTime="2024-11-12 20:57:40.645384814 +0000 UTC m=+95.563134685"
Nov 12 20:57:41.455798 kubelet[2523]: E1112 20:57:41.455754 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:57:42.457786 kubelet[2523]: E1112 20:57:42.457748 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:57:43.635740 systemd-networkd[1385]: lxc_health: Link UP
Nov 12 20:57:43.649235 systemd-networkd[1385]: lxc_health: Gained carrier
Nov 12 20:57:44.876103 kubelet[2523]: E1112 20:57:44.875834 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:57:45.149591 systemd-networkd[1385]: lxc_health: Gained IPv6LL
Nov 12 20:57:45.462615 kubelet[2523]: E1112 20:57:45.462499 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:57:49.913787 sshd[4354]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:49.918121 systemd[1]: sshd@27-10.0.0.133:22-10.0.0.1:55580.service: Deactivated successfully.
Nov 12 20:57:49.920190 systemd[1]: session-28.scope: Deactivated successfully.
Nov 12 20:57:49.920911 systemd-logind[1444]: Session 28 logged out. Waiting for processes to exit.
Nov 12 20:57:49.921726 systemd-logind[1444]: Removed session 28.