Nov 12 20:56:23.327915 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:56:23.327936 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:56:23.327947 kernel: BIOS-provided physical RAM map:
Nov 12 20:56:23.327953 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 12 20:56:23.327959 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 12 20:56:23.327966 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 12 20:56:23.327973 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 12 20:56:23.328005 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 12 20:56:23.328011 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 12 20:56:23.328020 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 12 20:56:23.328027 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 12 20:56:23.328033 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 12 20:56:23.328039 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 12 20:56:23.328045 kernel: NX (Execute Disable) protection: active
Nov 12 20:56:23.328053 kernel: APIC: Static calls initialized
Nov 12 20:56:23.328062 kernel: SMBIOS 2.8 present.
Nov 12 20:56:23.328069 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 12 20:56:23.328076 kernel: Hypervisor detected: KVM
Nov 12 20:56:23.328083 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 20:56:23.328090 kernel: kvm-clock: using sched offset of 2233078677 cycles
Nov 12 20:56:23.328097 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 20:56:23.328104 kernel: tsc: Detected 2794.744 MHz processor
Nov 12 20:56:23.328111 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:56:23.328118 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:56:23.328125 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 12 20:56:23.328135 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 12 20:56:23.328142 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:56:23.328149 kernel: Using GB pages for direct mapping
Nov 12 20:56:23.328156 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:56:23.328163 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 12 20:56:23.328170 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:56:23.328177 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:56:23.328184 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:56:23.328193 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 12 20:56:23.328200 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:56:23.328207 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:56:23.328214 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:56:23.328221 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:56:23.328227 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Nov 12 20:56:23.328235 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Nov 12 20:56:23.328245 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 12 20:56:23.328262 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Nov 12 20:56:23.328269 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Nov 12 20:56:23.328278 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Nov 12 20:56:23.328285 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Nov 12 20:56:23.328292 kernel: No NUMA configuration found
Nov 12 20:56:23.328299 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 12 20:56:23.328306 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Nov 12 20:56:23.328316 kernel: Zone ranges:
Nov 12 20:56:23.328323 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:56:23.328330 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 12 20:56:23.328338 kernel: Normal empty
Nov 12 20:56:23.328345 kernel: Movable zone start for each node
Nov 12 20:56:23.328352 kernel: Early memory node ranges
Nov 12 20:56:23.328361 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 12 20:56:23.328371 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 12 20:56:23.328381 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 12 20:56:23.328394 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:56:23.328401 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 12 20:56:23.328408 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 12 20:56:23.328416 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 20:56:23.328423 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 20:56:23.328430 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:56:23.328437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 20:56:23.328445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 20:56:23.328452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:56:23.328462 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 20:56:23.328469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 20:56:23.328476 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:56:23.328483 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 20:56:23.328490 kernel: TSC deadline timer available
Nov 12 20:56:23.328497 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 12 20:56:23.328505 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 20:56:23.328512 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 12 20:56:23.328519 kernel: kvm-guest: setup PV sched yield
Nov 12 20:56:23.328526 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 12 20:56:23.328536 kernel: Booting paravirtualized kernel on KVM
Nov 12 20:56:23.328543 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:56:23.328550 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 12 20:56:23.328558 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Nov 12 20:56:23.328565 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Nov 12 20:56:23.328572 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 12 20:56:23.328579 kernel: kvm-guest: PV spinlocks enabled
Nov 12 20:56:23.328586 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 20:56:23.328594 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:56:23.328604 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:56:23.328612 kernel: random: crng init done
Nov 12 20:56:23.328619 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 20:56:23.328626 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 20:56:23.328633 kernel: Fallback order for Node 0: 0
Nov 12 20:56:23.328640 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Nov 12 20:56:23.328647 kernel: Policy zone: DMA32
Nov 12 20:56:23.328655 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:56:23.328664 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 136900K reserved, 0K cma-reserved)
Nov 12 20:56:23.328672 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 20:56:23.328679 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:56:23.328686 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:56:23.328693 kernel: Dynamic Preempt: voluntary
Nov 12 20:56:23.328701 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:56:23.328708 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:56:23.328716 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 20:56:23.328723 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:56:23.328733 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:56:23.328741 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:56:23.328748 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:56:23.328755 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 20:56:23.328763 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 12 20:56:23.328770 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:56:23.328777 kernel: Console: colour VGA+ 80x25
Nov 12 20:56:23.328784 kernel: printk: console [ttyS0] enabled
Nov 12 20:56:23.328791 kernel: ACPI: Core revision 20230628
Nov 12 20:56:23.328801 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 20:56:23.328808 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:56:23.328815 kernel: x2apic enabled
Nov 12 20:56:23.328822 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:56:23.328830 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 12 20:56:23.328837 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 12 20:56:23.328845 kernel: kvm-guest: setup PV IPIs
Nov 12 20:56:23.328861 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 20:56:23.328869 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 12 20:56:23.328877 kernel: Calibrating delay loop (skipped) preset value.. 5589.48 BogoMIPS (lpj=2794744)
Nov 12 20:56:23.328884 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 12 20:56:23.328892 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 12 20:56:23.328901 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 12 20:56:23.328909 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:56:23.328916 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:56:23.328924 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:56:23.328934 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:56:23.328942 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 12 20:56:23.328949 kernel: RETBleed: Mitigation: untrained return thunk
Nov 12 20:56:23.328957 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 20:56:23.328965 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 20:56:23.328972 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 12 20:56:23.329070 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 12 20:56:23.329078 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 12 20:56:23.329085 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:56:23.329096 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:56:23.329104 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:56:23.329111 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:56:23.329119 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 12 20:56:23.329126 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:56:23.329134 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:56:23.329142 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:56:23.329149 kernel: landlock: Up and running.
Nov 12 20:56:23.329157 kernel: SELinux: Initializing.
Nov 12 20:56:23.329167 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:56:23.329174 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:56:23.329182 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 12 20:56:23.329190 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:56:23.329197 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:56:23.329205 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:56:23.329213 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 12 20:56:23.329220 kernel: ... version: 0
Nov 12 20:56:23.329230 kernel: ... bit width: 48
Nov 12 20:56:23.329238 kernel: ... generic registers: 6
Nov 12 20:56:23.329245 kernel: ... value mask: 0000ffffffffffff
Nov 12 20:56:23.329259 kernel: ... max period: 00007fffffffffff
Nov 12 20:56:23.329267 kernel: ... fixed-purpose events: 0
Nov 12 20:56:23.329275 kernel: ... event mask: 000000000000003f
Nov 12 20:56:23.329282 kernel: signal: max sigframe size: 1776
Nov 12 20:56:23.329289 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:56:23.329297 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:56:23.329305 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:56:23.329315 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:56:23.329322 kernel: .... node #0, CPUs: #1 #2 #3
Nov 12 20:56:23.329330 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 20:56:23.329337 kernel: smpboot: Max logical packages: 1
Nov 12 20:56:23.329345 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS)
Nov 12 20:56:23.329352 kernel: devtmpfs: initialized
Nov 12 20:56:23.329360 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:56:23.329368 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:56:23.329375 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 20:56:23.329385 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:56:23.329393 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:56:23.329400 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:56:23.329408 kernel: audit: type=2000 audit(1731444981.946:1): state=initialized audit_enabled=0 res=1
Nov 12 20:56:23.329415 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:56:23.329423 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:56:23.329430 kernel: cpuidle: using governor menu
Nov 12 20:56:23.329438 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:56:23.329445 kernel: dca service started, version 1.12.1
Nov 12 20:56:23.329455 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 12 20:56:23.329463 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 12 20:56:23.329470 kernel: PCI: Using configuration type 1 for base access
Nov 12 20:56:23.329478 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:56:23.329486 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:56:23.329493 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:56:23.329501 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:56:23.329508 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:56:23.329516 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:56:23.329526 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:56:23.329533 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:56:23.329541 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:56:23.329548 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:56:23.329556 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:56:23.329563 kernel: ACPI: Interpreter enabled
Nov 12 20:56:23.329571 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 12 20:56:23.329578 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:56:23.329586 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:56:23.329596 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 20:56:23.329603 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 12 20:56:23.329611 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 20:56:23.329789 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 20:56:23.329916 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 12 20:56:23.330053 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 12 20:56:23.330064 kernel: PCI host bridge to bus 0000:00
Nov 12 20:56:23.330191 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 20:56:23.330316 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:56:23.330428 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:56:23.330537 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 12 20:56:23.330645 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 12 20:56:23.330759 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 12 20:56:23.330868 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:56:23.331021 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 12 20:56:23.331157 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 12 20:56:23.331303 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 12 20:56:23.331424 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 12 20:56:23.331562 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 12 20:56:23.331752 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 20:56:23.331891 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 20:56:23.332078 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 12 20:56:23.332200 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 12 20:56:23.332332 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 12 20:56:23.332459 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 12 20:56:23.332577 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Nov 12 20:56:23.332698 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 12 20:56:23.332825 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 12 20:56:23.332953 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:56:23.333090 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Nov 12 20:56:23.333210 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 12 20:56:23.333428 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 12 20:56:23.333585 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 12 20:56:23.333783 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 12 20:56:23.333913 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 12 20:56:23.334058 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 12 20:56:23.334178 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Nov 12 20:56:23.334308 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Nov 12 20:56:23.334435 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 12 20:56:23.334554 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 12 20:56:23.334568 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:56:23.334576 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:56:23.334584 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:56:23.334591 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:56:23.334599 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 12 20:56:23.334607 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 12 20:56:23.334624 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 12 20:56:23.334632 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 12 20:56:23.334640 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 12 20:56:23.334650 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 12 20:56:23.334662 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 12 20:56:23.334677 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 12 20:56:23.334687 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 12 20:56:23.334698 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 12 20:56:23.334710 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 12 20:56:23.334722 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 12 20:56:23.334733 kernel: iommu: Default domain type: Translated
Nov 12 20:56:23.334743 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:56:23.334759 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:56:23.334770 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:56:23.334780 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 12 20:56:23.334791 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 12 20:56:23.334945 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 12 20:56:23.335118 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 12 20:56:23.335261 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 20:56:23.335273 kernel: vgaarb: loaded
Nov 12 20:56:23.335281 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 20:56:23.335293 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 20:56:23.335301 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:56:23.335308 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:56:23.335316 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:56:23.335324 kernel: pnp: PnP ACPI init
Nov 12 20:56:23.335460 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 12 20:56:23.335476 kernel: pnp: PnP ACPI: found 6 devices
Nov 12 20:56:23.335487 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:56:23.335499 kernel: NET: Registered PF_INET protocol family
Nov 12 20:56:23.335507 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:56:23.335515 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 20:56:23.335523 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:56:23.335530 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 20:56:23.335538 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 20:56:23.335545 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 20:56:23.335553 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:56:23.335560 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:56:23.335570 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:56:23.335577 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:56:23.335706 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:56:23.335824 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:56:23.335936 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:56:23.336078 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 12 20:56:23.336235 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 12 20:56:23.336408 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 12 20:56:23.336424 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:56:23.336432 kernel: Initialise system trusted keyrings
Nov 12 20:56:23.336439 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 20:56:23.336447 kernel: Key type asymmetric registered
Nov 12 20:56:23.336454 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:56:23.336462 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:56:23.336470 kernel: io scheduler mq-deadline registered
Nov 12 20:56:23.336477 kernel: io scheduler kyber registered
Nov 12 20:56:23.336484 kernel: io scheduler bfq registered
Nov 12 20:56:23.336494 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:56:23.336502 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 12 20:56:23.336510 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 12 20:56:23.336518 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 12 20:56:23.336525 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:56:23.336533 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:56:23.336541 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:56:23.336548 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:56:23.336556 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:56:23.336687 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 12 20:56:23.336807 kernel: rtc_cmos 00:04: registered as rtc0
Nov 12 20:56:23.336920 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T20:56:22 UTC (1731444982)
Nov 12 20:56:23.337116 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 12 20:56:23.337127 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 12 20:56:23.337135 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 12 20:56:23.337143 kernel: hpet: Lost 22 RTC interrupts
Nov 12 20:56:23.337150 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:56:23.337163 kernel: Segment Routing with IPv6
Nov 12 20:56:23.337170 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:56:23.337178 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:56:23.337185 kernel: Key type dns_resolver registered
Nov 12 20:56:23.337193 kernel: IPI shorthand broadcast: enabled
Nov 12 20:56:23.337200 kernel: sched_clock: Marking stable (1304002714, 327373948)->(1747575324, -116198662)
Nov 12 20:56:23.337208 kernel: registered taskstats version 1
Nov 12 20:56:23.337216 kernel: Loading compiled-in X.509 certificates
Nov 12 20:56:23.337223 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:56:23.337234 kernel: Key type .fscrypt registered
Nov 12 20:56:23.337241 kernel: Key type fscrypt-provisioning registered
Nov 12 20:56:23.337249 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:56:23.337269 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:56:23.337277 kernel: ima: No architecture policies found
Nov 12 20:56:23.337284 kernel: clk: Disabling unused clocks
Nov 12 20:56:23.337292 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:56:23.337299 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:56:23.337307 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:56:23.337317 kernel: Run /init as init process
Nov 12 20:56:23.337325 kernel: with arguments:
Nov 12 20:56:23.337332 kernel: /init
Nov 12 20:56:23.337340 kernel: with environment:
Nov 12 20:56:23.337347 kernel: HOME=/
Nov 12 20:56:23.337354 kernel: TERM=linux
Nov 12 20:56:23.337362 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:56:23.337372 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:56:23.337401 systemd[1]: Detected virtualization kvm.
Nov 12 20:56:23.337418 systemd[1]: Detected architecture x86-64.
Nov 12 20:56:23.337426 systemd[1]: Running in initrd.
Nov 12 20:56:23.337434 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:56:23.337442 systemd[1]: Hostname set to .
Nov 12 20:56:23.337451 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:56:23.337459 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:56:23.337467 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:56:23.337478 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:56:23.337504 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:56:23.337519 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:56:23.337529 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:56:23.337538 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:56:23.337551 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:56:23.337559 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:56:23.337568 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:56:23.337576 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:56:23.337585 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:56:23.337593 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:56:23.337601 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:56:23.337610 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:56:23.337621 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:56:23.337629 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:56:23.337637 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:56:23.337646 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:56:23.337654 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:56:23.337663 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:56:23.337671 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:56:23.337679 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:56:23.337697 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:56:23.337713 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:56:23.337721 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:56:23.337736 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:56:23.337747 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:56:23.337756 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:56:23.337764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:56:23.337775 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:56:23.337785 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:56:23.337796 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:56:23.337842 systemd-journald[192]: Collecting audit messages is disabled.
Nov 12 20:56:23.337868 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:56:23.337879 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:56:23.337888 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:56:23.337899 systemd-journald[192]: Journal started
Nov 12 20:56:23.337918 systemd-journald[192]: Runtime Journal (/run/log/journal/8f0017c691a443548db450b7746640cc) is 6.0M, max 48.4M, 42.3M free.
Nov 12 20:56:23.334659 systemd-modules-load[193]: Inserted module 'overlay'
Nov 12 20:56:23.375812 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:56:23.375844 kernel: Bridge firewalling registered
Nov 12 20:56:23.375856 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:56:23.366641 systemd-modules-load[193]: Inserted module 'br_netfilter'
Nov 12 20:56:23.377898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:56:23.388703 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:56:23.413226 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:56:23.414515 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:56:23.416216 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:56:23.418549 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:56:23.432080 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:56:23.435167 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:56:23.438427 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:56:23.452340 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:56:23.456641 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:56:23.469000 dracut-cmdline[226]: dracut-dracut-053
Nov 12 20:56:23.472776 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:56:23.505919 systemd-resolved[229]: Positive Trust Anchors:
Nov 12 20:56:23.505944 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:56:23.506001 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:56:23.508912 systemd-resolved[229]: Defaulting to hostname 'linux'.
Nov 12 20:56:23.510037 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:56:23.520646 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:56:23.589040 kernel: SCSI subsystem initialized
Nov 12 20:56:23.599042 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:56:23.611026 kernel: iscsi: registered transport (tcp)
Nov 12 20:56:23.639036 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:56:23.639110 kernel: QLogic iSCSI HBA Driver
Nov 12 20:56:23.692679 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:56:23.701159 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:56:23.727038 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:56:23.727132 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:56:23.728786 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:56:23.773064 kernel: raid6: avx2x4 gen() 26957 MB/s
Nov 12 20:56:23.790040 kernel: raid6: avx2x2 gen() 29144 MB/s
Nov 12 20:56:23.807232 kernel: raid6: avx2x1 gen() 23651 MB/s
Nov 12 20:56:23.807315 kernel: raid6: using algorithm avx2x2 gen() 29144 MB/s
Nov 12 20:56:23.825148 kernel: raid6: .... xor() 17938 MB/s, rmw enabled
Nov 12 20:56:23.825229 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:56:23.851064 kernel: xor: automatically using best checksumming function avx
Nov 12 20:56:24.043036 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:56:24.059036 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:56:24.070361 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:56:24.086684 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Nov 12 20:56:24.091770 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:56:24.118185 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:56:24.132157 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Nov 12 20:56:24.166828 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:56:24.178159 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:56:24.251559 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:56:24.263708 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:56:24.277181 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:56:24.284126 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:56:24.288767 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:56:24.292243 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:56:24.297057 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:56:24.303134 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:56:24.311298 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 12 20:56:24.327115 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 20:56:24.327332 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:56:24.327359 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:56:24.327373 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:56:24.327386 kernel: GPT:9289727 != 19775487
Nov 12 20:56:24.327397 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:56:24.327407 kernel: GPT:9289727 != 19775487
Nov 12 20:56:24.327416 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:56:24.327426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:56:24.334147 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:56:24.341999 kernel: libata version 3.00 loaded.
Nov 12 20:56:24.344603 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:56:24.345763 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:56:24.351736 kernel: ahci 0000:00:1f.2: version 3.0
Nov 12 20:56:24.381521 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 12 20:56:24.381540 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 12 20:56:24.381708 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 12 20:56:24.381858 kernel: scsi host0: ahci
Nov 12 20:56:24.382046 kernel: scsi host1: ahci
Nov 12 20:56:24.382206 kernel: scsi host2: ahci
Nov 12 20:56:24.382380 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (461)
Nov 12 20:56:24.382394 kernel: scsi host3: ahci
Nov 12 20:56:24.382550 kernel: scsi host4: ahci
Nov 12 20:56:24.382712 kernel: scsi host5: ahci
Nov 12 20:56:24.382866 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Nov 12 20:56:24.382878 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Nov 12 20:56:24.382890 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Nov 12 20:56:24.382906 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Nov 12 20:56:24.382918 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Nov 12 20:56:24.382930 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Nov 12 20:56:24.353225 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:56:24.400215 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (468)
Nov 12 20:56:24.357385 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:56:24.357630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:56:24.359831 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:56:24.393428 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:56:24.419586 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 20:56:24.453306 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:56:24.468530 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 20:56:24.474707 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 20:56:24.477964 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 20:56:24.485284 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:56:24.500248 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:56:24.503961 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:56:24.512191 disk-uuid[556]: Primary Header is updated.
Nov 12 20:56:24.512191 disk-uuid[556]: Secondary Entries is updated.
Nov 12 20:56:24.512191 disk-uuid[556]: Secondary Header is updated.
Nov 12 20:56:24.515000 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:56:24.521015 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:56:24.534624 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:56:24.704446 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 12 20:56:24.704542 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 12 20:56:24.704558 kernel: ata3.00: applying bridge limits
Nov 12 20:56:24.704572 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 12 20:56:24.706072 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 12 20:56:24.706198 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 12 20:56:24.707001 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 12 20:56:24.708016 kernel: ata3.00: configured for UDMA/100
Nov 12 20:56:24.709012 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 12 20:56:24.714029 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 12 20:56:24.761443 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 12 20:56:24.778883 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 20:56:24.779413 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 12 20:56:25.522007 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:56:25.522262 disk-uuid[559]: The operation has completed successfully.
Nov 12 20:56:25.552946 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:56:25.553093 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:56:25.582439 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:56:25.588792 sh[592]: Success
Nov 12 20:56:25.603113 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 12 20:56:25.643479 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:56:25.661140 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:56:25.694773 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:56:25.701971 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:56:25.702011 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:56:25.702024 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:56:25.702038 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:56:25.702051 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:56:25.706106 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:56:25.706705 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:56:25.719174 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:56:25.721118 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:56:25.730000 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:56:25.730030 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:56:25.730041 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:56:25.734005 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:56:25.742572 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:56:25.744331 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:56:25.825291 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:56:25.842221 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:56:25.868583 systemd-networkd[770]: lo: Link UP
Nov 12 20:56:25.868594 systemd-networkd[770]: lo: Gained carrier
Nov 12 20:56:25.870214 systemd-networkd[770]: Enumeration completed
Nov 12 20:56:25.870598 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:56:25.870602 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:56:25.878960 systemd-networkd[770]: eth0: Link UP
Nov 12 20:56:25.878964 systemd-networkd[770]: eth0: Gained carrier
Nov 12 20:56:25.878975 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:56:25.879130 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:56:25.884449 systemd[1]: Reached target network.target - Network.
Nov 12 20:56:25.895056 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 20:56:25.900599 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:56:25.913218 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:56:26.050784 ignition[775]: Ignition 2.19.0
Nov 12 20:56:26.050798 ignition[775]: Stage: fetch-offline
Nov 12 20:56:26.050897 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:56:26.050913 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:56:26.051078 ignition[775]: parsed url from cmdline: ""
Nov 12 20:56:26.051085 ignition[775]: no config URL provided
Nov 12 20:56:26.051094 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:56:26.051109 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:56:26.051149 ignition[775]: op(1): [started] loading QEMU firmware config module
Nov 12 20:56:26.051158 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 20:56:26.064408 ignition[775]: op(1): [finished] loading QEMU firmware config module
Nov 12 20:56:26.111757 ignition[775]: parsing config with SHA512: bcbd1d94e9e4d3acfec8b1ec18516c155be1e36d2bbaa886ba5250dc42ac9e0f1387bc0d6f858beef31a34835a8211562148f03738cfa1e520ff148ca9110966
Nov 12 20:56:26.119904 systemd-resolved[229]: Detected conflict on linux IN A 10.0.0.153
Nov 12 20:56:26.119927 systemd-resolved[229]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Nov 12 20:56:26.141325 unknown[775]: fetched base config from "system"
Nov 12 20:56:26.141348 unknown[775]: fetched user config from "qemu"
Nov 12 20:56:26.142091 ignition[775]: fetch-offline: fetch-offline passed
Nov 12 20:56:26.144933 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:56:26.142243 ignition[775]: Ignition finished successfully
Nov 12 20:56:26.146794 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 20:56:26.153362 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:56:26.173080 ignition[785]: Ignition 2.19.0
Nov 12 20:56:26.173094 ignition[785]: Stage: kargs
Nov 12 20:56:26.173320 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:56:26.173334 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:56:26.174460 ignition[785]: kargs: kargs passed
Nov 12 20:56:26.174518 ignition[785]: Ignition finished successfully
Nov 12 20:56:26.182540 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:56:26.191622 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:56:26.206766 ignition[794]: Ignition 2.19.0
Nov 12 20:56:26.206777 ignition[794]: Stage: disks
Nov 12 20:56:26.206936 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:56:26.206947 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:56:26.207892 ignition[794]: disks: disks passed
Nov 12 20:56:26.211749 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:56:26.207947 ignition[794]: Ignition finished successfully
Nov 12 20:56:26.214087 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:56:26.214597 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:56:26.215260 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:56:26.220413 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:56:26.222690 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:56:26.237362 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:56:26.255923 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 20:56:26.263900 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:56:26.273222 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:56:26.397020 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:56:26.397687 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:56:26.399655 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:56:26.410130 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:56:26.412149 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:56:26.413717 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:56:26.419353 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (813)
Nov 12 20:56:26.419377 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:56:26.413784 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:56:26.427383 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:56:26.427423 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:56:26.427437 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:56:26.413806 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:56:26.422575 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:56:26.444320 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:56:26.447412 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:56:26.548386 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:56:26.553656 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:56:26.558111 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:56:26.562347 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:56:26.689229 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:56:26.702326 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:56:26.704517 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:56:26.711513 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:56:26.713022 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:56:26.787369 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:56:26.793330 ignition[927]: INFO : Ignition 2.19.0
Nov 12 20:56:26.793330 ignition[927]: INFO : Stage: mount
Nov 12 20:56:26.795259 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:56:26.795259 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:56:26.795259 ignition[927]: INFO : mount: mount passed
Nov 12 20:56:26.795259 ignition[927]: INFO : Ignition finished successfully
Nov 12 20:56:26.801507 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:56:26.809092 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:56:26.816908 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:56:26.833014 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942)
Nov 12 20:56:26.833073 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:56:26.834556 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:56:26.834582 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:56:26.838007 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:56:26.839865 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:56:26.860385 ignition[959]: INFO : Ignition 2.19.0
Nov 12 20:56:26.860385 ignition[959]: INFO : Stage: files
Nov 12 20:56:26.862474 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:56:26.862474 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:56:26.866083 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:56:26.868079 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:56:26.868079 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:56:26.874442 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:56:26.876160 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:56:26.878155 unknown[959]: wrote ssh authorized keys file for user: core
Nov 12 20:56:26.879578 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:56:26.881601 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 20:56:26.881601 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 20:56:26.881601 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:56:26.881601 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:56:26.922064 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 12 20:56:27.024231 systemd-networkd[770]: eth0: Gained IPv6LL
Nov 12 20:56:27.038472 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:56:27.038472 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 12 20:56:27.055437 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 12 20:56:27.412201 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Nov 12 20:56:27.486499 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:56:27.516086 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Nov 12 20:56:27.818878 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Nov 12 20:56:28.162532 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:56:28.162532 ignition[959]: INFO : files: op(d): [started] processing unit "containerd.service"
Nov 12 20:56:28.190701 ignition[959]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 20:56:28.193745 ignition[959]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 20:56:28.193745 ignition[959]: INFO : files: op(d): [finished] processing unit "containerd.service"
Nov 12 20:56:28.193745 ignition[959]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Nov 12 20:56:28.199411 ignition[959]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:56:28.199411 ignition[959]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:56:28.199411 ignition[959]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Nov 12 20:56:28.199411 ignition[959]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Nov 12 20:56:28.199411 ignition[959]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 20:56:28.209073 ignition[959]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 20:56:28.209073 ignition[959]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Nov 12 20:56:28.209073 ignition[959]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Nov 12 20:56:28.236669 ignition[959]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 20:56:28.243550 ignition[959]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 20:56:28.245541 ignition[959]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 12 20:56:28.245541 ignition[959]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:56:28.249003 ignition[959]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:56:28.250827 ignition[959]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:56:28.253098 ignition[959]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:56:28.255166 ignition[959]: INFO : files: files passed
Nov 12 20:56:28.256070 ignition[959]: INFO : Ignition finished successfully
Nov 12 20:56:28.259579 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:56:28.270253 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:56:28.272147 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:56:28.279045 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:56:28.279201 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:56:28.285962 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 12 20:56:28.291426 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:56:28.293857 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:56:28.295656 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:56:28.297607 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:56:28.301116 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:56:28.318413 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:56:28.350638 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:56:28.362659 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:56:28.366044 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:56:28.368500 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:56:28.371150 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:56:28.389300 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:56:28.405208 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:56:28.419249 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:56:28.430423 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
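The files-stage ops above (op(3) through op(16)) are driven by the user config that Ignition fetched from the qemu provider; the log records the effects but not the config itself. A hypothetical Butane sketch that would yield similar ops follows. The URLs, paths, and unit names are taken from the log; the spec version and the prepare-helm.service unit body are assumptions, not recovered from the log:

    cat > example.bu <<'EOF'
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
    systemd:
      units:
        - name: coreos-metadata.service
          enabled: false            # matches op(13): preset disabled, enablement symlinks removed
        - name: prepare-helm.service
          enabled: true             # matches op(15); this unit body is illustrative only
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz linux-amd64/helm
            [Install]
            WantedBy=multi-user.target
    EOF
    butane --pretty --strict example.bu > example.ign   # transpile Butane to Ignition JSON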
Nov 12 20:56:28.447356 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:56:28.447729 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:56:28.448334 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:56:28.448514 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:56:28.455403 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:56:28.456294 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:56:28.456640 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:56:28.457062 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:56:28.457626 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:56:28.458194 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:56:28.458580 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:56:28.458966 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:56:28.459602 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:56:28.460005 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:56:28.460507 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:56:28.460663 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:56:28.479779 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:56:28.480774 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:56:28.481289 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:56:28.485427 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:56:28.486364 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:56:28.486500 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:56:28.492384 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 20:56:28.492592 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:56:28.493443 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 20:56:28.497882 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 20:56:28.501052 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:56:28.501611 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 20:56:28.504425 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 20:56:28.506416 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 20:56:28.506525 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:56:28.508310 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 20:56:28.508398 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:56:28.509948 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 20:56:28.510135 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:56:28.511897 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 20:56:28.512018 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 20:56:28.532279 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 20:56:28.535760 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 20:56:28.536274 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 20:56:28.536392 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:56:28.538664 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 20:56:28.538807 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:56:28.547335 ignition[1015]: INFO : Ignition 2.19.0
Nov 12 20:56:28.547335 ignition[1015]: INFO : Stage: umount
Nov 12 20:56:28.549609 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:56:28.549609 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:56:28.549609 ignition[1015]: INFO : umount: umount passed
Nov 12 20:56:28.549609 ignition[1015]: INFO : Ignition finished successfully
Nov 12 20:56:28.551250 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 20:56:28.551397 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 20:56:28.555342 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 20:56:28.555475 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 20:56:28.557264 systemd[1]: Stopped target network.target - Network.
Nov 12 20:56:28.558922 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 20:56:28.559027 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 20:56:28.559396 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 20:56:28.559443 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 20:56:28.562475 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 20:56:28.562520 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 20:56:28.564864 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 20:56:28.564925 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 20:56:28.569645 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 20:56:28.572313 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 20:56:28.574045 systemd-networkd[770]: eth0: DHCPv6 lease lost
Nov 12 20:56:28.576461 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 20:56:28.576634 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 20:56:28.579378 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 20:56:28.579457 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:56:28.586119 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 20:56:28.587411 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 20:56:28.587489 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:56:28.590385 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:56:28.594519 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 20:56:28.595092 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 20:56:28.595247 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 20:56:28.641758 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 20:56:28.642109 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:56:28.646416 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 20:56:28.646561 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 20:56:28.649350 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 20:56:28.649452 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:56:28.651055 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 20:56:28.651097 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:56:28.653493 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 20:56:28.653560 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:56:28.656399 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 20:56:28.656463 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:56:28.658398 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:56:28.658458 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:56:28.671347 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 20:56:28.673659 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:56:28.673749 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:56:28.675749 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 20:56:28.675816 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:56:28.677991 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 20:56:28.678044 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:56:28.714744 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 12 20:56:28.714800 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:56:28.715262 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 20:56:28.715309 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:56:28.715587 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 20:56:28.715628 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:56:28.715942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:56:28.715994 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:56:28.716727 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 20:56:28.716838 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 20:56:29.082676 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 20:56:29.082871 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 20:56:29.085651 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 20:56:29.086826 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 20:56:29.086913 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 20:56:29.103290 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 20:56:29.111616 systemd[1]: Switching root.
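Everything from "Stopped target nss-lookup.target" down to here is the initrd dismantling itself in reverse dependency order; "Switching root." is PID 1 performing the switch-root into /sysroot. The same operation can be expressed with systemctl, shown here purely as a sketch (during boot PID 1 does this internally, not via the CLI):

    systemctl switch-root /sysroot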
Nov 12 20:56:29.144954 systemd-journald[192]: Journal stopped
Nov 12 20:56:30.688503 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Nov 12 20:56:30.688606 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 20:56:30.688634 kernel: SELinux: policy capability open_perms=1
Nov 12 20:56:30.688651 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 20:56:30.688666 kernel: SELinux: policy capability always_check_network=0
Nov 12 20:56:30.688682 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 20:56:30.688704 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 20:56:30.688720 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 20:56:30.688736 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 20:56:30.688757 kernel: audit: type=1403 audit(1731444989.814:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 20:56:30.688774 systemd[1]: Successfully loaded SELinux policy in 43.383ms.
Nov 12 20:56:30.688806 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.534ms.
Nov 12 20:56:30.688824 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:56:30.688841 systemd[1]: Detected virtualization kvm.
Nov 12 20:56:30.688859 systemd[1]: Detected architecture x86-64.
Nov 12 20:56:30.688876 systemd[1]: Detected first boot.
Nov 12 20:56:30.688892 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:56:30.688907 zram_generator::config[1076]: No configuration found.
Nov 12 20:56:30.688928 systemd[1]: Populated /etc with preset unit settings.
Nov 12 20:56:30.688945 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 20:56:30.688961 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 12 20:56:30.689001 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 20:56:30.689023 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 20:56:30.689040 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 20:56:30.689059 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 20:56:30.689084 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 20:56:30.689106 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 20:56:30.689123 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 20:56:30.689141 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 20:56:30.689158 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:56:30.689175 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:56:30.689192 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 20:56:30.689208 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 20:56:30.689226 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 20:56:30.689247 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:56:30.689263 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 20:56:30.689279 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:56:30.689297 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 20:56:30.689314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:56:30.689331 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:56:30.689348 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:56:30.689364 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:56:30.689381 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 20:56:30.689401 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 20:56:30.689418 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:56:30.689439 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:56:30.689456 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:56:30.689472 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:56:30.689488 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:56:30.689504 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 20:56:30.689520 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 20:56:30.689536 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 20:56:30.689555 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 20:56:30.689572 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:56:30.689592 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 20:56:30.689613 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 20:56:30.689642 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 20:56:30.689664 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 20:56:30.689684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:56:30.689701 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:56:30.689721 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 20:56:30.689737 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:56:30.689754 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:56:30.689769 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:56:30.689785 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 20:56:30.689800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:56:30.689819 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 20:56:30.689836 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 12 20:56:30.689856 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Nov 12 20:56:30.689872 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:56:30.689888 kernel: loop: module loaded
Nov 12 20:56:30.689904 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:56:30.689944 systemd-journald[1154]: Collecting audit messages is disabled.
Nov 12 20:56:30.689974 kernel: fuse: init (API version 7.39)
Nov 12 20:56:30.690005 systemd-journald[1154]: Journal started
Nov 12 20:56:30.690039 systemd-journald[1154]: Runtime Journal (/run/log/journal/8f0017c691a443548db450b7746640cc) is 6.0M, max 48.4M, 42.3M free.
Nov 12 20:56:30.722023 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 20:56:30.728520 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 20:56:30.735999 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:56:30.736036 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:56:30.739000 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:56:30.740711 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 20:56:30.742241 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 20:56:30.743599 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 20:56:30.744704 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 20:56:30.745931 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 20:56:30.747179 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 20:56:30.748783 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:56:30.750093 kernel: ACPI: bus type drm_connector registered
Nov 12 20:56:30.751147 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 20:56:30.751373 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 20:56:30.752902 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:56:30.753134 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:56:30.754936 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:56:30.755191 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:56:30.756720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:56:30.757005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:56:30.758579 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 20:56:30.758818 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 20:56:30.760367 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:56:30.760595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:56:30.762186 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:56:30.764274 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 20:56:30.766690 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 20:56:30.781049 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 20:56:30.790143 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 20:56:30.831687 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 20:56:30.832878 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 20:56:30.900212 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 20:56:30.903182 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 20:56:30.904622 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:56:30.906254 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 20:56:30.907543 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:56:30.909415 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:56:30.915702 systemd-journald[1154]: Time spent on flushing to /var/log/journal/8f0017c691a443548db450b7746640cc is 19.325ms for 946 entries.
Nov 12 20:56:30.915702 systemd-journald[1154]: System Journal (/var/log/journal/8f0017c691a443548db450b7746640cc) is 8.0M, max 195.6M, 187.6M free.
Nov 12 20:56:31.582290 systemd-journald[1154]: Received client request to flush runtime journal.
Nov 12 20:56:30.919331 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:56:30.924302 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:56:30.926160 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 20:56:30.928832 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 20:56:30.942220 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 20:56:30.979693 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:56:31.028785 udevadm[1212]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 12 20:56:31.096592 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Nov 12 20:56:31.096610 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Nov 12 20:56:31.103126 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:56:31.188713 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 20:56:31.243454 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 20:56:31.534565 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 20:56:31.559439 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 20:56:31.584467 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 20:56:31.629033 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 20:56:31.639166 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:56:31.657559 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
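journald starts out writing to the volatile runtime journal under /run/log/journal and hands those entries over to the persistent system journal under /var/log/journal once the root filesystem is writable; the "Time spent on flushing ... 946 entries" line above is that handoff. A sketch of triggering and checking the same flush by hand:

    journalctl --flush        # equivalent to the 'flush runtime journal' client request above
    journalctl --disk-usage   # persistent journal size after the flush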
Nov 12 20:56:31.657582 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Nov 12 20:56:31.663771 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:56:32.423339 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 20:56:32.442307 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:56:32.526796 systemd-udevd[1241]: Using default interface naming scheme 'v255'.
Nov 12 20:56:32.543272 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:56:32.559212 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:56:32.573134 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 20:56:32.576119 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Nov 12 20:56:32.586011 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1258)
Nov 12 20:56:32.597008 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1252)
Nov 12 20:56:32.601150 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1258)
Nov 12 20:56:32.633527 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 20:56:32.688004 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 12 20:56:32.703069 kernel: ACPI: button: Power Button [PWRF]
Nov 12 20:56:32.704626 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:56:32.724120 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 12 20:56:32.730145 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Nov 12 20:56:32.730196 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 12 20:56:32.730476 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 12 20:56:32.754123 kernel: mousedev: PS/2 mouse device common for all mice
Nov 12 20:56:32.765386 systemd-networkd[1246]: lo: Link UP
Nov 12 20:56:32.765401 systemd-networkd[1246]: lo: Gained carrier
Nov 12 20:56:32.767214 systemd-networkd[1246]: Enumeration completed
Nov 12 20:56:32.767708 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:56:32.767713 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:56:32.768555 systemd-networkd[1246]: eth0: Link UP
Nov 12 20:56:32.768559 systemd-networkd[1246]: eth0: Gained carrier
Nov 12 20:56:32.768571 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:56:32.771249 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:56:32.810655 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:56:32.815554 systemd-networkd[1246]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 20:56:32.830059 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
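eth0 was matched against the catch-all /usr/lib/systemd/network/zz-default.network and picked up 10.0.0.153/16 via DHCPv4 from 10.0.0.1. A sketch for inspecting the same state interactively, with the interface name taken from the log:

    networkctl list           # link states and matched .network files for all interfaces
    networkctl status eth0    # carrier state, addresses, and DHCP lease details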
Nov 12 20:56:32.897569 kernel: kvm_amd: TSC scaling supported
Nov 12 20:56:32.897681 kernel: kvm_amd: Nested Virtualization enabled
Nov 12 20:56:32.897699 kernel: kvm_amd: Nested Paging enabled
Nov 12 20:56:32.897716 kernel: kvm_amd: LBR virtualization supported
Nov 12 20:56:32.898189 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 12 20:56:32.899445 kernel: kvm_amd: Virtual GIF supported
Nov 12 20:56:32.922024 kernel: EDAC MC: Ver: 3.0.0
Nov 12 20:56:32.951867 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 20:56:32.971084 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:56:32.994279 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 20:56:33.004668 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:56:33.043440 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 20:56:33.045006 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:56:33.059124 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 20:56:33.064685 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:56:33.098614 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 20:56:33.100222 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:56:33.101564 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 20:56:33.101594 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:56:33.102671 systemd[1]: Reached target machines.target - Containers.
Nov 12 20:56:33.104814 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 20:56:33.116249 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 20:56:33.119508 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 20:56:33.120777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:56:33.122035 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 20:56:33.125059 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 20:56:33.130660 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 20:56:33.133165 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 20:56:33.205888 kernel: loop0: detected capacity change from 0 to 142488
Nov 12 20:56:33.209026 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 20:56:33.227023 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 20:56:33.370028 kernel: loop1: detected capacity change from 0 to 140768
Nov 12 20:56:33.439054 kernel: loop2: detected capacity change from 0 to 211296
Nov 12 20:56:33.612062 kernel: loop3: detected capacity change from 0 to 142488
Nov 12 20:56:33.624035 kernel: loop4: detected capacity change from 0 to 140768
Nov 12 20:56:33.636021 kernel: loop5: detected capacity change from 0 to 211296
Nov 12 20:56:33.642176 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 12 20:56:33.642778 (sd-merge)[1308]: Merged extensions into '/usr'.
Nov 12 20:56:33.647603 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 20:56:33.647617 systemd[1]: Reloading...
Nov 12 20:56:33.739137 zram_generator::config[1337]: No configuration found.
Nov 12 20:56:33.780076 ldconfig[1294]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 20:56:33.881391 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:56:33.970582 systemd[1]: Reloading finished in 322 ms.
Nov 12 20:56:33.996178 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 20:56:33.997857 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 20:56:34.003606 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 20:56:34.005381 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 20:56:34.019338 systemd[1]: Starting ensure-sysext.service...
Nov 12 20:56:34.021907 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:56:34.026452 systemd[1]: Reloading requested from client PID 1383 ('systemctl') (unit ensure-sysext.service)...
Nov 12 20:56:34.026468 systemd[1]: Reloading...
Nov 12 20:56:34.056546 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 20:56:34.056920 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 20:56:34.057960 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 20:56:34.058333 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Nov 12 20:56:34.058414 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Nov 12 20:56:34.062897 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:56:34.062915 systemd-tmpfiles[1384]: Skipping /boot
Nov 12 20:56:34.079033 zram_generator::config[1412]: No configuration found.
Nov 12 20:56:34.081724 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:56:34.081742 systemd-tmpfiles[1384]: Skipping /boot
Nov 12 20:56:34.220782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:56:34.292100 systemd[1]: Reloading finished in 265 ms.
Nov 12 20:56:34.314413 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
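The loop0–loop5 capacity changes and the (sd-merge) lines are systemd-sysext loop-mounting the squashfs extension images found under /etc/extensions (including the kubernetes.raw symlink written by Ignition earlier) and overlaying them onto /usr; the daemon reload that follows is what makes units shipped inside the extensions, such as docker.socket, visible. A sketch of the equivalent manual workflow on a running system:

    systemd-sysext status     # which hierarchies have extensions merged, and which images
    systemd-sysext refresh    # unmerge and re-merge after changing /etc/extensions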
Nov 12 20:56:34.331602 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:56:34.334632 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 20:56:34.337429 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 20:56:34.342175 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:56:34.346556 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 20:56:34.354492 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:56:34.354752 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:56:34.358674 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:56:34.374512 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:56:34.381375 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:56:34.382815 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:56:34.382958 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:56:34.386898 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 20:56:34.389742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:56:34.390102 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:56:34.392684 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:56:34.393118 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:56:34.395377 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:56:34.395694 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:56:34.401089 augenrules[1486]: No rules
Nov 12 20:56:34.404018 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:56:34.411417 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 20:56:34.418737 systemd[1]: Finished ensure-sysext.service.
Nov 12 20:56:34.421568 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:56:34.422132 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:56:34.435368 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:56:34.438411 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:56:34.443160 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:56:34.448159 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:56:34.449421 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:56:34.452939 systemd-resolved[1461]: Positive Trust Anchors:
Nov 12 20:56:34.452954 systemd-resolved[1461]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:56:34.453168 systemd-resolved[1461]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:56:34.454143 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 12 20:56:34.457182 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 20:56:34.457779 systemd-resolved[1461]: Defaulting to hostname 'linux'.
Nov 12 20:56:34.460111 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:56:34.460801 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:56:34.462639 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 20:56:34.464600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:56:34.464830 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:56:34.466366 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:56:34.466584 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:56:34.468029 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:56:34.468280 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:56:34.469837 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:56:34.470123 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:56:34.487909 systemd[1]: Reached target network.target - Network.
Nov 12 20:56:34.489111 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:56:34.490424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:56:34.490517 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:56:34.490554 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 20:56:34.517486 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 20:56:34.569972 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 12 20:56:34.571702 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:56:35.269298 systemd-resolved[1461]: Clock change detected. Flushing caches.
Nov 12 20:56:35.269310 systemd-timesyncd[1508]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 12 20:56:35.269359 systemd-timesyncd[1508]: Initial clock synchronization to Tue 2024-11-12 20:56:35.269214 UTC.
Nov 12 20:56:35.270155 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
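systemd-resolved seeds DNSSEC validation with the root-zone KSK (the '. IN DS 20326 8 2 ...' record above), and the timesyncd handshake explains the "Clock change detected. Flushing caches." message: the clock stepped forward on first sync, so cached DNS entries were discarded. A sketch for checking both daemons later:

    resolvectl status            # current DNS servers, DNSSEC setting, link state
    timedatectl timesync-status  # NTP server, poll interval, measured offset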
Nov 12 20:56:35.271477 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 20:56:35.272770 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 20:56:35.274074 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 20:56:35.274117 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:56:35.275101 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 20:56:35.276375 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 20:56:35.277715 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 20:56:35.278991 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:56:35.280852 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 20:56:35.284464 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 20:56:35.287180 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 20:56:35.292644 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 20:56:35.293851 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:56:35.294912 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:56:35.296180 systemd[1]: System is tainted: cgroupsv1
Nov 12 20:56:35.296231 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:56:35.296260 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:56:35.297930 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 20:56:35.300452 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 20:56:35.302664 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 20:56:35.306846 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 20:56:35.308246 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 20:56:35.310082 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 20:56:35.313311 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 20:56:35.317309 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 20:56:35.322383 jq[1525]: false
Nov 12 20:56:35.324497 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 20:56:35.338360 systemd-networkd[1246]: eth0: Gained IPv6LL
Nov 12 20:56:35.339833 extend-filesystems[1526]: Found loop3
Nov 12 20:56:35.339833 extend-filesystems[1526]: Found loop4
Nov 12 20:56:35.339833 extend-filesystems[1526]: Found loop5
Nov 12 20:56:35.339833 extend-filesystems[1526]: Found sr0
Nov 12 20:56:35.339833 extend-filesystems[1526]: Found vda
Nov 12 20:56:35.339833 extend-filesystems[1526]: Found vda1
Nov 12 20:56:35.339833 extend-filesystems[1526]: Found vda2
Nov 12 20:56:35.339833 extend-filesystems[1526]: Found vda3
Nov 12 20:56:35.339833 extend-filesystems[1526]: Found usr
Nov 12 20:56:35.339833 extend-filesystems[1526]: Found vda4
Nov 12 20:56:35.339833 extend-filesystems[1526]: Found vda6
Nov 12 20:56:35.339833 extend-filesystems[1526]: Found vda7
Nov 12 20:56:35.339833 extend-filesystems[1526]: Found vda9
Nov 12 20:56:35.339833 extend-filesystems[1526]: Checking size of /dev/vda9
Nov 12 20:56:35.339377 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 20:56:35.345887 dbus-daemon[1524]: [system] SELinux support is enabled
Nov 12 20:56:35.344364 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 20:56:35.352382 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 20:56:35.355314 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 20:56:35.374259 jq[1549]: true
Nov 12 20:56:35.358505 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 20:56:35.363488 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 20:56:35.367719 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 20:56:35.368166 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 20:56:35.368629 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 20:56:35.369028 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 20:56:35.388699 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 20:56:35.389191 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 20:56:35.393955 update_engine[1546]: I20241112 20:56:35.393101 1546 main.cc:92] Flatcar Update Engine starting
Nov 12 20:56:35.428617 extend-filesystems[1526]: Resized partition /dev/vda9
Nov 12 20:56:35.435550 extend-filesystems[1558]: resize2fs 1.47.1 (20-May-2024)
Nov 12 20:56:35.437341 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1247)
Nov 12 20:56:35.439251 update_engine[1546]: I20241112 20:56:35.438103 1546 update_check_scheduler.cc:74] Next update check in 9m9s
Nov 12 20:56:35.450228 jq[1556]: true
Nov 12 20:56:35.463716 tar[1555]: linux-amd64/helm
Nov 12 20:56:35.466526 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 20:56:35.488458 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 12 20:56:35.472005 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 20:56:35.503807 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 20:56:35.515317 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 12 20:56:35.564410 systemd-logind[1540]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 20:56:35.609609 sshd_keygen[1550]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 20:56:35.564449 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 20:56:35.566257 systemd-logind[1540]: New seat seat0. Nov 12 20:56:35.609859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:35.613349 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 20:56:35.614644 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 20:56:35.614677 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 20:56:35.616269 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 20:56:35.616292 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 20:56:35.618656 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 20:56:35.622047 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 20:56:35.624491 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 20:56:35.626418 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 20:56:35.654226 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 20:56:35.685162 extend-filesystems[1558]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 20:56:35.685162 extend-filesystems[1558]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 20:56:35.685162 extend-filesystems[1558]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 20:56:35.685135 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 20:56:35.705503 bash[1586]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:56:35.705622 extend-filesystems[1526]: Resized filesystem in /dev/vda9 Nov 12 20:56:35.685530 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 20:56:35.709062 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 20:56:35.760471 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 20:56:35.769220 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 20:56:35.772991 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 20:56:35.773428 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 20:56:35.775522 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 20:56:35.775887 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 20:56:35.786567 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 20:56:35.831837 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 20:56:35.834524 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
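The resize2fs output above is an online grow of the root filesystem: ext4 can be enlarged while mounted, so extend-filesystems.service never has to take / offline. The manual equivalent, assuming the underlying partition /dev/vda9 has already been grown (a sketch):

    df -h /              # size before the grow
    resize2fs /dev/vda9
    # With no explicit size argument, resize2fs expands the filesystem
    # to fill the device; shrinking, by contrast, requires it unmounted.
    df -h /              # size after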
Nov 12 20:56:35.850092 locksmithd[1599]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 20:56:35.861726 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 20:56:35.891623 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 20:56:35.896899 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 20:56:35.899798 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 20:56:36.156658 containerd[1559]: time="2024-11-12T20:56:36.156483152Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 20:56:36.193881 containerd[1559]: time="2024-11-12T20:56:36.193684621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:56:36.196348 containerd[1559]: time="2024-11-12T20:56:36.196277648Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:56:36.196348 containerd[1559]: time="2024-11-12T20:56:36.196334885Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 20:56:36.196348 containerd[1559]: time="2024-11-12T20:56:36.196359381Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 20:56:36.198194 containerd[1559]: time="2024-11-12T20:56:36.196587810Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 20:56:36.198194 containerd[1559]: time="2024-11-12T20:56:36.196614149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 20:56:36.198194 containerd[1559]: time="2024-11-12T20:56:36.196695041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:56:36.198194 containerd[1559]: time="2024-11-12T20:56:36.196711111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:56:36.198194 containerd[1559]: time="2024-11-12T20:56:36.197047382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:56:36.198194 containerd[1559]: time="2024-11-12T20:56:36.197065146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 20:56:36.198194 containerd[1559]: time="2024-11-12T20:56:36.197091455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:56:36.198194 containerd[1559]: time="2024-11-12T20:56:36.197106063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 20:56:36.198194 containerd[1559]: time="2024-11-12T20:56:36.197252367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Nov 12 20:56:36.198194 containerd[1559]: time="2024-11-12T20:56:36.197536520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:56:36.198194 containerd[1559]: time="2024-11-12T20:56:36.197768977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:56:36.198550 containerd[1559]: time="2024-11-12T20:56:36.197793132Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 20:56:36.198550 containerd[1559]: time="2024-11-12T20:56:36.197933345Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 20:56:36.198550 containerd[1559]: time="2024-11-12T20:56:36.197995281Z" level=info msg="metadata content store policy set" policy=shared Nov 12 20:56:36.263253 tar[1555]: linux-amd64/LICENSE Nov 12 20:56:36.263390 tar[1555]: linux-amd64/README.md Nov 12 20:56:36.278488 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 20:56:36.419855 containerd[1559]: time="2024-11-12T20:56:36.419690947Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 20:56:36.419855 containerd[1559]: time="2024-11-12T20:56:36.419826100Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 20:56:36.419855 containerd[1559]: time="2024-11-12T20:56:36.419855676Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 20:56:36.420053 containerd[1559]: time="2024-11-12T20:56:36.419879901Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 20:56:36.420053 containerd[1559]: time="2024-11-12T20:56:36.419903095Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 20:56:36.420227 containerd[1559]: time="2024-11-12T20:56:36.420197227Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 20:56:36.420800 containerd[1559]: time="2024-11-12T20:56:36.420723955Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 20:56:36.421070 containerd[1559]: time="2024-11-12T20:56:36.421026193Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 20:56:36.421070 containerd[1559]: time="2024-11-12T20:56:36.421069334Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 20:56:36.421172 containerd[1559]: time="2024-11-12T20:56:36.421087027Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 20:56:36.421172 containerd[1559]: time="2024-11-12T20:56:36.421103748Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 20:56:36.421172 containerd[1559]: time="2024-11-12T20:56:36.421167909Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Nov 12 20:56:36.421254 containerd[1559]: time="2024-11-12T20:56:36.421184159Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 20:56:36.421254 containerd[1559]: time="2024-11-12T20:56:36.421204928Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 20:56:36.421254 containerd[1559]: time="2024-11-12T20:56:36.421227661Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 20:56:36.421254 containerd[1559]: time="2024-11-12T20:56:36.421248309Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 20:56:36.421368 containerd[1559]: time="2024-11-12T20:56:36.421266554Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 20:56:36.421368 containerd[1559]: time="2024-11-12T20:56:36.421284097Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 20:56:36.421368 containerd[1559]: time="2024-11-12T20:56:36.421321657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421368 containerd[1559]: time="2024-11-12T20:56:36.421341394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421368 containerd[1559]: time="2024-11-12T20:56:36.421356573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421487 containerd[1559]: time="2024-11-12T20:56:36.421384786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421487 containerd[1559]: time="2024-11-12T20:56:36.421405685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421487 containerd[1559]: time="2024-11-12T20:56:36.421430692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421487 containerd[1559]: time="2024-11-12T20:56:36.421448395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421487 containerd[1559]: time="2024-11-12T20:56:36.421470016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421487 containerd[1559]: time="2024-11-12T20:56:36.421487528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421595 containerd[1559]: time="2024-11-12T20:56:36.421509209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421595 containerd[1559]: time="2024-11-12T20:56:36.421524478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421595 containerd[1559]: time="2024-11-12T20:56:36.421542081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421595 containerd[1559]: time="2024-11-12T20:56:36.421568540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Nov 12 20:56:36.421595 containerd[1559]: time="2024-11-12T20:56:36.421588778Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:56:36.421685 containerd[1559]: time="2024-11-12T20:56:36.421622341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421685 containerd[1559]: time="2024-11-12T20:56:36.421638983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421685 containerd[1559]: time="2024-11-12T20:56:36.421654331Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:56:36.421737 containerd[1559]: time="2024-11-12T20:56:36.421719664Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:56:36.421763 containerd[1559]: time="2024-11-12T20:56:36.421745132Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:56:36.421786 containerd[1559]: time="2024-11-12T20:56:36.421760481Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 20:56:36.421786 containerd[1559]: time="2024-11-12T20:56:36.421778194Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:56:36.421823 containerd[1559]: time="2024-11-12T20:56:36.421794765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:56:36.421843 containerd[1559]: time="2024-11-12T20:56:36.421828448Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:56:36.421864 containerd[1559]: time="2024-11-12T20:56:36.421851211Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:56:36.421958 containerd[1559]: time="2024-11-12T20:56:36.421867161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 12 20:56:36.422385 containerd[1559]: time="2024-11-12T20:56:36.422305644Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:56:36.422385 containerd[1559]: time="2024-11-12T20:56:36.422372389Z" level=info msg="Connect containerd service" Nov 12 20:56:36.422564 containerd[1559]: time="2024-11-12T20:56:36.422433123Z" level=info msg="using legacy CRI server" Nov 12 20:56:36.422564 containerd[1559]: time="2024-11-12T20:56:36.422441108Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:56:36.422564 containerd[1559]: time="2024-11-12T20:56:36.422545935Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:56:36.423235 containerd[1559]: time="2024-11-12T20:56:36.423197528Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 
20:56:36.423417 containerd[1559]: time="2024-11-12T20:56:36.423359321Z" level=info msg="Start subscribing containerd event" Nov 12 20:56:36.423465 containerd[1559]: time="2024-11-12T20:56:36.423445934Z" level=info msg="Start recovering state" Nov 12 20:56:36.423609 containerd[1559]: time="2024-11-12T20:56:36.423595545Z" level=info msg="Start event monitor" Nov 12 20:56:36.423658 containerd[1559]: time="2024-11-12T20:56:36.423610503Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:56:36.423658 containerd[1559]: time="2024-11-12T20:56:36.423624699Z" level=info msg="Start snapshots syncer" Nov 12 20:56:36.423658 containerd[1559]: time="2024-11-12T20:56:36.423644867Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:56:36.423658 containerd[1559]: time="2024-11-12T20:56:36.423653534Z" level=info msg="Start streaming server" Nov 12 20:56:36.423746 containerd[1559]: time="2024-11-12T20:56:36.423660797Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:56:36.423746 containerd[1559]: time="2024-11-12T20:56:36.423722353Z" level=info msg="containerd successfully booted in 0.271259s" Nov 12 20:56:36.423870 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:56:37.072732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:37.074554 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:56:37.076910 systemd[1]: Startup finished in 8.183s (kernel) + 6.605s (userspace) = 14.789s. Nov 12 20:56:37.099882 (kubelet)[1662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:56:37.976460 kubelet[1662]: E1112 20:56:37.976350 1662 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:56:37.981490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:56:37.981901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:56:43.679949 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:56:43.695681 systemd[1]: Started sshd@0-10.0.0.153:22-10.0.0.1:60028.service - OpenSSH per-connection server daemon (10.0.0.1:60028). Nov 12 20:56:43.737475 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 60028 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:56:43.740267 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:43.751291 systemd-logind[1540]: New session 1 of user core. Nov 12 20:56:43.752640 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:56:43.764484 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:56:43.779962 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:56:43.787454 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:56:43.792752 (systemd)[1682]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:56:43.911200 systemd[1682]: Queued start job for default target default.target. 
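The long "Start cri plugin with config {...}" dump logged at containerd startup above is the effective CRI configuration serialized as a Go struct; the same knobs live in containerd's TOML configuration. A minimal excerpt covering the fields visible in the dump (a sketch of /etc/containerd/config.toml syntax for containerd 1.7, not this host's actual file):

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false   # consistent with the cgroupsv1 taint above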
Nov 12 20:56:43.911659 systemd[1682]: Created slice app.slice - User Application Slice. Nov 12 20:56:43.911678 systemd[1682]: Reached target paths.target - Paths. Nov 12 20:56:43.911692 systemd[1682]: Reached target timers.target - Timers. Nov 12 20:56:43.929377 systemd[1682]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:56:43.938374 systemd[1682]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:56:43.938467 systemd[1682]: Reached target sockets.target - Sockets. Nov 12 20:56:43.938486 systemd[1682]: Reached target basic.target - Basic System. Nov 12 20:56:43.938534 systemd[1682]: Reached target default.target - Main User Target. Nov 12 20:56:43.938575 systemd[1682]: Startup finished in 138ms. Nov 12 20:56:43.939417 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:56:43.941282 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:56:43.999540 systemd[1]: Started sshd@1-10.0.0.153:22-10.0.0.1:60030.service - OpenSSH per-connection server daemon (10.0.0.1:60030). Nov 12 20:56:44.029851 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 60030 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:56:44.031650 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:44.036817 systemd-logind[1540]: New session 2 of user core. Nov 12 20:56:44.050619 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:56:44.107890 sshd[1694]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:44.121401 systemd[1]: Started sshd@2-10.0.0.153:22-10.0.0.1:60042.service - OpenSSH per-connection server daemon (10.0.0.1:60042). Nov 12 20:56:44.121859 systemd[1]: sshd@1-10.0.0.153:22-10.0.0.1:60030.service: Deactivated successfully. Nov 12 20:56:44.124242 systemd-logind[1540]: Session 2 logged out. Waiting for processes to exit. Nov 12 20:56:44.124925 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 20:56:44.126317 systemd-logind[1540]: Removed session 2. Nov 12 20:56:44.150547 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 60042 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:56:44.152608 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:44.158254 systemd-logind[1540]: New session 3 of user core. Nov 12 20:56:44.171652 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:56:44.225131 sshd[1699]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:44.240534 systemd[1]: Started sshd@3-10.0.0.153:22-10.0.0.1:60050.service - OpenSSH per-connection server daemon (10.0.0.1:60050). Nov 12 20:56:44.241182 systemd[1]: sshd@2-10.0.0.153:22-10.0.0.1:60042.service: Deactivated successfully. Nov 12 20:56:44.244266 systemd-logind[1540]: Session 3 logged out. Waiting for processes to exit. Nov 12 20:56:44.245397 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 20:56:44.246542 systemd-logind[1540]: Removed session 3. Nov 12 20:56:44.270674 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 60050 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:56:44.272364 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:44.277161 systemd-logind[1540]: New session 4 of user core. Nov 12 20:56:44.295635 systemd[1]: Started session-4.scope - Session 4 of User core. 
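Each accepted SSH connection above becomes its own session-N.scope, while a single user@500.service manager (started once, for session 1) is shared by all of core's sessions; that is why only the session scopes come and go while the user manager stays up. The same state can be inspected through logind (a sketch):

    loginctl list-sessions      # one row per session-N.scope
    loginctl user-status core   # shows user@500.service plus its session scopes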
Nov 12 20:56:44.354260 sshd[1707]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:44.363596 systemd[1]: Started sshd@4-10.0.0.153:22-10.0.0.1:60056.service - OpenSSH per-connection server daemon (10.0.0.1:60056). Nov 12 20:56:44.364187 systemd[1]: sshd@3-10.0.0.153:22-10.0.0.1:60050.service: Deactivated successfully. Nov 12 20:56:44.366645 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:56:44.367396 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:56:44.368777 systemd-logind[1540]: Removed session 4. Nov 12 20:56:44.393056 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 60056 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:56:44.394900 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:44.399770 systemd-logind[1540]: New session 5 of user core. Nov 12 20:56:44.419701 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:56:44.482113 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:56:44.482584 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:56:44.502383 sudo[1722]: pam_unix(sudo:session): session closed for user root Nov 12 20:56:44.504679 sshd[1715]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:44.519711 systemd[1]: Started sshd@5-10.0.0.153:22-10.0.0.1:60060.service - OpenSSH per-connection server daemon (10.0.0.1:60060). Nov 12 20:56:44.520437 systemd[1]: sshd@4-10.0.0.153:22-10.0.0.1:60056.service: Deactivated successfully. Nov 12 20:56:44.522425 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:56:44.523349 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:56:44.524763 systemd-logind[1540]: Removed session 5. Nov 12 20:56:44.550916 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 60060 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:56:44.553169 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:44.557769 systemd-logind[1540]: New session 6 of user core. Nov 12 20:56:44.571627 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:56:44.627507 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:56:44.627872 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:56:44.632406 sudo[1732]: pam_unix(sudo:session): session closed for user root Nov 12 20:56:44.639202 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:56:44.639528 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:56:44.665519 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:56:44.667620 auditctl[1735]: No rules Nov 12 20:56:44.668966 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:56:44.669363 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:56:44.671596 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:56:44.702439 augenrules[1754]: No rules Nov 12 20:56:44.704640 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
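The audit-rules sequence above is how the kernel audit ruleset gets rebuilt: augenrules concatenates /etc/audit/rules.d/*.rules and loads the result through auditctl, so removing the rule files and restarting audit-rules.service leaves an empty set, hence the two "No rules" replies. The manual equivalents (a sketch):

    auditctl -l         # list loaded rules; prints "No rules" when empty
    auditctl -D         # delete every loaded rule
    augenrules --load   # regenerate from /etc/audit/rules.d and load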
Nov 12 20:56:44.706055 sudo[1731]: pam_unix(sudo:session): session closed for user root Nov 12 20:56:44.708024 sshd[1724]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:44.717440 systemd[1]: Started sshd@6-10.0.0.153:22-10.0.0.1:60068.service - OpenSSH per-connection server daemon (10.0.0.1:60068). Nov 12 20:56:44.718073 systemd[1]: sshd@5-10.0.0.153:22-10.0.0.1:60060.service: Deactivated successfully. Nov 12 20:56:44.720713 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:56:44.721618 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:56:44.722976 systemd-logind[1540]: Removed session 6. Nov 12 20:56:44.748558 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 60068 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:56:44.750673 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:56:44.755317 systemd-logind[1540]: New session 7 of user core. Nov 12 20:56:44.765447 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:56:44.820082 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:56:44.820488 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:56:45.362497 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:56:45.362683 (dockerd)[1785]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:56:45.981624 dockerd[1785]: time="2024-11-12T20:56:45.981541659Z" level=info msg="Starting up" Nov 12 20:56:48.196545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:56:48.211313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:48.387266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:48.391712 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:56:48.952830 kubelet[1821]: E1112 20:56:48.952703 1821 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:56:48.960663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:56:48.960923 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:56:49.136505 dockerd[1785]: time="2024-11-12T20:56:49.136444014Z" level=info msg="Loading containers: start." Nov 12 20:56:49.395174 kernel: Initializing XFRM netlink socket Nov 12 20:56:49.473050 systemd-networkd[1246]: docker0: Link UP Nov 12 20:56:49.498114 dockerd[1785]: time="2024-11-12T20:56:49.498049230Z" level=info msg="Loading containers: done." Nov 12 20:56:49.517506 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1017815662-merged.mount: Deactivated successfully. 
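The "Scheduled restart job, restart counter is at 1" entry above is systemd's Restart= machinery re-queuing kubelet roughly ten seconds after its exit-code failure, consistent with a RestartSec= on the order of 10s. The directives that produce this behavior look like the following (a generic sketch; the exact values in this host's kubelet unit are not shown in the log):

    [Unit]
    # StartLimitIntervalSec= and StartLimitBurst= (not visible here)
    # bound how many restarts are attempted before systemd gives up.

    [Service]
    Restart=always
    RestartSec=10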
Nov 12 20:56:49.520555 dockerd[1785]: time="2024-11-12T20:56:49.520478515Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:56:49.520750 dockerd[1785]: time="2024-11-12T20:56:49.520615121Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:56:49.520813 dockerd[1785]: time="2024-11-12T20:56:49.520762948Z" level=info msg="Daemon has completed initialization" Nov 12 20:56:49.567002 dockerd[1785]: time="2024-11-12T20:56:49.566897443Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:56:49.567215 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:56:50.474836 containerd[1559]: time="2024-11-12T20:56:50.474777950Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 20:56:51.478495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559976746.mount: Deactivated successfully. Nov 12 20:56:53.068132 containerd[1559]: time="2024-11-12T20:56:53.068053322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:53.068754 containerd[1559]: time="2024-11-12T20:56:53.068673686Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799" Nov 12 20:56:53.070083 containerd[1559]: time="2024-11-12T20:56:53.070047294Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:53.073465 containerd[1559]: time="2024-11-12T20:56:53.073436274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:53.075265 containerd[1559]: time="2024-11-12T20:56:53.075188673Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 2.600363544s" Nov 12 20:56:53.075319 containerd[1559]: time="2024-11-12T20:56:53.075266529Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 20:56:53.101381 containerd[1559]: time="2024-11-12T20:56:53.101328652Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 20:56:55.185652 containerd[1559]: time="2024-11-12T20:56:55.185537345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:55.188357 containerd[1559]: time="2024-11-12T20:56:55.188285452Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299" Nov 12 20:56:55.189990 containerd[1559]: time="2024-11-12T20:56:55.189941851Z" level=info msg="ImageCreate event 
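Once dockerd logs "API listen on /run/docker.sock", the engine is serving its HTTP API over that unix socket, the same path docker.socket was listening on earlier in the boot for socket activation. A quick liveness probe (a sketch):

    curl --unix-socket /run/docker.sock http://localhost/_ping   # prints OK
    docker version    # the CLI talks to the same socket by default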
name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:55.192912 containerd[1559]: time="2024-11-12T20:56:55.192856802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:55.193930 containerd[1559]: time="2024-11-12T20:56:55.193873590Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 2.092501676s" Nov 12 20:56:55.193930 containerd[1559]: time="2024-11-12T20:56:55.193915088Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 20:56:55.222308 containerd[1559]: time="2024-11-12T20:56:55.222270064Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 20:56:56.644291 containerd[1559]: time="2024-11-12T20:56:56.644223516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:56.645038 containerd[1559]: time="2024-11-12T20:56:56.645000294Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660" Nov 12 20:56:56.646174 containerd[1559]: time="2024-11-12T20:56:56.646122289Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:56.650925 containerd[1559]: time="2024-11-12T20:56:56.650870511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:56.652211 containerd[1559]: time="2024-11-12T20:56:56.652164710Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.429856764s" Nov 12 20:56:56.652211 containerd[1559]: time="2024-11-12T20:56:56.652205817Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 12 20:56:56.676465 containerd[1559]: time="2024-11-12T20:56:56.676416255Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 20:56:57.752879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441240690.mount: Deactivated successfully. 
Nov 12 20:56:58.520369 containerd[1559]: time="2024-11-12T20:56:58.520291804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:58.521340 containerd[1559]: time="2024-11-12T20:56:58.521286711Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816" Nov 12 20:56:58.522503 containerd[1559]: time="2024-11-12T20:56:58.522461065Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:58.525018 containerd[1559]: time="2024-11-12T20:56:58.524950597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:58.525771 containerd[1559]: time="2024-11-12T20:56:58.525718178Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 1.849255476s" Nov 12 20:56:58.525771 containerd[1559]: time="2024-11-12T20:56:58.525756239Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 12 20:56:58.563226 containerd[1559]: time="2024-11-12T20:56:58.563183121Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:56:59.211332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:56:59.219476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:59.246725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170159413.mount: Deactivated successfully. Nov 12 20:56:59.371520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:59.376876 (kubelet)[2072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:56:59.547548 kubelet[2072]: E1112 20:56:59.547337 2072 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:56:59.552485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:56:59.552799 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
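Every kubelet crash so far is the same error: the unit starts kubelet with --config pointing at /var/lib/kubelet/config.yaml, and that file does not exist until the node is bootstrapped (typically kubeadm writes it during init/join, which is what eventually happens below). The file holds a KubeletConfiguration object; a minimal hand-written sketch of the format, with values chosen to match what the later kubelet startup logs report for this host:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs       # matches the CgroupDriver "cgroupfs" logged below
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt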
Nov 12 20:57:00.887853 containerd[1559]: time="2024-11-12T20:57:00.887776675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:00.888796 containerd[1559]: time="2024-11-12T20:57:00.888750422Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 20:57:00.890239 containerd[1559]: time="2024-11-12T20:57:00.890203710Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:00.893304 containerd[1559]: time="2024-11-12T20:57:00.893274794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:00.894607 containerd[1559]: time="2024-11-12T20:57:00.894573070Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.331323203s" Nov 12 20:57:00.894663 containerd[1559]: time="2024-11-12T20:57:00.894617964Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:57:00.917602 containerd[1559]: time="2024-11-12T20:57:00.917539994Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 20:57:01.456891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4192434728.mount: Deactivated successfully. 
Nov 12 20:57:01.462992 containerd[1559]: time="2024-11-12T20:57:01.462938360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:01.463912 containerd[1559]: time="2024-11-12T20:57:01.463819283Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 12 20:57:01.465354 containerd[1559]: time="2024-11-12T20:57:01.465294021Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:01.467697 containerd[1559]: time="2024-11-12T20:57:01.467653479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:01.468578 containerd[1559]: time="2024-11-12T20:57:01.468541396Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 550.9475ms" Nov 12 20:57:01.468629 containerd[1559]: time="2024-11-12T20:57:01.468580319Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 20:57:01.491159 containerd[1559]: time="2024-11-12T20:57:01.491104321Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 20:57:02.235854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount490214893.mount: Deactivated successfully. Nov 12 20:57:05.103653 containerd[1559]: time="2024-11-12T20:57:05.103583520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:05.104406 containerd[1559]: time="2024-11-12T20:57:05.104318970Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Nov 12 20:57:05.105765 containerd[1559]: time="2024-11-12T20:57:05.105727754Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:05.108795 containerd[1559]: time="2024-11-12T20:57:05.108760085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:05.110271 containerd[1559]: time="2024-11-12T20:57:05.110231286Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.619067504s" Nov 12 20:57:05.110313 containerd[1559]: time="2024-11-12T20:57:05.110277884Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 20:57:07.686698 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
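The stop here, the systemd reload requested from the SSH session (session-7.scope), and the restart that follows are the standard workflow after dropping in new unit configuration: re-read the unit files, then restart the service so it picks them up. By hand, that is (a sketch):

    systemctl daemon-reload     # the "Reloading requested from client PID ..." entry
    systemctl restart kubelet   # stop + start under the new configuration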
Nov 12 20:57:07.697401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:57:07.716877 systemd[1]: Reloading requested from client PID 2265 ('systemctl') (unit session-7.scope)... Nov 12 20:57:07.716895 systemd[1]: Reloading... Nov 12 20:57:07.793306 zram_generator::config[2307]: No configuration found. Nov 12 20:57:08.072672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:57:08.145987 systemd[1]: Reloading finished in 428 ms. Nov 12 20:57:08.198015 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 20:57:08.198120 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 20:57:08.198499 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:57:08.200653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:57:08.349748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:57:08.355272 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:57:08.405199 kubelet[2364]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:57:08.405199 kubelet[2364]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:57:08.405199 kubelet[2364]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:57:08.405765 kubelet[2364]: I1112 20:57:08.405409 2364 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:57:08.568152 kubelet[2364]: I1112 20:57:08.568074 2364 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:57:08.568152 kubelet[2364]: I1112 20:57:08.568121 2364 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:57:08.568458 kubelet[2364]: I1112 20:57:08.568432 2364 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:57:08.585319 kubelet[2364]: E1112 20:57:08.585268 2364 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:08.588220 kubelet[2364]: I1112 20:57:08.588188 2364 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:57:08.602272 kubelet[2364]: I1112 20:57:08.602160 2364 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:57:08.603263 kubelet[2364]: I1112 20:57:08.603238 2364 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:57:08.603440 kubelet[2364]: I1112 20:57:08.603417 2364 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:57:08.603538 kubelet[2364]: I1112 20:57:08.603445 2364 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:57:08.603538 kubelet[2364]: I1112 20:57:08.603454 2364 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:57:08.603590 kubelet[2364]: I1112 20:57:08.603582 2364 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:57:08.603698 kubelet[2364]: I1112 20:57:08.603680 2364 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:57:08.603698 kubelet[2364]: I1112 20:57:08.603697 2364 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:57:08.603746 kubelet[2364]: I1112 20:57:08.603740 2364 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:57:08.603769 kubelet[2364]: I1112 20:57:08.603763 2364 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:57:08.605657 kubelet[2364]: W1112 20:57:08.605513 2364 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:08.605657 kubelet[2364]: E1112 20:57:08.605590 2364 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:08.606032 kubelet[2364]: I1112 20:57:08.606014 2364 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:57:08.606518 kubelet[2364]: W1112 20:57:08.606388 2364 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:08.606518 kubelet[2364]: E1112 20:57:08.606441 2364 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:08.609647 kubelet[2364]: I1112 20:57:08.609607 2364 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:57:08.611153 kubelet[2364]: W1112 20:57:08.611116 2364 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:57:08.612174 kubelet[2364]: I1112 20:57:08.611981 2364 server.go:1256] "Started kubelet" Nov 12 20:57:08.612370 kubelet[2364]: I1112 20:57:08.612298 2364 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:57:08.612409 kubelet[2364]: I1112 20:57:08.612396 2364 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:57:08.613214 kubelet[2364]: I1112 20:57:08.612667 2364 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:57:08.613411 kubelet[2364]: I1112 20:57:08.613386 2364 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:57:08.616082 kubelet[2364]: I1112 20:57:08.614272 2364 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:57:08.616082 kubelet[2364]: I1112 20:57:08.614452 2364 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:57:08.616082 kubelet[2364]: I1112 20:57:08.614507 2364 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:57:08.616082 kubelet[2364]: I1112 20:57:08.614553 2364 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:57:08.616082 kubelet[2364]: W1112 20:57:08.614832 2364 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:08.616082 kubelet[2364]: E1112 20:57:08.614863 2364 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:08.616082 kubelet[2364]: E1112 20:57:08.615079 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="200ms" Nov 12 20:57:08.616082 kubelet[2364]: I1112 20:57:08.615936 2364 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:57:08.616082 kubelet[2364]: I1112 20:57:08.616020 2364 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:57:08.617871 kubelet[2364]: E1112 20:57:08.617818 2364 kubelet.go:1462] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:57:08.617973 kubelet[2364]: I1112 20:57:08.617931 2364 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:57:08.620850 kubelet[2364]: E1112 20:57:08.620806 2364 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.153:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.153:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1807541476881a20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:57:08.611955232 +0000 UTC m=+0.251916758,LastTimestamp:2024-11-12 20:57:08.611955232 +0000 UTC m=+0.251916758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 20:57:08.634849 kubelet[2364]: I1112 20:57:08.634810 2364 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:57:08.637711 kubelet[2364]: I1112 20:57:08.637690 2364 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:57:08.637762 kubelet[2364]: I1112 20:57:08.637735 2364 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:57:08.637762 kubelet[2364]: I1112 20:57:08.637760 2364 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:57:08.637843 kubelet[2364]: E1112 20:57:08.637815 2364 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:57:08.638364 kubelet[2364]: W1112 20:57:08.638286 2364 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:08.638364 kubelet[2364]: E1112 20:57:08.638314 2364 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:08.645106 kubelet[2364]: I1112 20:57:08.645072 2364 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:57:08.645106 kubelet[2364]: I1112 20:57:08.645094 2364 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:57:08.645250 kubelet[2364]: I1112 20:57:08.645120 2364 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:57:08.716296 kubelet[2364]: I1112 20:57:08.716276 2364 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:57:08.716664 kubelet[2364]: E1112 20:57:08.716645 2364 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Nov 12 20:57:08.738760 kubelet[2364]: E1112 20:57:08.738735 2364 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:57:08.816634 kubelet[2364]: E1112 
20:57:08.816593 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="400ms" Nov 12 20:57:08.918542 kubelet[2364]: I1112 20:57:08.918486 2364 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:57:08.918966 kubelet[2364]: E1112 20:57:08.918938 2364 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Nov 12 20:57:08.939059 kubelet[2364]: E1112 20:57:08.939023 2364 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:57:09.218242 kubelet[2364]: E1112 20:57:09.218064 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="800ms" Nov 12 20:57:09.320700 kubelet[2364]: I1112 20:57:09.320647 2364 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:57:09.320984 kubelet[2364]: E1112 20:57:09.320958 2364 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Nov 12 20:57:09.322436 kubelet[2364]: I1112 20:57:09.322406 2364 policy_none.go:49] "None policy: Start" Nov 12 20:57:09.323074 kubelet[2364]: I1112 20:57:09.323042 2364 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:57:09.323074 kubelet[2364]: I1112 20:57:09.323067 2364 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:57:09.339236 kubelet[2364]: E1112 20:57:09.339201 2364 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:57:09.343020 kubelet[2364]: I1112 20:57:09.342976 2364 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:57:09.343801 kubelet[2364]: I1112 20:57:09.343348 2364 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:57:09.345337 kubelet[2364]: E1112 20:57:09.345291 2364 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 20:57:09.410038 kubelet[2364]: W1112 20:57:09.409952 2364 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:09.410038 kubelet[2364]: E1112 20:57:09.410022 2364 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:09.770485 kubelet[2364]: W1112 20:57:09.770421 2364 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: 
connect: connection refused Nov 12 20:57:09.770485 kubelet[2364]: E1112 20:57:09.770482 2364 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:09.815937 kubelet[2364]: W1112 20:57:09.815901 2364 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:09.815976 kubelet[2364]: E1112 20:57:09.815939 2364 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:09.885590 kubelet[2364]: E1112 20:57:09.885566 2364 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.153:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.153:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1807541476881a20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:57:08.611955232 +0000 UTC m=+0.251916758,LastTimestamp:2024-11-12 20:57:08.611955232 +0000 UTC m=+0.251916758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 20:57:10.018723 kubelet[2364]: E1112 20:57:10.018678 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="1.6s" Nov 12 20:57:10.122630 kubelet[2364]: I1112 20:57:10.122524 2364 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:57:10.122988 kubelet[2364]: E1112 20:57:10.122947 2364 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Nov 12 20:57:10.140038 kubelet[2364]: I1112 20:57:10.140010 2364 topology_manager.go:215] "Topology Admit Handler" podUID="143dc66c43b329cdbb12c6f996ddb8e4" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 20:57:10.140939 kubelet[2364]: I1112 20:57:10.140908 2364 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 20:57:10.141956 kubelet[2364]: I1112 20:57:10.141934 2364 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 20:57:10.193950 kubelet[2364]: W1112 20:57:10.193907 2364 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: 
connect: connection refused Nov 12 20:57:10.193950 kubelet[2364]: E1112 20:57:10.193949 2364 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:10.223279 kubelet[2364]: I1112 20:57:10.223248 2364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:57:10.223359 kubelet[2364]: I1112 20:57:10.223301 2364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:57:10.223359 kubelet[2364]: I1112 20:57:10.223327 2364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:57:10.223359 kubelet[2364]: I1112 20:57:10.223356 2364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:57:10.223448 kubelet[2364]: I1112 20:57:10.223396 2364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:57:10.223448 kubelet[2364]: I1112 20:57:10.223422 2364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/143dc66c43b329cdbb12c6f996ddb8e4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"143dc66c43b329cdbb12c6f996ddb8e4\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:57:10.223497 kubelet[2364]: I1112 20:57:10.223450 2364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/143dc66c43b329cdbb12c6f996ddb8e4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"143dc66c43b329cdbb12c6f996ddb8e4\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:57:10.223497 kubelet[2364]: I1112 20:57:10.223476 2364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 12 20:57:10.223596 kubelet[2364]: I1112 20:57:10.223541 2364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/143dc66c43b329cdbb12c6f996ddb8e4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"143dc66c43b329cdbb12c6f996ddb8e4\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:57:10.445919 kubelet[2364]: E1112 20:57:10.445831 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:10.446446 kubelet[2364]: E1112 20:57:10.446009 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:10.446635 containerd[1559]: time="2024-11-12T20:57:10.446588243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:143dc66c43b329cdbb12c6f996ddb8e4,Namespace:kube-system,Attempt:0,}" Nov 12 20:57:10.446891 containerd[1559]: time="2024-11-12T20:57:10.446600145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}" Nov 12 20:57:10.447661 kubelet[2364]: E1112 20:57:10.447638 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:10.447936 containerd[1559]: time="2024-11-12T20:57:10.447902115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}" Nov 12 20:57:10.671427 kubelet[2364]: E1112 20:57:10.671376 2364 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:11.249793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691470687.mount: Deactivated successfully. 
Nov 12 20:57:11.255160 containerd[1559]: time="2024-11-12T20:57:11.255060782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:57:11.256848 containerd[1559]: time="2024-11-12T20:57:11.256783226Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:57:11.257970 containerd[1559]: time="2024-11-12T20:57:11.257923881Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:57:11.258883 containerd[1559]: time="2024-11-12T20:57:11.258848239Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:57:11.259869 containerd[1559]: time="2024-11-12T20:57:11.259832763Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:57:11.260806 containerd[1559]: time="2024-11-12T20:57:11.260729349Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:57:11.261582 containerd[1559]: time="2024-11-12T20:57:11.261541310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:57:11.263433 containerd[1559]: time="2024-11-12T20:57:11.263377423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:57:11.264801 containerd[1559]: time="2024-11-12T20:57:11.264758942Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 816.808714ms" Nov 12 20:57:11.266018 containerd[1559]: time="2024-11-12T20:57:11.265986624Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 819.229615ms" Nov 12 20:57:11.267406 containerd[1559]: time="2024-11-12T20:57:11.267373562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 820.695847ms" Nov 12 20:57:11.318076 kubelet[2364]: W1112 20:57:11.318028 2364 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 
20:57:11.318076 kubelet[2364]: E1112 20:57:11.318076 2364 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Nov 12 20:57:11.422442 containerd[1559]: time="2024-11-12T20:57:11.422320414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:11.422442 containerd[1559]: time="2024-11-12T20:57:11.422368677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:11.422442 containerd[1559]: time="2024-11-12T20:57:11.422386641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:11.422442 containerd[1559]: time="2024-11-12T20:57:11.422096212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:11.422442 containerd[1559]: time="2024-11-12T20:57:11.422242464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:11.422442 containerd[1559]: time="2024-11-12T20:57:11.422258465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:11.422442 containerd[1559]: time="2024-11-12T20:57:11.422344711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:11.423293 containerd[1559]: time="2024-11-12T20:57:11.423221769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:11.425047 containerd[1559]: time="2024-11-12T20:57:11.424888945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:11.425047 containerd[1559]: time="2024-11-12T20:57:11.424959011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:11.425047 containerd[1559]: time="2024-11-12T20:57:11.424975393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:11.425324 containerd[1559]: time="2024-11-12T20:57:11.425066207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:11.484216 containerd[1559]: time="2024-11-12T20:57:11.484122773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d71ee11ac3edae95a4448ead760807498de838e2b5b83ad06460dad31048515\"" Nov 12 20:57:11.485109 kubelet[2364]: E1112 20:57:11.485086 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:11.489383 containerd[1559]: time="2024-11-12T20:57:11.489341975Z" level=info msg="CreateContainer within sandbox \"7d71ee11ac3edae95a4448ead760807498de838e2b5b83ad06460dad31048515\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:57:11.492097 containerd[1559]: time="2024-11-12T20:57:11.492063402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"292e09d63bebc13b027abcdcf01e0397ec98b7914ee0003c323b7aeb0c008092\"" Nov 12 20:57:11.492744 kubelet[2364]: E1112 20:57:11.492712 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:11.495267 containerd[1559]: time="2024-11-12T20:57:11.495122157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:143dc66c43b329cdbb12c6f996ddb8e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"95a9fa893d984def4b2ec68bc43519dadf421c47f0b6949ea4057e1bb922be64\"" Nov 12 20:57:11.495845 kubelet[2364]: E1112 20:57:11.495807 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:11.496305 containerd[1559]: time="2024-11-12T20:57:11.496275195Z" level=info msg="CreateContainer within sandbox \"292e09d63bebc13b027abcdcf01e0397ec98b7914ee0003c323b7aeb0c008092\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:57:11.498075 containerd[1559]: time="2024-11-12T20:57:11.498043197Z" level=info msg="CreateContainer within sandbox \"95a9fa893d984def4b2ec68bc43519dadf421c47f0b6949ea4057e1bb922be64\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:57:11.514067 containerd[1559]: time="2024-11-12T20:57:11.513803821Z" level=info msg="CreateContainer within sandbox \"7d71ee11ac3edae95a4448ead760807498de838e2b5b83ad06460dad31048515\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ccd5a655881f9134619e7c2c6af1f3760ca6eb0cbd0d197fac2514872bb7ace1\"" Nov 12 20:57:11.514523 containerd[1559]: time="2024-11-12T20:57:11.514488918Z" level=info msg="StartContainer for \"ccd5a655881f9134619e7c2c6af1f3760ca6eb0cbd0d197fac2514872bb7ace1\"" Nov 12 20:57:11.526484 containerd[1559]: time="2024-11-12T20:57:11.526427879Z" level=info msg="CreateContainer within sandbox \"292e09d63bebc13b027abcdcf01e0397ec98b7914ee0003c323b7aeb0c008092\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0feae145b9f1ab91b9be5c4fa3cc54e7f0fdfe0ed1e99508f5c80498ec19d1cd\"" Nov 12 20:57:11.527007 containerd[1559]: time="2024-11-12T20:57:11.526976044Z" level=info msg="StartContainer for 
\"0feae145b9f1ab91b9be5c4fa3cc54e7f0fdfe0ed1e99508f5c80498ec19d1cd\"" Nov 12 20:57:11.533082 containerd[1559]: time="2024-11-12T20:57:11.533021055Z" level=info msg="CreateContainer within sandbox \"95a9fa893d984def4b2ec68bc43519dadf421c47f0b6949ea4057e1bb922be64\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e5b87adee0f85755da4ff1a70687bd392baa6caa637e3efd2ed4ba046e09e81e\"" Nov 12 20:57:11.534570 containerd[1559]: time="2024-11-12T20:57:11.534475964Z" level=info msg="StartContainer for \"e5b87adee0f85755da4ff1a70687bd392baa6caa637e3efd2ed4ba046e09e81e\"" Nov 12 20:57:11.619889 kubelet[2364]: E1112 20:57:11.619831 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="3.2s" Nov 12 20:57:11.672071 containerd[1559]: time="2024-11-12T20:57:11.672008441Z" level=info msg="StartContainer for \"e5b87adee0f85755da4ff1a70687bd392baa6caa637e3efd2ed4ba046e09e81e\" returns successfully" Nov 12 20:57:11.672273 containerd[1559]: time="2024-11-12T20:57:11.672021946Z" level=info msg="StartContainer for \"ccd5a655881f9134619e7c2c6af1f3760ca6eb0cbd0d197fac2514872bb7ace1\" returns successfully" Nov 12 20:57:11.672273 containerd[1559]: time="2024-11-12T20:57:11.672026957Z" level=info msg="StartContainer for \"0feae145b9f1ab91b9be5c4fa3cc54e7f0fdfe0ed1e99508f5c80498ec19d1cd\" returns successfully" Nov 12 20:57:11.679369 kubelet[2364]: E1112 20:57:11.679342 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:11.681791 kubelet[2364]: E1112 20:57:11.681775 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:11.724726 kubelet[2364]: I1112 20:57:11.724694 2364 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:57:12.683665 kubelet[2364]: E1112 20:57:12.683630 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:12.785165 kubelet[2364]: E1112 20:57:12.782386 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:13.051941 kubelet[2364]: I1112 20:57:13.051565 2364 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 20:57:13.059796 kubelet[2364]: E1112 20:57:13.059763 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:13.160550 kubelet[2364]: E1112 20:57:13.160445 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:13.261015 kubelet[2364]: E1112 20:57:13.260954 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:13.361969 kubelet[2364]: E1112 20:57:13.361819 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:13.462888 kubelet[2364]: E1112 20:57:13.462837 2364 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:13.563396 kubelet[2364]: E1112 20:57:13.563352 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:13.664538 kubelet[2364]: E1112 20:57:13.664487 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:13.686338 kubelet[2364]: E1112 20:57:13.686301 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:13.765320 kubelet[2364]: E1112 20:57:13.765276 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:13.867182 kubelet[2364]: E1112 20:57:13.866428 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:13.968314 kubelet[2364]: E1112 20:57:13.967990 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:14.068436 kubelet[2364]: E1112 20:57:14.068358 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:14.169231 kubelet[2364]: E1112 20:57:14.169174 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:14.269683 kubelet[2364]: E1112 20:57:14.269462 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:14.369994 kubelet[2364]: E1112 20:57:14.369941 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:14.470920 kubelet[2364]: E1112 20:57:14.470867 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:14.571636 kubelet[2364]: E1112 20:57:14.571490 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:14.672645 kubelet[2364]: E1112 20:57:14.672595 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:14.773168 kubelet[2364]: E1112 20:57:14.773099 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:14.873947 kubelet[2364]: E1112 20:57:14.873823 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:14.974611 kubelet[2364]: E1112 20:57:14.974546 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:15.075076 kubelet[2364]: E1112 20:57:15.075023 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:15.175172 kubelet[2364]: E1112 20:57:15.175122 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:15.275682 kubelet[2364]: E1112 20:57:15.275620 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:15.376445 kubelet[2364]: E1112 20:57:15.376399 2364 kubelet_node_status.go:462] "Error getting the current node from 
lister" err="node \"localhost\" not found" Nov 12 20:57:15.477301 kubelet[2364]: E1112 20:57:15.477163 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:15.577604 kubelet[2364]: E1112 20:57:15.577554 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:15.677945 kubelet[2364]: E1112 20:57:15.677891 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:15.732708 kubelet[2364]: E1112 20:57:15.732581 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:15.779049 kubelet[2364]: E1112 20:57:15.778986 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:15.879658 kubelet[2364]: E1112 20:57:15.879607 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:15.980309 kubelet[2364]: E1112 20:57:15.980239 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:16.081058 kubelet[2364]: E1112 20:57:16.080905 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:16.181763 kubelet[2364]: E1112 20:57:16.181704 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:16.282434 kubelet[2364]: E1112 20:57:16.282375 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:16.383246 kubelet[2364]: E1112 20:57:16.383210 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:16.484157 kubelet[2364]: E1112 20:57:16.484102 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:16.584593 kubelet[2364]: E1112 20:57:16.584466 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:16.685673 kubelet[2364]: E1112 20:57:16.685487 2364 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:57:17.094447 systemd[1]: Reloading requested from client PID 2649 ('systemctl') (unit session-7.scope)... Nov 12 20:57:17.094465 systemd[1]: Reloading... Nov 12 20:57:17.173173 zram_generator::config[2688]: No configuration found. Nov 12 20:57:17.287589 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:57:17.351304 kubelet[2364]: E1112 20:57:17.351174 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:17.368225 systemd[1]: Reloading finished in 273 ms. Nov 12 20:57:17.405108 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:57:17.430038 systemd[1]: kubelet.service: Deactivated successfully. 
Nov 12 20:57:17.430642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:57:17.439678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:57:17.624470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:57:17.630526 (kubelet)[2743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:57:17.731257 kubelet[2743]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:57:17.731257 kubelet[2743]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:57:17.731257 kubelet[2743]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:57:17.731750 kubelet[2743]: I1112 20:57:17.731293 2743 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:57:17.736860 kubelet[2743]: I1112 20:57:17.736823 2743 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:57:17.736860 kubelet[2743]: I1112 20:57:17.736857 2743 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:57:17.737182 kubelet[2743]: I1112 20:57:17.737161 2743 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:57:17.739750 kubelet[2743]: I1112 20:57:17.739703 2743 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:57:17.742937 kubelet[2743]: I1112 20:57:17.742871 2743 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:57:17.764318 kubelet[2743]: I1112 20:57:17.764286 2743 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:57:17.775680 kubelet[2743]: I1112 20:57:17.775655 2743 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:57:17.775872 kubelet[2743]: I1112 20:57:17.775846 2743 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:57:17.775953 kubelet[2743]: I1112 20:57:17.775883 2743 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:57:17.775953 kubelet[2743]: I1112 20:57:17.775893 2743 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:57:17.775953 kubelet[2743]: I1112 20:57:17.775935 2743 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:57:17.776055 kubelet[2743]: I1112 20:57:17.776036 2743 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:57:17.776055 kubelet[2743]: I1112 20:57:17.776055 2743 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:57:17.776111 kubelet[2743]: I1112 20:57:17.776100 2743 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:57:17.776153 kubelet[2743]: I1112 20:57:17.776122 2743 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:57:17.778757 kubelet[2743]: I1112 20:57:17.777018 2743 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:57:17.778757 kubelet[2743]: I1112 20:57:17.777394 2743 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:57:17.778757 kubelet[2743]: I1112 20:57:17.777947 2743 server.go:1256] "Started kubelet" Nov 12 20:57:17.778757 kubelet[2743]: I1112 20:57:17.778325 2743 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:57:17.779418 kubelet[2743]: I1112 20:57:17.779126 2743 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:57:18.046903 kubelet[2743]: I1112 20:57:18.046211 2743 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:57:18.050720 kubelet[2743]: I1112 20:57:18.050692 2743 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Nov 12 20:57:18.051163 kubelet[2743]: I1112 20:57:18.050982 2743 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:57:18.052109 kubelet[2743]: I1112 20:57:18.052089 2743 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:57:18.052845 kubelet[2743]: I1112 20:57:18.052830 2743 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:57:18.053057 kubelet[2743]: I1112 20:57:18.053043 2743 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:57:18.054846 kubelet[2743]: I1112 20:57:18.053762 2743 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:57:18.054846 kubelet[2743]: I1112 20:57:18.053856 2743 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:57:18.059266 kubelet[2743]: I1112 20:57:18.059235 2743 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:57:18.060094 kubelet[2743]: I1112 20:57:18.060070 2743 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:57:18.061654 kubelet[2743]: I1112 20:57:18.061638 2743 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:57:18.063078 kubelet[2743]: I1112 20:57:18.063038 2743 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:57:18.063078 kubelet[2743]: I1112 20:57:18.063076 2743 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:57:18.063225 kubelet[2743]: E1112 20:57:18.063157 2743 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:57:18.110500 kubelet[2743]: I1112 20:57:18.110458 2743 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:57:18.110500 kubelet[2743]: I1112 20:57:18.110482 2743 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:57:18.110500 kubelet[2743]: I1112 20:57:18.110516 2743 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:57:18.110703 kubelet[2743]: I1112 20:57:18.110666 2743 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:57:18.110703 kubelet[2743]: I1112 20:57:18.110685 2743 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:57:18.110703 kubelet[2743]: I1112 20:57:18.110694 2743 policy_none.go:49] "None policy: Start" Nov 12 20:57:18.111432 kubelet[2743]: I1112 20:57:18.111406 2743 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:57:18.111481 kubelet[2743]: I1112 20:57:18.111447 2743 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:57:18.111709 kubelet[2743]: I1112 20:57:18.111684 2743 state_mem.go:75] "Updated machine memory state" Nov 12 20:57:18.113433 kubelet[2743]: I1112 20:57:18.113414 2743 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:57:18.113720 kubelet[2743]: I1112 20:57:18.113694 2743 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:57:18.157912 kubelet[2743]: I1112 20:57:18.157872 2743 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:57:18.164266 kubelet[2743]: I1112 20:57:18.164218 2743 topology_manager.go:215] "Topology Admit Handler" 
podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 20:57:18.164352 kubelet[2743]: I1112 20:57:18.164340 2743 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 20:57:18.164411 kubelet[2743]: I1112 20:57:18.164389 2743 topology_manager.go:215] "Topology Admit Handler" podUID="143dc66c43b329cdbb12c6f996ddb8e4" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 20:57:18.254365 kubelet[2743]: I1112 20:57:18.254318 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:57:18.254365 kubelet[2743]: I1112 20:57:18.254364 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:57:18.254527 kubelet[2743]: I1112 20:57:18.254388 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:57:18.254527 kubelet[2743]: I1112 20:57:18.254451 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/143dc66c43b329cdbb12c6f996ddb8e4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"143dc66c43b329cdbb12c6f996ddb8e4\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:57:18.254527 kubelet[2743]: I1112 20:57:18.254495 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:57:18.254652 kubelet[2743]: I1112 20:57:18.254610 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:57:18.254710 kubelet[2743]: I1112 20:57:18.254692 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/143dc66c43b329cdbb12c6f996ddb8e4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"143dc66c43b329cdbb12c6f996ddb8e4\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:57:18.254747 kubelet[2743]: I1112 20:57:18.254729 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/143dc66c43b329cdbb12c6f996ddb8e4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"143dc66c43b329cdbb12c6f996ddb8e4\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:57:18.254798 kubelet[2743]: I1112 20:57:18.254755 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:57:18.315602 sudo[2778]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 12 20:57:18.315955 sudo[2778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 12 20:57:18.350735 kubelet[2743]: I1112 20:57:18.350685 2743 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 20:57:18.350860 kubelet[2743]: I1112 20:57:18.350779 2743 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 20:57:18.351579 kubelet[2743]: E1112 20:57:18.350897 2743 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 20:57:18.642486 kubelet[2743]: E1112 20:57:18.642447 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:18.642486 kubelet[2743]: E1112 20:57:18.642472 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:18.652586 kubelet[2743]: E1112 20:57:18.652552 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:18.778387 kubelet[2743]: I1112 20:57:18.777460 2743 apiserver.go:52] "Watching apiserver" Nov 12 20:57:18.780397 sudo[2778]: pam_unix(sudo:session): session closed for user root Nov 12 20:57:18.853331 kubelet[2743]: I1112 20:57:18.853270 2743 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:57:18.863620 kubelet[2743]: I1112 20:57:18.863582 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.863529855 podStartE2EDuration="1.863529855s" podCreationTimestamp="2024-11-12 20:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:57:18.863007388 +0000 UTC m=+1.225135210" watchObservedRunningTime="2024-11-12 20:57:18.863529855 +0000 UTC m=+1.225657677" Nov 12 20:57:18.977202 kubelet[2743]: I1112 20:57:18.976755 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.976705764 podStartE2EDuration="976.705764ms" podCreationTimestamp="2024-11-12 20:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:57:18.976433836 +0000 UTC m=+1.338561658" watchObservedRunningTime="2024-11-12 20:57:18.976705764 +0000 UTC m=+1.338833587" Nov 12 20:57:18.987134 
kubelet[2743]: I1112 20:57:18.987093 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.987044139 podStartE2EDuration="987.044139ms" podCreationTimestamp="2024-11-12 20:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:57:18.986776819 +0000 UTC m=+1.348904641" watchObservedRunningTime="2024-11-12 20:57:18.987044139 +0000 UTC m=+1.349171961" Nov 12 20:57:19.080282 kubelet[2743]: E1112 20:57:19.080184 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:19.080506 kubelet[2743]: E1112 20:57:19.080381 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:19.089703 kubelet[2743]: E1112 20:57:19.089610 2743 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 12 20:57:19.090661 kubelet[2743]: E1112 20:57:19.090562 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:20.081687 kubelet[2743]: E1112 20:57:20.081625 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:20.082106 kubelet[2743]: E1112 20:57:20.081947 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:20.289329 update_engine[1546]: I20241112 20:57:20.289227 1546 update_attempter.cc:509] Updating boot flags... Nov 12 20:57:20.407218 sudo[1767]: pam_unix(sudo:session): session closed for user root Nov 12 20:57:20.409095 sshd[1760]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:20.413447 systemd[1]: sshd@6-10.0.0.153:22-10.0.0.1:60068.service: Deactivated successfully. Nov 12 20:57:20.415723 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:57:20.416416 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:57:20.417320 systemd-logind[1540]: Removed session 7. 
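The pod_startup_latency_tracker records above compute podStartSLOduration as the watch-observed running time minus podCreationTimestamp; with no image pulls recorded (both pull timestamps are the zero value), it reduces to a plain difference, e.g. 20:57:18.863529855 - 20:57:17 = 1.863529855s for kube-apiserver-localhost. A quick Go check of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2024-11-12 20:57:17 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2024-11-12 20:57:18.863529855 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1.863529855, matching podStartSLOduration for
	// kube-system/kube-apiserver-localhost in the log above.
	fmt.Println(observed.Sub(created).Seconds())
}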
Nov 12 20:57:20.621217 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2827) Nov 12 20:57:20.656176 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2826) Nov 12 20:57:20.679544 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2826) Nov 12 20:57:25.839179 kubelet[2743]: E1112 20:57:25.839113 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:26.090899 kubelet[2743]: E1112 20:57:26.090752 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:26.733598 kubelet[2743]: E1112 20:57:26.733552 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:27.092309 kubelet[2743]: E1112 20:57:27.092184 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:28.978231 kubelet[2743]: E1112 20:57:28.978189 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:29.646029 kubelet[2743]: I1112 20:57:29.645984 2743 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:57:29.646531 containerd[1559]: time="2024-11-12T20:57:29.646478770Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
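Once the node object carries a pod CIDR (192.168.0.0/24 here), the kubelet pushes it to the container runtime over the CRI UpdateRuntimeConfig call — the kuberuntime_manager record above and the kubelet_network record just below — while containerd keeps waiting for a CNI config to be dropped before pod networking is complete. A minimal Go sketch of that propagation step, with a hypothetical updater interface standing in for the protobuf-based CRI RuntimeService:

package main

import (
	"fmt"
	"net"
)

// runtimeConfigUpdater is a hypothetical stand-in for the CRI
// RuntimeService.UpdateRuntimeConfig RPC referenced in the log.
type runtimeConfigUpdater interface {
	UpdateRuntimeConfig(podCIDR string) error
}

type fakeRuntime struct{}

func (fakeRuntime) UpdateRuntimeConfig(podCIDR string) error {
	fmt.Println("Updating runtime config through cri with podcidr", podCIDR)
	return nil
}

// syncPodCIDR mirrors the kubelet-side decision: skip if unchanged,
// validate, then hand the new CIDR to the runtime.
func syncPodCIDR(r runtimeConfigUpdater, original, updated string) error {
	if original == updated {
		return nil // nothing to push
	}
	if _, _, err := net.ParseCIDR(updated); err != nil {
		return fmt.Errorf("invalid pod CIDR %q: %w", updated, err)
	}
	fmt.Printf("Updating Pod CIDR originalPodCIDR=%q newPodCIDR=%q\n", original, updated)
	return r.UpdateRuntimeConfig(updated)
}

func main() {
	_ = syncPodCIDR(fakeRuntime{}, "", "192.168.0.0/24")
}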
Nov 12 20:57:29.646947 kubelet[2743]: I1112 20:57:29.646685 2743 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:57:30.729576 kubelet[2743]: I1112 20:57:30.729484 2743 topology_manager.go:215] "Topology Admit Handler" podUID="e36d025e-6968-4351-a17b-0d78d74698a6" podNamespace="kube-system" podName="kube-proxy-fvmlc" Nov 12 20:57:30.734444 kubelet[2743]: I1112 20:57:30.734087 2743 topology_manager.go:215] "Topology Admit Handler" podUID="87e6ecd2-8e03-4e1d-b346-58c0b0524c41" podNamespace="kube-system" podName="cilium-6jn48" Nov 12 20:57:30.822845 kubelet[2743]: I1112 20:57:30.822793 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-xtables-lock\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " pod="kube-system/cilium-6jn48" Nov 12 20:57:30.822845 kubelet[2743]: I1112 20:57:30.822857 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-host-proc-sys-kernel\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " pod="kube-system/cilium-6jn48" Nov 12 20:57:30.823039 kubelet[2743]: I1112 20:57:30.822912 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e36d025e-6968-4351-a17b-0d78d74698a6-xtables-lock\") pod \"kube-proxy-fvmlc\" (UID: \"e36d025e-6968-4351-a17b-0d78d74698a6\") " pod="kube-system/kube-proxy-fvmlc" Nov 12 20:57:30.823039 kubelet[2743]: I1112 20:57:30.822946 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-clustermesh-secrets\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " pod="kube-system/cilium-6jn48" Nov 12 20:57:30.823039 kubelet[2743]: I1112 20:57:30.822983 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e36d025e-6968-4351-a17b-0d78d74698a6-lib-modules\") pod \"kube-proxy-fvmlc\" (UID: \"e36d025e-6968-4351-a17b-0d78d74698a6\") " pod="kube-system/kube-proxy-fvmlc" Nov 12 20:57:30.823039 kubelet[2743]: I1112 20:57:30.823014 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cilium-run\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " pod="kube-system/cilium-6jn48" Nov 12 20:57:30.823178 kubelet[2743]: I1112 20:57:30.823054 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-bpf-maps\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " pod="kube-system/cilium-6jn48" Nov 12 20:57:30.823178 kubelet[2743]: I1112 20:57:30.823091 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cilium-cgroup\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " 
pod="kube-system/cilium-6jn48" Nov 12 20:57:30.823178 kubelet[2743]: I1112 20:57:30.823117 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cni-path\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " pod="kube-system/cilium-6jn48" Nov 12 20:57:30.823178 kubelet[2743]: I1112 20:57:30.823164 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjp68\" (UniqueName: \"kubernetes.io/projected/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-kube-api-access-cjp68\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " pod="kube-system/cilium-6jn48" Nov 12 20:57:30.823282 kubelet[2743]: I1112 20:57:30.823191 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-hostproc\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " pod="kube-system/cilium-6jn48" Nov 12 20:57:30.823282 kubelet[2743]: I1112 20:57:30.823218 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cilium-config-path\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " pod="kube-system/cilium-6jn48" Nov 12 20:57:30.823282 kubelet[2743]: I1112 20:57:30.823241 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-hubble-tls\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " pod="kube-system/cilium-6jn48" Nov 12 20:57:30.823282 kubelet[2743]: I1112 20:57:30.823275 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e36d025e-6968-4351-a17b-0d78d74698a6-kube-proxy\") pod \"kube-proxy-fvmlc\" (UID: \"e36d025e-6968-4351-a17b-0d78d74698a6\") " pod="kube-system/kube-proxy-fvmlc" Nov 12 20:57:30.823382 kubelet[2743]: I1112 20:57:30.823312 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgngz\" (UniqueName: \"kubernetes.io/projected/e36d025e-6968-4351-a17b-0d78d74698a6-kube-api-access-vgngz\") pod \"kube-proxy-fvmlc\" (UID: \"e36d025e-6968-4351-a17b-0d78d74698a6\") " pod="kube-system/kube-proxy-fvmlc" Nov 12 20:57:30.823382 kubelet[2743]: I1112 20:57:30.823350 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-etc-cni-netd\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " pod="kube-system/cilium-6jn48" Nov 12 20:57:30.823430 kubelet[2743]: I1112 20:57:30.823383 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-host-proc-sys-net\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " pod="kube-system/cilium-6jn48" Nov 12 20:57:30.823451 kubelet[2743]: I1112 20:57:30.823430 2743 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-lib-modules\") pod \"cilium-6jn48\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") " pod="kube-system/cilium-6jn48" Nov 12 20:57:31.035255 kubelet[2743]: E1112 20:57:31.034981 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:31.035827 containerd[1559]: time="2024-11-12T20:57:31.035784078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fvmlc,Uid:e36d025e-6968-4351-a17b-0d78d74698a6,Namespace:kube-system,Attempt:0,}" Nov 12 20:57:31.038536 kubelet[2743]: E1112 20:57:31.038515 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:31.038869 containerd[1559]: time="2024-11-12T20:57:31.038836146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6jn48,Uid:87e6ecd2-8e03-4e1d-b346-58c0b0524c41,Namespace:kube-system,Attempt:0,}" Nov 12 20:57:31.404607 kubelet[2743]: I1112 20:57:31.403897 2743 topology_manager.go:215] "Topology Admit Handler" podUID="46b473b4-a8bc-43d6-82fe-829b7f9fc0c6" podNamespace="kube-system" podName="cilium-operator-5cc964979-wkgtv" Nov 12 20:57:31.828824 kubelet[2743]: I1112 20:57:31.427869 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46b473b4-a8bc-43d6-82fe-829b7f9fc0c6-cilium-config-path\") pod \"cilium-operator-5cc964979-wkgtv\" (UID: \"46b473b4-a8bc-43d6-82fe-829b7f9fc0c6\") " pod="kube-system/cilium-operator-5cc964979-wkgtv" Nov 12 20:57:31.828824 kubelet[2743]: I1112 20:57:31.427918 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffnwn\" (UniqueName: \"kubernetes.io/projected/46b473b4-a8bc-43d6-82fe-829b7f9fc0c6-kube-api-access-ffnwn\") pod \"cilium-operator-5cc964979-wkgtv\" (UID: \"46b473b4-a8bc-43d6-82fe-829b7f9fc0c6\") " pod="kube-system/cilium-operator-5cc964979-wkgtv" Nov 12 20:57:31.895832 containerd[1559]: time="2024-11-12T20:57:31.895679102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:31.895832 containerd[1559]: time="2024-11-12T20:57:31.895750596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:31.895832 containerd[1559]: time="2024-11-12T20:57:31.895768981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:31.896757 containerd[1559]: time="2024-11-12T20:57:31.896634847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:31.898796 containerd[1559]: time="2024-11-12T20:57:31.898708526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:31.898796 containerd[1559]: time="2024-11-12T20:57:31.898760044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:31.898989 containerd[1559]: time="2024-11-12T20:57:31.898788507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:31.898989 containerd[1559]: time="2024-11-12T20:57:31.898897944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:31.945206 containerd[1559]: time="2024-11-12T20:57:31.945090328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fvmlc,Uid:e36d025e-6968-4351-a17b-0d78d74698a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ecca105804299be0420b8998a4a675926006ce397d649c31e7aa8a860cb758e\"" Nov 12 20:57:31.946378 containerd[1559]: time="2024-11-12T20:57:31.946311325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6jn48,Uid:87e6ecd2-8e03-4e1d-b346-58c0b0524c41,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\"" Nov 12 20:57:31.946849 kubelet[2743]: E1112 20:57:31.946818 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:31.948381 kubelet[2743]: E1112 20:57:31.948351 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:31.949715 containerd[1559]: time="2024-11-12T20:57:31.949676695Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 12 20:57:31.950152 containerd[1559]: time="2024-11-12T20:57:31.950098372Z" level=info msg="CreateContainer within sandbox \"2ecca105804299be0420b8998a4a675926006ce397d649c31e7aa8a860cb758e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:57:31.977077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601881175.mount: Deactivated successfully. 
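The long run of reconciler_common.go:258 lines above is the kubelet's volume manager registering every declared volume of cilium-6jn48 and kube-proxy-fvmlc before either pod can start; most of Cilium's are hostPath mounts into the node. A sketch of two such declarations using the k8s.io/api/core/v1 types (the paths are typical Cilium values, assumed rather than read from this node's manifests):

// Shape of the hostPath volume declarations the reconciler lines refer
// to; requires the k8s.io/api module.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vols := []corev1.Volume{
		{
			Name: "bpf-maps",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf"}, // assumed path
			},
		},
		{
			Name: "cni-path",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/opt/cni/bin"}, // assumed path
			},
		},
	}
	for _, v := range vols {
		fmt.Printf("volume %q -> hostPath %s\n", v.Name, v.VolumeSource.HostPath.Path)
	}
}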
Nov 12 20:57:31.979289 containerd[1559]: time="2024-11-12T20:57:31.979188446Z" level=info msg="CreateContainer within sandbox \"2ecca105804299be0420b8998a4a675926006ce397d649c31e7aa8a860cb758e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"71d61bab38bbaa21ade6c3d49af0ee6e2c73828d74fdfe0cce338fab9ae63a08\"" Nov 12 20:57:31.980075 containerd[1559]: time="2024-11-12T20:57:31.979997544Z" level=info msg="StartContainer for \"71d61bab38bbaa21ade6c3d49af0ee6e2c73828d74fdfe0cce338fab9ae63a08\"" Nov 12 20:57:32.054110 containerd[1559]: time="2024-11-12T20:57:32.054044434Z" level=info msg="StartContainer for \"71d61bab38bbaa21ade6c3d49af0ee6e2c73828d74fdfe0cce338fab9ae63a08\" returns successfully" Nov 12 20:57:32.106269 kubelet[2743]: E1112 20:57:32.105456 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:32.129607 kubelet[2743]: E1112 20:57:32.129567 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:32.130053 kubelet[2743]: I1112 20:57:32.130025 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fvmlc" podStartSLOduration=2.129991095 podStartE2EDuration="2.129991095s" podCreationTimestamp="2024-11-12 20:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:57:32.128653358 +0000 UTC m=+14.490781180" watchObservedRunningTime="2024-11-12 20:57:32.129991095 +0000 UTC m=+14.492118907" Nov 12 20:57:32.131005 containerd[1559]: time="2024-11-12T20:57:32.130397813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-wkgtv,Uid:46b473b4-a8bc-43d6-82fe-829b7f9fc0c6,Namespace:kube-system,Attempt:0,}" Nov 12 20:57:32.163187 containerd[1559]: time="2024-11-12T20:57:32.163052186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:32.163187 containerd[1559]: time="2024-11-12T20:57:32.163111979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:32.163187 containerd[1559]: time="2024-11-12T20:57:32.163124784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:32.163384 containerd[1559]: time="2024-11-12T20:57:32.163271760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:32.229176 containerd[1559]: time="2024-11-12T20:57:32.229118102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-wkgtv,Uid:46b473b4-a8bc-43d6-82fe-829b7f9fc0c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d05b1d5ad0d9c9502d474c5a34344f2632c97301a44e01fee2c46dfc20ec89e\"" Nov 12 20:57:32.229942 kubelet[2743]: E1112 20:57:32.229923 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:35.947257 systemd-journald[1154]: Under memory pressure, flushing caches. 
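The RunPodSandbox → CreateContainer → StartContainer progression visible above, and repeated for every pod below, is the standard CRI lifecycle the kubelet drives over containerd's gRPC socket. A skeletal sketch of those three calls against the published CRI API; the configs are nearly empty, so treat it as the shape of the exchange rather than a working pod spec:

// Hedged sketch of the CRI calls behind the sandbox/container log
// lines; uses k8s.io/cri-api and google.golang.org/grpc.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-fvmlc",
			Namespace: "kube-system",
			Uid:       "e36d025e-6968-4351-a17b-0d78d74698a6",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"}},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	_, _ = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", ctr.ContainerId)
}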
Nov 12 20:57:35.945466 systemd-resolved[1461]: Under memory pressure, flushing caches. Nov 12 20:57:35.945529 systemd-resolved[1461]: Flushed all caches. Nov 12 20:57:37.281195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105469866.mount: Deactivated successfully. Nov 12 20:57:37.993367 systemd-resolved[1461]: Under memory pressure, flushing caches. Nov 12 20:57:37.993381 systemd-resolved[1461]: Flushed all caches. Nov 12 20:57:37.995166 systemd-journald[1154]: Under memory pressure, flushing caches. Nov 12 20:57:39.973348 containerd[1559]: time="2024-11-12T20:57:39.973264659Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:39.974211 containerd[1559]: time="2024-11-12T20:57:39.974109781Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735343" Nov 12 20:57:39.975699 containerd[1559]: time="2024-11-12T20:57:39.975662327Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:39.977984 containerd[1559]: time="2024-11-12T20:57:39.977818701Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.027995149s" Nov 12 20:57:39.977984 containerd[1559]: time="2024-11-12T20:57:39.977868685Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 12 20:57:39.981168 containerd[1559]: time="2024-11-12T20:57:39.980053041Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 12 20:57:39.982459 containerd[1559]: time="2024-11-12T20:57:39.982415184Z" level=info msg="CreateContainer within sandbox \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 20:57:40.000579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564341688.mount: Deactivated successfully. 
Nov 12 20:57:40.001072 containerd[1559]: time="2024-11-12T20:57:40.000755216Z" level=info msg="CreateContainer within sandbox \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b\"" Nov 12 20:57:40.002063 containerd[1559]: time="2024-11-12T20:57:40.002009438Z" level=info msg="StartContainer for \"ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b\"" Nov 12 20:57:40.059824 containerd[1559]: time="2024-11-12T20:57:40.059766877Z" level=info msg="StartContainer for \"ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b\" returns successfully" Nov 12 20:57:40.626349 kubelet[2743]: E1112 20:57:40.626308 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:40.958575 containerd[1559]: time="2024-11-12T20:57:40.956771031Z" level=info msg="shim disconnected" id=ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b namespace=k8s.io Nov 12 20:57:40.958575 containerd[1559]: time="2024-11-12T20:57:40.958511400Z" level=warning msg="cleaning up after shim disconnected" id=ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b namespace=k8s.io Nov 12 20:57:40.958575 containerd[1559]: time="2024-11-12T20:57:40.958533482Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:57:40.996332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b-rootfs.mount: Deactivated successfully. Nov 12 20:57:41.629873 kubelet[2743]: E1112 20:57:41.629825 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:41.632901 containerd[1559]: time="2024-11-12T20:57:41.632840516Z" level=info msg="CreateContainer within sandbox \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 20:57:41.695826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410282215.mount: Deactivated successfully. Nov 12 20:57:41.715407 containerd[1559]: time="2024-11-12T20:57:41.715346705Z" level=info msg="CreateContainer within sandbox \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661\"" Nov 12 20:57:41.716010 containerd[1559]: time="2024-11-12T20:57:41.715988263Z" level=info msg="StartContainer for \"0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661\"" Nov 12 20:57:41.784748 containerd[1559]: time="2024-11-12T20:57:41.784679350Z" level=info msg="StartContainer for \"0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661\" returns successfully" Nov 12 20:57:41.794795 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:57:41.795472 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:57:41.795551 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:57:41.805899 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:57:41.823807 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 12 20:57:41.830753 containerd[1559]: time="2024-11-12T20:57:41.830654789Z" level=info msg="shim disconnected" id=0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661 namespace=k8s.io Nov 12 20:57:41.830753 containerd[1559]: time="2024-11-12T20:57:41.830734560Z" level=warning msg="cleaning up after shim disconnected" id=0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661 namespace=k8s.io Nov 12 20:57:41.830753 containerd[1559]: time="2024-11-12T20:57:41.830745320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:57:41.996995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661-rootfs.mount: Deactivated successfully. Nov 12 20:57:42.633927 kubelet[2743]: E1112 20:57:42.633736 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:42.636450 containerd[1559]: time="2024-11-12T20:57:42.636375484Z" level=info msg="CreateContainer within sandbox \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 20:57:42.743116 containerd[1559]: time="2024-11-12T20:57:42.743033641Z" level=info msg="CreateContainer within sandbox \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6\"" Nov 12 20:57:42.743866 containerd[1559]: time="2024-11-12T20:57:42.743828547Z" level=info msg="StartContainer for \"ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6\"" Nov 12 20:57:42.755814 containerd[1559]: time="2024-11-12T20:57:42.755741011Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:42.758317 containerd[1559]: time="2024-11-12T20:57:42.758241821Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907237" Nov 12 20:57:42.762166 containerd[1559]: time="2024-11-12T20:57:42.760839883Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:42.764600 containerd[1559]: time="2024-11-12T20:57:42.764529141Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.784413941s" Nov 12 20:57:42.764600 containerd[1559]: time="2024-11-12T20:57:42.764601216Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 12 20:57:42.767131 containerd[1559]: time="2024-11-12T20:57:42.767077089Z" level=info msg="CreateContainer within sandbox 
\"6d05b1d5ad0d9c9502d474c5a34344f2632c97301a44e01fee2c46dfc20ec89e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 12 20:57:42.785573 containerd[1559]: time="2024-11-12T20:57:42.785504811Z" level=info msg="CreateContainer within sandbox \"6d05b1d5ad0d9c9502d474c5a34344f2632c97301a44e01fee2c46dfc20ec89e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\"" Nov 12 20:57:42.786262 containerd[1559]: time="2024-11-12T20:57:42.786123957Z" level=info msg="StartContainer for \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\"" Nov 12 20:57:42.911947 containerd[1559]: time="2024-11-12T20:57:42.911792684Z" level=info msg="StartContainer for \"ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6\" returns successfully" Nov 12 20:57:42.924560 containerd[1559]: time="2024-11-12T20:57:42.924303294Z" level=info msg="StartContainer for \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\" returns successfully" Nov 12 20:57:42.950164 containerd[1559]: time="2024-11-12T20:57:42.950048686Z" level=info msg="shim disconnected" id=ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6 namespace=k8s.io Nov 12 20:57:42.950164 containerd[1559]: time="2024-11-12T20:57:42.950121884Z" level=warning msg="cleaning up after shim disconnected" id=ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6 namespace=k8s.io Nov 12 20:57:42.950164 containerd[1559]: time="2024-11-12T20:57:42.950130450Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:57:43.001338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6-rootfs.mount: Deactivated successfully. 
Nov 12 20:57:43.645179 kubelet[2743]: E1112 20:57:43.642790 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:43.659217 containerd[1559]: time="2024-11-12T20:57:43.656311902Z" level=info msg="CreateContainer within sandbox \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 20:57:43.665169 kubelet[2743]: E1112 20:57:43.661705 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:43.690319 containerd[1559]: time="2024-11-12T20:57:43.690269526Z" level=info msg="CreateContainer within sandbox \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3\"" Nov 12 20:57:43.691108 containerd[1559]: time="2024-11-12T20:57:43.691039947Z" level=info msg="StartContainer for \"cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3\"" Nov 12 20:57:43.754167 containerd[1559]: time="2024-11-12T20:57:43.753969299Z" level=info msg="StartContainer for \"cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3\" returns successfully" Nov 12 20:57:43.776853 containerd[1559]: time="2024-11-12T20:57:43.776769005Z" level=info msg="shim disconnected" id=cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3 namespace=k8s.io Nov 12 20:57:43.776853 containerd[1559]: time="2024-11-12T20:57:43.776843887Z" level=warning msg="cleaning up after shim disconnected" id=cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3 namespace=k8s.io Nov 12 20:57:43.776853 containerd[1559]: time="2024-11-12T20:57:43.776857021Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:57:43.997223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3-rootfs.mount: Deactivated successfully. 
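Stepping back, the repeating create/start/"shim disconnected"/rootfs-unmount cycle across mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state is an ordered init-container chain: each runs once to completion, its shim exits, and only then does cilium-agent start. A shape-only sketch of that ordering (names come from the log; images and commands are omitted):

// Init containers run sequentially to completion before the main
// container starts; requires the k8s.io/api module.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			InitContainers: []corev1.Container{
				{Name: "mount-cgroup"},
				{Name: "apply-sysctl-overwrites"},
				{Name: "mount-bpf-fs"},
				{Name: "clean-cilium-state"},
			},
			Containers: []corev1.Container{{Name: "cilium-agent"}},
		},
	}
	for _, c := range pod.Spec.InitContainers {
		fmt.Println("init (runs to completion):", c.Name)
	}
	fmt.Println("main:", pod.Spec.Containers[0].Name)
}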
Nov 12 20:57:44.662282 kubelet[2743]: E1112 20:57:44.662052 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:44.662282 kubelet[2743]: E1112 20:57:44.662060 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:44.666347 containerd[1559]: time="2024-11-12T20:57:44.666299161Z" level=info msg="CreateContainer within sandbox \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 20:57:44.683168 kubelet[2743]: I1112 20:57:44.680373 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-wkgtv" podStartSLOduration=4.146464517 podStartE2EDuration="14.680308552s" podCreationTimestamp="2024-11-12 20:57:30 +0000 UTC" firstStartedPulling="2024-11-12 20:57:32.231131927 +0000 UTC m=+14.593259749" lastFinishedPulling="2024-11-12 20:57:42.764975952 +0000 UTC m=+25.127103784" observedRunningTime="2024-11-12 20:57:43.683904342 +0000 UTC m=+26.046032164" watchObservedRunningTime="2024-11-12 20:57:44.680308552 +0000 UTC m=+27.042436374" Nov 12 20:57:44.698653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2242750552.mount: Deactivated successfully. Nov 12 20:57:44.702427 containerd[1559]: time="2024-11-12T20:57:44.702376313Z" level=info msg="CreateContainer within sandbox \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\"" Nov 12 20:57:44.703008 containerd[1559]: time="2024-11-12T20:57:44.702980681Z" level=info msg="StartContainer for \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\"" Nov 12 20:57:44.770088 containerd[1559]: time="2024-11-12T20:57:44.770029804Z" level=info msg="StartContainer for \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\" returns successfully" Nov 12 20:57:44.965801 kubelet[2743]: I1112 20:57:44.965666 2743 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 20:57:44.991856 kubelet[2743]: I1112 20:57:44.991809 2743 topology_manager.go:215] "Topology Admit Handler" podUID="aed7128b-d7da-4340-8b7a-db2d987d38e9" podNamespace="kube-system" podName="coredns-76f75df574-gd2kd" Nov 12 20:57:44.992104 kubelet[2743]: I1112 20:57:44.992068 2743 topology_manager.go:215] "Topology Admit Handler" podUID="b1eff863-1286-4d77-8f99-bd394706d686" podNamespace="kube-system" podName="coredns-76f75df574-rdswq" Nov 12 20:57:45.022002 kubelet[2743]: I1112 20:57:45.021939 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aed7128b-d7da-4340-8b7a-db2d987d38e9-config-volume\") pod \"coredns-76f75df574-gd2kd\" (UID: \"aed7128b-d7da-4340-8b7a-db2d987d38e9\") " pod="kube-system/coredns-76f75df574-gd2kd" Nov 12 20:57:45.022177 kubelet[2743]: I1112 20:57:45.022025 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rb5n\" (UniqueName: \"kubernetes.io/projected/aed7128b-d7da-4340-8b7a-db2d987d38e9-kube-api-access-6rb5n\") pod \"coredns-76f75df574-gd2kd\" (UID: 
\"aed7128b-d7da-4340-8b7a-db2d987d38e9\") " pod="kube-system/coredns-76f75df574-gd2kd" Nov 12 20:57:45.022177 kubelet[2743]: I1112 20:57:45.022062 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpb65\" (UniqueName: \"kubernetes.io/projected/b1eff863-1286-4d77-8f99-bd394706d686-kube-api-access-kpb65\") pod \"coredns-76f75df574-rdswq\" (UID: \"b1eff863-1286-4d77-8f99-bd394706d686\") " pod="kube-system/coredns-76f75df574-rdswq" Nov 12 20:57:45.022177 kubelet[2743]: I1112 20:57:45.022091 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1eff863-1286-4d77-8f99-bd394706d686-config-volume\") pod \"coredns-76f75df574-rdswq\" (UID: \"b1eff863-1286-4d77-8f99-bd394706d686\") " pod="kube-system/coredns-76f75df574-rdswq" Nov 12 20:57:45.305297 kubelet[2743]: E1112 20:57:45.304693 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:45.305297 kubelet[2743]: E1112 20:57:45.304831 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:45.306663 containerd[1559]: time="2024-11-12T20:57:45.306243425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rdswq,Uid:b1eff863-1286-4d77-8f99-bd394706d686,Namespace:kube-system,Attempt:0,}" Nov 12 20:57:45.306663 containerd[1559]: time="2024-11-12T20:57:45.306358512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gd2kd,Uid:aed7128b-d7da-4340-8b7a-db2d987d38e9,Namespace:kube-system,Attempt:0,}" Nov 12 20:57:45.667280 kubelet[2743]: E1112 20:57:45.667243 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:46.669552 kubelet[2743]: E1112 20:57:46.669491 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:46.996350 systemd-networkd[1246]: cilium_host: Link UP Nov 12 20:57:46.997304 systemd-networkd[1246]: cilium_net: Link UP Nov 12 20:57:46.997312 systemd-networkd[1246]: cilium_net: Gained carrier Nov 12 20:57:46.998778 systemd-networkd[1246]: cilium_host: Gained carrier Nov 12 20:57:46.999071 systemd-networkd[1246]: cilium_host: Gained IPv6LL Nov 12 20:57:47.133626 systemd-networkd[1246]: cilium_vxlan: Link UP Nov 12 20:57:47.133644 systemd-networkd[1246]: cilium_vxlan: Gained carrier Nov 12 20:57:47.384171 kernel: NET: Registered PF_ALG protocol family Nov 12 20:57:47.672580 kubelet[2743]: E1112 20:57:47.672466 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:47.914440 systemd-networkd[1246]: cilium_net: Gained IPv6LL Nov 12 20:57:48.141458 systemd-networkd[1246]: lxc_health: Link UP Nov 12 20:57:48.145070 systemd-networkd[1246]: lxc_health: Gained carrier Nov 12 20:57:48.231081 systemd[1]: Started sshd@7-10.0.0.153:22-10.0.0.1:45902.service - OpenSSH per-connection server daemon (10.0.0.1:45902). 
Nov 12 20:57:48.288993 sshd[3933]: Accepted publickey for core from 10.0.0.1 port 45902 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:57:48.290724 sshd[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:48.295157 systemd-logind[1540]: New session 8 of user core. Nov 12 20:57:48.302416 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 20:57:48.406818 systemd-networkd[1246]: lxc6dc54caf15a2: Link UP Nov 12 20:57:48.414649 kernel: eth0: renamed from tmp3b690 Nov 12 20:57:48.423923 systemd-networkd[1246]: lxc18772e24624a: Link UP Nov 12 20:57:48.435206 kernel: eth0: renamed from tmp74f65 Nov 12 20:57:48.449548 systemd-networkd[1246]: lxc6dc54caf15a2: Gained carrier Nov 12 20:57:48.454474 systemd-networkd[1246]: lxc18772e24624a: Gained carrier Nov 12 20:57:48.480727 sshd[3933]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:48.485761 systemd[1]: sshd@7-10.0.0.153:22-10.0.0.1:45902.service: Deactivated successfully. Nov 12 20:57:48.490391 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:57:48.490504 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:57:48.492202 systemd-logind[1540]: Removed session 8. Nov 12 20:57:48.745342 systemd-networkd[1246]: cilium_vxlan: Gained IPv6LL Nov 12 20:57:49.042533 kubelet[2743]: E1112 20:57:49.042315 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:49.157374 kubelet[2743]: I1112 20:57:49.157334 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6jn48" podStartSLOduration=11.126905252 podStartE2EDuration="19.157293393s" podCreationTimestamp="2024-11-12 20:57:30 +0000 UTC" firstStartedPulling="2024-11-12 20:57:31.949216466 +0000 UTC m=+14.311344298" lastFinishedPulling="2024-11-12 20:57:39.979604607 +0000 UTC m=+22.341732439" observedRunningTime="2024-11-12 20:57:45.685769653 +0000 UTC m=+28.047897485" watchObservedRunningTime="2024-11-12 20:57:49.157293393 +0000 UTC m=+31.519421215" Nov 12 20:57:49.257408 systemd-networkd[1246]: lxc_health: Gained IPv6LL Nov 12 20:57:49.961303 systemd-networkd[1246]: lxc18772e24624a: Gained IPv6LL Nov 12 20:57:50.281348 systemd-networkd[1246]: lxc6dc54caf15a2: Gained IPv6LL Nov 12 20:57:52.118696 containerd[1559]: time="2024-11-12T20:57:52.118581918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:52.118696 containerd[1559]: time="2024-11-12T20:57:52.118636320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:52.118696 containerd[1559]: time="2024-11-12T20:57:52.118662179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:52.120057 containerd[1559]: time="2024-11-12T20:57:52.119984855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:52.135307 containerd[1559]: time="2024-11-12T20:57:52.135010426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:52.135307 containerd[1559]: time="2024-11-12T20:57:52.135079445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:52.135307 containerd[1559]: time="2024-11-12T20:57:52.135104773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:52.135307 containerd[1559]: time="2024-11-12T20:57:52.135215552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:52.147564 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:57:52.159795 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:57:52.177894 containerd[1559]: time="2024-11-12T20:57:52.177830484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gd2kd,Uid:aed7128b-d7da-4340-8b7a-db2d987d38e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b6902504ba1768c07c46b13d9f7db7f326fedffa72acb47f18948f5009173aa\"" Nov 12 20:57:52.178846 kubelet[2743]: E1112 20:57:52.178704 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:52.182021 containerd[1559]: time="2024-11-12T20:57:52.181626209Z" level=info msg="CreateContainer within sandbox \"3b6902504ba1768c07c46b13d9f7db7f326fedffa72acb47f18948f5009173aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:57:52.191650 containerd[1559]: time="2024-11-12T20:57:52.191614519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rdswq,Uid:b1eff863-1286-4d77-8f99-bd394706d686,Namespace:kube-system,Attempt:0,} returns sandbox id \"74f656a4032ee50be4b88f9057a468ddb6d2352ca11785676c7cb7d90017842c\"" Nov 12 20:57:52.192370 kubelet[2743]: E1112 20:57:52.192346 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:52.195488 containerd[1559]: time="2024-11-12T20:57:52.195458707Z" level=info msg="CreateContainer within sandbox \"74f656a4032ee50be4b88f9057a468ddb6d2352ca11785676c7cb7d90017842c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:57:52.219245 containerd[1559]: time="2024-11-12T20:57:52.219197990Z" level=info msg="CreateContainer within sandbox \"74f656a4032ee50be4b88f9057a468ddb6d2352ca11785676c7cb7d90017842c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c44a090a2f7ef21f7e9dc37484a8ed4c3063a0f1810c1d3b1e5f006c90a4687\"" Nov 12 20:57:52.219765 containerd[1559]: time="2024-11-12T20:57:52.219721043Z" level=info msg="StartContainer for \"1c44a090a2f7ef21f7e9dc37484a8ed4c3063a0f1810c1d3b1e5f006c90a4687\"" Nov 12 20:57:52.219944 containerd[1559]: time="2024-11-12T20:57:52.219918234Z" level=info msg="CreateContainer within sandbox \"3b6902504ba1768c07c46b13d9f7db7f326fedffa72acb47f18948f5009173aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b850f1d7c62bfdd482728faf14023a2715344b06e0d0e84688e2d4fbd7fb356f\"" Nov 12 20:57:52.220598 containerd[1559]: time="2024-11-12T20:57:52.220553037Z" level=info 
msg="StartContainer for \"b850f1d7c62bfdd482728faf14023a2715344b06e0d0e84688e2d4fbd7fb356f\"" Nov 12 20:57:52.289096 containerd[1559]: time="2024-11-12T20:57:52.289037898Z" level=info msg="StartContainer for \"b850f1d7c62bfdd482728faf14023a2715344b06e0d0e84688e2d4fbd7fb356f\" returns successfully" Nov 12 20:57:52.289330 containerd[1559]: time="2024-11-12T20:57:52.289050983Z" level=info msg="StartContainer for \"1c44a090a2f7ef21f7e9dc37484a8ed4c3063a0f1810c1d3b1e5f006c90a4687\" returns successfully" Nov 12 20:57:52.682273 kubelet[2743]: E1112 20:57:52.682172 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:52.684230 kubelet[2743]: E1112 20:57:52.684179 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:52.706323 kubelet[2743]: I1112 20:57:52.703725 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rdswq" podStartSLOduration=22.703679251 podStartE2EDuration="22.703679251s" podCreationTimestamp="2024-11-12 20:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:57:52.692185231 +0000 UTC m=+35.054313053" watchObservedRunningTime="2024-11-12 20:57:52.703679251 +0000 UTC m=+35.065807073" Nov 12 20:57:52.717565 kubelet[2743]: I1112 20:57:52.717527 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gd2kd" podStartSLOduration=21.717479979 podStartE2EDuration="21.717479979s" podCreationTimestamp="2024-11-12 20:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:57:52.707552353 +0000 UTC m=+35.069680165" watchObservedRunningTime="2024-11-12 20:57:52.717479979 +0000 UTC m=+35.079607791" Nov 12 20:57:53.126032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146413112.mount: Deactivated successfully. Nov 12 20:57:53.496356 systemd[1]: Started sshd@8-10.0.0.153:22-10.0.0.1:45906.service - OpenSSH per-connection server daemon (10.0.0.1:45906). Nov 12 20:57:53.527347 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 45906 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:57:53.528818 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:53.532669 systemd-logind[1540]: New session 9 of user core. Nov 12 20:57:53.547410 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:57:53.686757 kubelet[2743]: E1112 20:57:53.686705 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:53.687398 kubelet[2743]: E1112 20:57:53.686998 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:53.700281 sshd[4157]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:53.704160 systemd[1]: sshd@8-10.0.0.153:22-10.0.0.1:45906.service: Deactivated successfully. Nov 12 20:57:53.706378 systemd[1]: session-9.scope: Deactivated successfully. 
Nov 12 20:57:53.706999 systemd-logind[1540]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:57:53.707834 systemd-logind[1540]: Removed session 9. Nov 12 20:57:54.688406 kubelet[2743]: E1112 20:57:54.688358 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:57:58.713490 systemd[1]: Started sshd@9-10.0.0.153:22-10.0.0.1:47314.service - OpenSSH per-connection server daemon (10.0.0.1:47314). Nov 12 20:57:58.741086 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 47314 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:57:58.742752 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:58.746740 systemd-logind[1540]: New session 10 of user core. Nov 12 20:57:58.764421 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:57:58.871408 sshd[4176]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:58.874832 systemd[1]: sshd@9-10.0.0.153:22-10.0.0.1:47314.service: Deactivated successfully. Nov 12 20:57:58.878175 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:57:58.878273 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:57:58.879856 systemd-logind[1540]: Removed session 10. Nov 12 20:58:00.040200 kubelet[2743]: I1112 20:58:00.040132 2743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:58:00.041046 kubelet[2743]: E1112 20:58:00.041018 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:58:00.704032 kubelet[2743]: E1112 20:58:00.703993 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:58:03.883422 systemd[1]: Started sshd@10-10.0.0.153:22-10.0.0.1:47342.service - OpenSSH per-connection server daemon (10.0.0.1:47342). Nov 12 20:58:03.912786 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 47342 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:58:03.914222 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:03.917699 systemd-logind[1540]: New session 11 of user core. Nov 12 20:58:03.927377 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:58:04.042643 sshd[4194]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:04.047075 systemd[1]: sshd@10-10.0.0.153:22-10.0.0.1:47342.service: Deactivated successfully. Nov 12 20:58:04.049646 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:58:04.049657 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:58:04.051049 systemd-logind[1540]: Removed session 11. Nov 12 20:58:09.049336 systemd[1]: Started sshd@11-10.0.0.153:22-10.0.0.1:50796.service - OpenSSH per-connection server daemon (10.0.0.1:50796).
Nov 12 20:58:09.078096 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 50796 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:58:09.079814 sshd[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:09.083665 systemd-logind[1540]: New session 12 of user core. Nov 12 20:58:09.099394 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:58:09.205989 sshd[4211]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:09.212586 systemd[1]: Started sshd@12-10.0.0.153:22-10.0.0.1:50800.service - OpenSSH per-connection server daemon (10.0.0.1:50800). Nov 12 20:58:09.213271 systemd[1]: sshd@11-10.0.0.153:22-10.0.0.1:50796.service: Deactivated successfully. Nov 12 20:58:09.217782 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:58:09.218106 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:58:09.220013 systemd-logind[1540]: Removed session 12. Nov 12 20:58:09.244349 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 50800 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:58:09.245791 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:09.249546 systemd-logind[1540]: New session 13 of user core. Nov 12 20:58:09.256418 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:58:09.413603 sshd[4224]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:09.425529 systemd[1]: Started sshd@13-10.0.0.153:22-10.0.0.1:50812.service - OpenSSH per-connection server daemon (10.0.0.1:50812). Nov 12 20:58:09.428668 systemd[1]: sshd@12-10.0.0.153:22-10.0.0.1:50800.service: Deactivated successfully. Nov 12 20:58:09.432454 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:58:09.434545 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit. Nov 12 20:58:09.436411 systemd-logind[1540]: Removed session 13. Nov 12 20:58:09.461363 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 50812 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:58:09.462792 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:09.467787 systemd-logind[1540]: New session 14 of user core. Nov 12 20:58:09.481420 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 20:58:09.592712 sshd[4237]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:09.596604 systemd[1]: sshd@13-10.0.0.153:22-10.0.0.1:50812.service: Deactivated successfully. Nov 12 20:58:09.598671 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:58:09.598831 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:58:09.599867 systemd-logind[1540]: Removed session 14. Nov 12 20:58:14.608447 systemd[1]: Started sshd@14-10.0.0.153:22-10.0.0.1:50908.service - OpenSSH per-connection server daemon (10.0.0.1:50908). Nov 12 20:58:14.636186 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 50908 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:58:14.637987 sshd[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:14.642229 systemd-logind[1540]: New session 15 of user core. Nov 12 20:58:14.652404 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 12 20:58:14.769724 sshd[4255]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:14.774681 systemd[1]: sshd@14-10.0.0.153:22-10.0.0.1:50908.service: Deactivated successfully. Nov 12 20:58:14.778557 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:58:14.778567 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 20:58:14.779920 systemd-logind[1540]: Removed session 15. Nov 12 20:58:19.779363 systemd[1]: Started sshd@15-10.0.0.153:22-10.0.0.1:54266.service - OpenSSH per-connection server daemon (10.0.0.1:54266). Nov 12 20:58:19.806622 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 54266 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:58:19.808045 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:19.811997 systemd-logind[1540]: New session 16 of user core. Nov 12 20:58:19.820389 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 20:58:19.924924 sshd[4272]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:19.932366 systemd[1]: Started sshd@16-10.0.0.153:22-10.0.0.1:54280.service - OpenSSH per-connection server daemon (10.0.0.1:54280). Nov 12 20:58:19.932822 systemd[1]: sshd@15-10.0.0.153:22-10.0.0.1:54266.service: Deactivated successfully. Nov 12 20:58:19.936581 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 20:58:19.937321 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit. Nov 12 20:58:19.938289 systemd-logind[1540]: Removed session 16. Nov 12 20:58:19.961206 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 54280 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:58:19.962809 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:19.967168 systemd-logind[1540]: New session 17 of user core. Nov 12 20:58:19.973710 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 20:58:20.215422 sshd[4284]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:20.225343 systemd[1]: Started sshd@17-10.0.0.153:22-10.0.0.1:54288.service - OpenSSH per-connection server daemon (10.0.0.1:54288). Nov 12 20:58:20.225793 systemd[1]: sshd@16-10.0.0.153:22-10.0.0.1:54280.service: Deactivated successfully. Nov 12 20:58:20.230168 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit. Nov 12 20:58:20.230415 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 20:58:20.231461 systemd-logind[1540]: Removed session 17. Nov 12 20:58:20.258109 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 54288 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:58:20.259494 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:20.263267 systemd-logind[1540]: New session 18 of user core. Nov 12 20:58:20.273403 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 20:58:21.523693 sshd[4297]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:21.532945 systemd[1]: Started sshd@18-10.0.0.153:22-10.0.0.1:54298.service - OpenSSH per-connection server daemon (10.0.0.1:54298). Nov 12 20:58:21.533682 systemd[1]: sshd@17-10.0.0.153:22-10.0.0.1:54288.service: Deactivated successfully. Nov 12 20:58:21.537673 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 20:58:21.540122 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit. 
Nov 12 20:58:21.541794 systemd-logind[1540]: Removed session 18.
Nov 12 20:58:21.564411 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 54298 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:58:21.565905 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:21.569945 systemd-logind[1540]: New session 19 of user core.
Nov 12 20:58:21.576403 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 20:58:21.806664 sshd[4319]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:21.814398 systemd[1]: Started sshd@19-10.0.0.153:22-10.0.0.1:54306.service - OpenSSH per-connection server daemon (10.0.0.1:54306).
Nov 12 20:58:21.814877 systemd[1]: sshd@18-10.0.0.153:22-10.0.0.1:54298.service: Deactivated successfully.
Nov 12 20:58:21.819220 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit.
Nov 12 20:58:21.820016 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 20:58:21.821056 systemd-logind[1540]: Removed session 19.
Nov 12 20:58:21.842017 sshd[4332]: Accepted publickey for core from 10.0.0.1 port 54306 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:58:21.843460 sshd[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:21.847278 systemd-logind[1540]: New session 20 of user core.
Nov 12 20:58:21.861426 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 20:58:21.990356 sshd[4332]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:21.994193 systemd[1]: sshd@19-10.0.0.153:22-10.0.0.1:54306.service: Deactivated successfully.
Nov 12 20:58:21.996637 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit.
Nov 12 20:58:21.996746 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 20:58:21.997731 systemd-logind[1540]: Removed session 20.
Nov 12 20:58:27.005360 systemd[1]: Started sshd@20-10.0.0.153:22-10.0.0.1:55728.service - OpenSSH per-connection server daemon (10.0.0.1:55728).
Nov 12 20:58:27.031837 sshd[4350]: Accepted publickey for core from 10.0.0.1 port 55728 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:58:27.033175 sshd[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:27.036835 systemd-logind[1540]: New session 21 of user core.
Nov 12 20:58:27.045381 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 20:58:27.149081 sshd[4350]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:27.153047 systemd[1]: sshd@20-10.0.0.153:22-10.0.0.1:55728.service: Deactivated successfully.
Nov 12 20:58:27.155405 systemd-logind[1540]: Session 21 logged out. Waiting for processes to exit.
Nov 12 20:58:27.155419 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 20:58:27.156401 systemd-logind[1540]: Removed session 21.
Nov 12 20:58:32.158485 systemd[1]: Started sshd@21-10.0.0.153:22-10.0.0.1:55736.service - OpenSSH per-connection server daemon (10.0.0.1:55736).
Nov 12 20:58:32.188695 sshd[4370]: Accepted publickey for core from 10.0.0.1 port 55736 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:58:32.190204 sshd[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:32.193888 systemd-logind[1540]: New session 22 of user core.
Nov 12 20:58:32.208377 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 20:58:32.315315 sshd[4370]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:32.319617 systemd[1]: sshd@21-10.0.0.153:22-10.0.0.1:55736.service: Deactivated successfully.
Nov 12 20:58:32.322009 systemd-logind[1540]: Session 22 logged out. Waiting for processes to exit.
Nov 12 20:58:32.322081 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 20:58:32.323210 systemd-logind[1540]: Removed session 22.
Nov 12 20:58:37.334370 systemd[1]: Started sshd@22-10.0.0.153:22-10.0.0.1:43696.service - OpenSSH per-connection server daemon (10.0.0.1:43696).
Nov 12 20:58:37.361479 sshd[4385]: Accepted publickey for core from 10.0.0.1 port 43696 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:58:37.362830 sshd[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:37.366541 systemd-logind[1540]: New session 23 of user core.
Nov 12 20:58:37.382409 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 20:58:37.489095 sshd[4385]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:37.492519 systemd[1]: sshd@22-10.0.0.153:22-10.0.0.1:43696.service: Deactivated successfully.
Nov 12 20:58:37.494708 systemd-logind[1540]: Session 23 logged out. Waiting for processes to exit.
Nov 12 20:58:37.494752 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 20:58:37.495788 systemd-logind[1540]: Removed session 23.
Nov 12 20:58:42.502436 systemd[1]: Started sshd@23-10.0.0.153:22-10.0.0.1:43710.service - OpenSSH per-connection server daemon (10.0.0.1:43710).
Nov 12 20:58:42.529453 sshd[4400]: Accepted publickey for core from 10.0.0.1 port 43710 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:58:42.531402 sshd[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:42.535324 systemd-logind[1540]: New session 24 of user core.
Nov 12 20:58:42.542395 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 20:58:42.648818 sshd[4400]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:42.655439 systemd[1]: Started sshd@24-10.0.0.153:22-10.0.0.1:43726.service - OpenSSH per-connection server daemon (10.0.0.1:43726).
Nov 12 20:58:42.656116 systemd[1]: sshd@23-10.0.0.153:22-10.0.0.1:43710.service: Deactivated successfully.
Nov 12 20:58:42.658497 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 20:58:42.660222 systemd-logind[1540]: Session 24 logged out. Waiting for processes to exit.
Nov 12 20:58:42.661394 systemd-logind[1540]: Removed session 24.
Nov 12 20:58:42.685208 sshd[4412]: Accepted publickey for core from 10.0.0.1 port 43726 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:58:42.686732 sshd[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:42.690572 systemd-logind[1540]: New session 25 of user core.
Nov 12 20:58:42.700457 systemd[1]: Started session-25.scope - Session 25 of User core.
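Each "sshd@N-<local>:22-<peer>:<port>.service" unit above is what systemd stamps out for a socket unit running with Accept=yes: one short-lived service instance per TCP connection, which is why every session is bracketed by a Started/Deactivated pair. A minimal sketch of that wiring, assuming units along these lines (the exact units Flatcar ships may differ in options and hardening):

# sshd.socket -- accept one connection per service instance (illustrative)
[Socket]
ListenStream=22
Accept=yes

[Install]
WantedBy=sockets.target

# sshd@.service -- template the per-connection instances are created from (illustrative)
[Unit]
Description=OpenSSH per-connection server daemon

[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket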
Nov 12 20:58:44.040912 containerd[1559]: time="2024-11-12T20:58:44.040845976Z" level=info msg="StopContainer for \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\" with timeout 30 (s)"
Nov 12 20:58:44.041513 containerd[1559]: time="2024-11-12T20:58:44.041274952Z" level=info msg="Stop container \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\" with signal terminated"
Nov 12 20:58:44.095469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561-rootfs.mount: Deactivated successfully.
Nov 12 20:58:44.096230 containerd[1559]: time="2024-11-12T20:58:44.095723279Z" level=info msg="StopContainer for \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\" with timeout 2 (s)"
Nov 12 20:58:44.096523 containerd[1559]: time="2024-11-12T20:58:44.096469908Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 20:58:44.096746 containerd[1559]: time="2024-11-12T20:58:44.096604294Z" level=info msg="Stop container \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\" with signal terminated"
Nov 12 20:58:44.097233 containerd[1559]: time="2024-11-12T20:58:44.097035624Z" level=info msg="shim disconnected" id=8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561 namespace=k8s.io
Nov 12 20:58:44.097233 containerd[1559]: time="2024-11-12T20:58:44.097099336Z" level=warning msg="cleaning up after shim disconnected" id=8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561 namespace=k8s.io
Nov 12 20:58:44.097233 containerd[1559]: time="2024-11-12T20:58:44.097108042Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:58:44.106621 systemd-networkd[1246]: lxc_health: Link DOWN
Nov 12 20:58:44.106655 systemd-networkd[1246]: lxc_health: Lost carrier
Nov 12 20:58:44.117718 containerd[1559]: time="2024-11-12T20:58:44.117675632Z" level=info msg="StopContainer for \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\" returns successfully"
Nov 12 20:58:44.118603 containerd[1559]: time="2024-11-12T20:58:44.118575514Z" level=info msg="StopPodSandbox for \"6d05b1d5ad0d9c9502d474c5a34344f2632c97301a44e01fee2c46dfc20ec89e\""
Nov 12 20:58:44.118672 containerd[1559]: time="2024-11-12T20:58:44.118626080Z" level=info msg="Container to stop \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:58:44.122230 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d05b1d5ad0d9c9502d474c5a34344f2632c97301a44e01fee2c46dfc20ec89e-shm.mount: Deactivated successfully.
Nov 12 20:58:44.149461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d05b1d5ad0d9c9502d474c5a34344f2632c97301a44e01fee2c46dfc20ec89e-rootfs.mount: Deactivated successfully.
Nov 12 20:58:44.153318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac-rootfs.mount: Deactivated successfully.
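The StopContainer / "signal terminated" pair above is the standard CRI stop sequence: SIGTERM first, then SIGKILL if the grace period (30 s for the first container here, 2 s for the second) expires. A rough sketch of the same sequence against containerd's Go client, not the CRI plugin's actual implementation; the socket path and the truncated container ID are assumptions:

// stopcontainer.go: graceful stop with escalation, a minimal sketch.
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace,
	// matching the namespace=k8s.io fields in the log above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	container, err := client.LoadContainer(ctx, "8771bfbeb73e1102...") // truncated, illustrative
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// "Stop container ... with signal terminated"
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case status := <-exitCh:
		log.Printf("exited with status %d", status.ExitCode())
	case <-time.After(30 * time.Second): // "with timeout 30 (s)"
		_ = task.Kill(ctx, syscall.SIGKILL) // escalate once the grace period lapses
	}
}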
Nov 12 20:58:44.156189 containerd[1559]: time="2024-11-12T20:58:44.156062227Z" level=info msg="shim disconnected" id=e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac namespace=k8s.io
Nov 12 20:58:44.156189 containerd[1559]: time="2024-11-12T20:58:44.156157248Z" level=warning msg="cleaning up after shim disconnected" id=e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac namespace=k8s.io
Nov 12 20:58:44.156189 containerd[1559]: time="2024-11-12T20:58:44.156171375Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:58:44.156411 containerd[1559]: time="2024-11-12T20:58:44.156377196Z" level=info msg="shim disconnected" id=6d05b1d5ad0d9c9502d474c5a34344f2632c97301a44e01fee2c46dfc20ec89e namespace=k8s.io
Nov 12 20:58:44.156648 containerd[1559]: time="2024-11-12T20:58:44.156456907Z" level=warning msg="cleaning up after shim disconnected" id=6d05b1d5ad0d9c9502d474c5a34344f2632c97301a44e01fee2c46dfc20ec89e namespace=k8s.io
Nov 12 20:58:44.156648 containerd[1559]: time="2024-11-12T20:58:44.156475644Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:58:44.170924 containerd[1559]: time="2024-11-12T20:58:44.170863192Z" level=warning msg="cleanup warnings time=\"2024-11-12T20:58:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 12 20:58:44.172112 containerd[1559]: time="2024-11-12T20:58:44.172047774Z" level=info msg="TearDown network for sandbox \"6d05b1d5ad0d9c9502d474c5a34344f2632c97301a44e01fee2c46dfc20ec89e\" successfully"
Nov 12 20:58:44.172112 containerd[1559]: time="2024-11-12T20:58:44.172101016Z" level=info msg="StopPodSandbox for \"6d05b1d5ad0d9c9502d474c5a34344f2632c97301a44e01fee2c46dfc20ec89e\" returns successfully"
Nov 12 20:58:44.175548 containerd[1559]: time="2024-11-12T20:58:44.175456447Z" level=info msg="StopContainer for \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\" returns successfully"
Nov 12 20:58:44.175809 containerd[1559]: time="2024-11-12T20:58:44.175778499Z" level=info msg="StopPodSandbox for \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\""
Nov 12 20:58:44.175863 containerd[1559]: time="2024-11-12T20:58:44.175822923Z" level=info msg="Container to stop \"0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:58:44.175863 containerd[1559]: time="2024-11-12T20:58:44.175839555Z" level=info msg="Container to stop \"ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:58:44.175863 containerd[1559]: time="2024-11-12T20:58:44.175851819Z" level=info msg="Container to stop \"cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:58:44.176057 containerd[1559]: time="2024-11-12T20:58:44.175864422Z" level=info msg="Container to stop \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:58:44.176057 containerd[1559]: time="2024-11-12T20:58:44.175876545Z" level=info msg="Container to stop \"ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:58:44.206036 containerd[1559]: time="2024-11-12T20:58:44.205948078Z" level=info msg="shim disconnected" id=d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05 namespace=k8s.io
Nov 12 20:58:44.206036 containerd[1559]: time="2024-11-12T20:58:44.206020096Z" level=warning msg="cleaning up after shim disconnected" id=d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05 namespace=k8s.io
Nov 12 20:58:44.206036 containerd[1559]: time="2024-11-12T20:58:44.206028762Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:58:44.220228 containerd[1559]: time="2024-11-12T20:58:44.220109457Z" level=info msg="TearDown network for sandbox \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\" successfully"
Nov 12 20:58:44.220228 containerd[1559]: time="2024-11-12T20:58:44.220161787Z" level=info msg="StopPodSandbox for \"d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05\" returns successfully"
Nov 12 20:58:44.251202 kubelet[2743]: I1112 20:58:44.251167 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-hostproc\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.251202 kubelet[2743]: I1112 20:58:44.251218 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffnwn\" (UniqueName: \"kubernetes.io/projected/46b473b4-a8bc-43d6-82fe-829b7f9fc0c6-kube-api-access-ffnwn\") pod \"46b473b4-a8bc-43d6-82fe-829b7f9fc0c6\" (UID: \"46b473b4-a8bc-43d6-82fe-829b7f9fc0c6\") "
Nov 12 20:58:44.251734 kubelet[2743]: I1112 20:58:44.251239 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-bpf-maps\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.251734 kubelet[2743]: I1112 20:58:44.251229 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-hostproc" (OuterVolumeSpecName: "hostproc") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:58:44.251734 kubelet[2743]: I1112 20:58:44.251256 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-xtables-lock\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.251734 kubelet[2743]: I1112 20:58:44.251285 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-clustermesh-secrets\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.251734 kubelet[2743]: I1112 20:58:44.251286 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:58:44.251734 kubelet[2743]: I1112 20:58:44.251302 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cilium-run\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.251887 kubelet[2743]: I1112 20:58:44.251307 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:58:44.251887 kubelet[2743]: I1112 20:58:44.251322 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cni-path\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.251887 kubelet[2743]: I1112 20:58:44.251343 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cilium-config-path\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.251887 kubelet[2743]: I1112 20:58:44.251361 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-lib-modules\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.251887 kubelet[2743]: I1112 20:58:44.251393 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-host-proc-sys-kernel\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.251887 kubelet[2743]: I1112 20:58:44.251409 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cilium-cgroup\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.252032 kubelet[2743]: I1112 20:58:44.251432 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-host-proc-sys-net\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.252032 kubelet[2743]: I1112 20:58:44.251452 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjp68\" (UniqueName: \"kubernetes.io/projected/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-kube-api-access-cjp68\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.252032 kubelet[2743]: I1112 20:58:44.251471 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-hubble-tls\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.252032 kubelet[2743]: I1112 20:58:44.251488 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-etc-cni-netd\") pod \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\" (UID: \"87e6ecd2-8e03-4e1d-b346-58c0b0524c41\") "
Nov 12 20:58:44.252032 kubelet[2743]: I1112 20:58:44.251509 2743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46b473b4-a8bc-43d6-82fe-829b7f9fc0c6-cilium-config-path\") pod \"46b473b4-a8bc-43d6-82fe-829b7f9fc0c6\" (UID: \"46b473b4-a8bc-43d6-82fe-829b7f9fc0c6\") "
Nov 12 20:58:44.252032 kubelet[2743]: I1112 20:58:44.251543 2743 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-hostproc\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.252032 kubelet[2743]: I1112 20:58:44.251557 2743 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-xtables-lock\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.252223 kubelet[2743]: I1112 20:58:44.251568 2743 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-bpf-maps\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.252223 kubelet[2743]: I1112 20:58:44.251688 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:58:44.252223 kubelet[2743]: I1112 20:58:44.251716 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:58:44.252223 kubelet[2743]: I1112 20:58:44.251734 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cni-path" (OuterVolumeSpecName: "cni-path") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:58:44.253324 kubelet[2743]: I1112 20:58:44.253016 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:58:44.255846 kubelet[2743]: I1112 20:58:44.255558 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:58:44.255846 kubelet[2743]: I1112 20:58:44.255598 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:58:44.255846 kubelet[2743]: I1112 20:58:44.255809 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b473b4-a8bc-43d6-82fe-829b7f9fc0c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "46b473b4-a8bc-43d6-82fe-829b7f9fc0c6" (UID: "46b473b4-a8bc-43d6-82fe-829b7f9fc0c6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 12 20:58:44.255846 kubelet[2743]: I1112 20:58:44.255836 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 12 20:58:44.255981 kubelet[2743]: I1112 20:58:44.255851 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:58:44.256089 kubelet[2743]: I1112 20:58:44.256056 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b473b4-a8bc-43d6-82fe-829b7f9fc0c6-kube-api-access-ffnwn" (OuterVolumeSpecName: "kube-api-access-ffnwn") pod "46b473b4-a8bc-43d6-82fe-829b7f9fc0c6" (UID: "46b473b4-a8bc-43d6-82fe-829b7f9fc0c6"). InnerVolumeSpecName "kube-api-access-ffnwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 12 20:58:44.256615 kubelet[2743]: I1112 20:58:44.256591 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 12 20:58:44.256823 kubelet[2743]: I1112 20:58:44.256794 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-kube-api-access-cjp68" (OuterVolumeSpecName: "kube-api-access-cjp68") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "kube-api-access-cjp68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 12 20:58:44.258573 kubelet[2743]: I1112 20:58:44.258548 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "87e6ecd2-8e03-4e1d-b346-58c0b0524c41" (UID: "87e6ecd2-8e03-4e1d-b346-58c0b0524c41"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 12 20:58:44.352069 kubelet[2743]: I1112 20:58:44.351947 2743 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.352069 kubelet[2743]: I1112 20:58:44.351983 2743 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cilium-run\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.352069 kubelet[2743]: I1112 20:58:44.351997 2743 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cni-path\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.352069 kubelet[2743]: I1112 20:58:44.352006 2743 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.352069 kubelet[2743]: I1112 20:58:44.352016 2743 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.352069 kubelet[2743]: I1112 20:58:44.352026 2743 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-lib-modules\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.352069 kubelet[2743]: I1112 20:58:44.352035 2743 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.352069 kubelet[2743]: I1112 20:58:44.352047 2743 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.352369 kubelet[2743]: I1112 20:58:44.352058 2743 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cjp68\" (UniqueName: \"kubernetes.io/projected/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-kube-api-access-cjp68\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.352369 kubelet[2743]: I1112 20:58:44.352068 2743 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-hubble-tls\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.352369 kubelet[2743]: I1112 20:58:44.352087 2743 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87e6ecd2-8e03-4e1d-b346-58c0b0524c41-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.352369 kubelet[2743]: I1112 20:58:44.352097 2743 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46b473b4-a8bc-43d6-82fe-829b7f9fc0c6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.352369 kubelet[2743]: I1112 20:58:44.352166 2743 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ffnwn\" (UniqueName: \"kubernetes.io/projected/46b473b4-a8bc-43d6-82fe-829b7f9fc0c6-kube-api-access-ffnwn\") on node \"localhost\" DevicePath \"\""
Nov 12 20:58:44.795554 kubelet[2743]: I1112 20:58:44.795341 2743 scope.go:117] "RemoveContainer" containerID="e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac"
Nov 12 20:58:44.798819 containerd[1559]: time="2024-11-12T20:58:44.797959013Z" level=info msg="RemoveContainer for \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\""
Nov 12 20:58:44.873720 containerd[1559]: time="2024-11-12T20:58:44.873666116Z" level=info msg="RemoveContainer for \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\" returns successfully"
Nov 12 20:58:44.874034 kubelet[2743]: I1112 20:58:44.874000 2743 scope.go:117] "RemoveContainer" containerID="cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3"
Nov 12 20:58:44.875155 containerd[1559]: time="2024-11-12T20:58:44.875105002Z" level=info msg="RemoveContainer for \"cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3\""
Nov 12 20:58:45.004848 containerd[1559]: time="2024-11-12T20:58:45.004807948Z" level=info msg="RemoveContainer for \"cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3\" returns successfully"
Nov 12 20:58:45.005189 kubelet[2743]: I1112 20:58:45.005161 2743 scope.go:117] "RemoveContainer" containerID="ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6"
Nov 12 20:58:45.006427 containerd[1559]: time="2024-11-12T20:58:45.006399113Z" level=info msg="RemoveContainer for \"ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6\""
Nov 12 20:58:45.070093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05-rootfs.mount: Deactivated successfully.
Nov 12 20:58:45.070325 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7307d07d717cd80153343195bb12291eceb35a8a1c8a634c8aab591941fcf05-shm.mount: Deactivated successfully.
Nov 12 20:58:45.070474 systemd[1]: var-lib-kubelet-pods-46b473b4\x2da8bc\x2d43d6\x2d82fe\x2d829b7f9fc0c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dffnwn.mount: Deactivated successfully.
Nov 12 20:58:45.070632 systemd[1]: var-lib-kubelet-pods-87e6ecd2\x2d8e03\x2d4e1d\x2db346\x2d58c0b0524c41-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcjp68.mount: Deactivated successfully.
Nov 12 20:58:45.070774 systemd[1]: var-lib-kubelet-pods-87e6ecd2\x2d8e03\x2d4e1d\x2db346\x2d58c0b0524c41-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Nov 12 20:58:45.070918 systemd[1]: var-lib-kubelet-pods-87e6ecd2\x2d8e03\x2d4e1d\x2db346\x2d58c0b0524c41-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
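The mount-unit names just above are systemd's escaping of kubelet's volume paths: "/" becomes "-", a literal "-" becomes \x2d, and "~" (as in kubernetes.io~projected) becomes \x7e. A rough Go illustration of that encoding (the real systemd-escape also encodes other special bytes, which this sketch ignores):

// escape.go: approximate systemd path-to-unit-name escaping, for illustration only.
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	p = strings.ReplaceAll(p, "-", `\x2d`) // escape literal dashes first
	p = strings.ReplaceAll(p, "~", `\x7e`) // then tildes
	return strings.ReplaceAll(p, "/", "-") // finally map path separators to dashes
}

func main() {
	// Prints a unit name shaped like the var-lib-kubelet-pods-...mount entries above.
	fmt.Println(escapePath("/var/lib/kubelet/pods/46b473b4-a8bc-43d6-82fe-829b7f9fc0c6/volumes/kubernetes.io~projected/kube-api-access-ffnwn") + ".mount")
}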
Nov 12 20:58:45.084743 containerd[1559]: time="2024-11-12T20:58:45.084701024Z" level=info msg="RemoveContainer for \"ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6\" returns successfully"
Nov 12 20:58:45.085123 kubelet[2743]: I1112 20:58:45.084986 2743 scope.go:117] "RemoveContainer" containerID="0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661"
Nov 12 20:58:45.088177 containerd[1559]: time="2024-11-12T20:58:45.088118189Z" level=info msg="RemoveContainer for \"0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661\""
Nov 12 20:58:45.092007 containerd[1559]: time="2024-11-12T20:58:45.091971756Z" level=info msg="RemoveContainer for \"0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661\" returns successfully"
Nov 12 20:58:45.092248 kubelet[2743]: I1112 20:58:45.092213 2743 scope.go:117] "RemoveContainer" containerID="ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b"
Nov 12 20:58:45.099374 containerd[1559]: time="2024-11-12T20:58:45.099321247Z" level=info msg="RemoveContainer for \"ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b\""
Nov 12 20:58:45.113101 containerd[1559]: time="2024-11-12T20:58:45.113051097Z" level=info msg="RemoveContainer for \"ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b\" returns successfully"
Nov 12 20:58:45.113383 kubelet[2743]: I1112 20:58:45.113355 2743 scope.go:117] "RemoveContainer" containerID="e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac"
Nov 12 20:58:45.113660 containerd[1559]: time="2024-11-12T20:58:45.113621221Z" level=error msg="ContainerStatus for \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\": not found"
Nov 12 20:58:45.113788 kubelet[2743]: E1112 20:58:45.113770 2743 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\": not found" containerID="e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac"
Nov 12 20:58:45.113882 kubelet[2743]: I1112 20:58:45.113866 2743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac"} err="failed to get container status \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0a30a1c3817a4bcf3dd8ce22b2a1cf5b91180ed747ba1d166bf8031a1380cac\": not found"
Nov 12 20:58:45.113882 kubelet[2743]: I1112 20:58:45.113880 2743 scope.go:117] "RemoveContainer" containerID="cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3"
Nov 12 20:58:45.114058 containerd[1559]: time="2024-11-12T20:58:45.114023185Z" level=error msg="ContainerStatus for \"cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3\": not found"
Nov 12 20:58:45.114271 kubelet[2743]: E1112 20:58:45.114242 2743 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3\": not found" containerID="cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3"
Nov 12 20:58:45.114271 kubelet[2743]: I1112 20:58:45.114270 2743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3"} err="failed to get container status \"cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3\": rpc error: code = NotFound desc = an error occurred when try to find container \"cdd596274c4fee47faa7a510e08a07018b1c714fa8db79d142a5bd4254ecfae3\": not found"
Nov 12 20:58:45.114271 kubelet[2743]: I1112 20:58:45.114279 2743 scope.go:117] "RemoveContainer" containerID="ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6"
Nov 12 20:58:45.114517 containerd[1559]: time="2024-11-12T20:58:45.114485314Z" level=error msg="ContainerStatus for \"ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6\": not found"
Nov 12 20:58:45.114637 kubelet[2743]: E1112 20:58:45.114613 2743 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6\": not found" containerID="ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6"
Nov 12 20:58:45.114673 kubelet[2743]: I1112 20:58:45.114641 2743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6"} err="failed to get container status \"ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee0dc09275c2ad3a24cf55a92bd320a88fc1bcea5ff74817165527362307b1d6\": not found"
Nov 12 20:58:45.114673 kubelet[2743]: I1112 20:58:45.114651 2743 scope.go:117] "RemoveContainer" containerID="0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661"
Nov 12 20:58:45.114856 containerd[1559]: time="2024-11-12T20:58:45.114827865Z" level=error msg="ContainerStatus for \"0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661\": not found"
Nov 12 20:58:45.114957 kubelet[2743]: E1112 20:58:45.114940 2743 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661\": not found" containerID="0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661"
Nov 12 20:58:45.115005 kubelet[2743]: I1112 20:58:45.114972 2743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661"} err="failed to get container status \"0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661\": rpc error: code = NotFound desc = an error occurred when try to find container \"0393e0bb84eb1a51ebfeab886ba89f66556407a8adc7df715b8665af42cbf661\": not found"
Nov 12 20:58:45.115005 kubelet[2743]: I1112 20:58:45.114983 2743 scope.go:117] "RemoveContainer" containerID="ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b"
Nov 12 20:58:45.115174 containerd[1559]: time="2024-11-12T20:58:45.115129578Z" level=error msg="ContainerStatus for \"ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b\": not found"
Nov 12 20:58:45.115280 kubelet[2743]: E1112 20:58:45.115265 2743 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b\": not found" containerID="ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b"
Nov 12 20:58:45.115326 kubelet[2743]: I1112 20:58:45.115287 2743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b"} err="failed to get container status \"ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac07d8bc9c3471f52f1b004c790aa577709e53a8b371347ac2a765bde92f943b\": not found"
Nov 12 20:58:45.115326 kubelet[2743]: I1112 20:58:45.115296 2743 scope.go:117] "RemoveContainer" containerID="8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561"
Nov 12 20:58:45.116194 containerd[1559]: time="2024-11-12T20:58:45.116167742Z" level=info msg="RemoveContainer for \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\""
Nov 12 20:58:45.119288 containerd[1559]: time="2024-11-12T20:58:45.119261413Z" level=info msg="RemoveContainer for \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\" returns successfully"
Nov 12 20:58:45.119440 kubelet[2743]: I1112 20:58:45.119382 2743 scope.go:117] "RemoveContainer" containerID="8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561"
Nov 12 20:58:45.119568 containerd[1559]: time="2024-11-12T20:58:45.119538289Z" level=error msg="ContainerStatus for \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\": not found"
Nov 12 20:58:45.119661 kubelet[2743]: E1112 20:58:45.119649 2743 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\": not found" containerID="8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561"
Nov 12 20:58:45.119717 kubelet[2743]: I1112 20:58:45.119671 2743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561"} err="failed to get container status \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\": rpc error: code = NotFound desc = an error occurred when try to find container \"8771bfbeb73e1102861562b3c32f4cd1fc8a75d5adb98f3cc31c5b3c82aec561\": not found"
Nov 12 20:58:46.001487 sshd[4412]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:46.009365 systemd[1]: Started sshd@25-10.0.0.153:22-10.0.0.1:60256.service - OpenSSH per-connection server daemon (10.0.0.1:60256).
Nov 12 20:58:46.010387 systemd[1]: sshd@24-10.0.0.153:22-10.0.0.1:43726.service: Deactivated successfully.
Nov 12 20:58:46.013883 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 20:58:46.015690 systemd-logind[1540]: Session 25 logged out. Waiting for processes to exit.
Nov 12 20:58:46.016557 systemd-logind[1540]: Removed session 25.
Nov 12 20:58:46.042068 sshd[4580]: Accepted publickey for core from 10.0.0.1 port 60256 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:58:46.043671 sshd[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:46.047495 systemd-logind[1540]: New session 26 of user core.
Nov 12 20:58:46.056400 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 20:58:46.066131 kubelet[2743]: I1112 20:58:46.066109 2743 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="46b473b4-a8bc-43d6-82fe-829b7f9fc0c6" path="/var/lib/kubelet/pods/46b473b4-a8bc-43d6-82fe-829b7f9fc0c6/volumes"
Nov 12 20:58:46.066702 kubelet[2743]: I1112 20:58:46.066681 2743 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="87e6ecd2-8e03-4e1d-b346-58c0b0524c41" path="/var/lib/kubelet/pods/87e6ecd2-8e03-4e1d-b346-58c0b0524c41/volumes"
Nov 12 20:58:46.629503 sshd[4580]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:46.640411 systemd[1]: Started sshd@26-10.0.0.153:22-10.0.0.1:60262.service - OpenSSH per-connection server daemon (10.0.0.1:60262).
Nov 12 20:58:46.641964 kubelet[2743]: I1112 20:58:46.640814 2743 topology_manager.go:215] "Topology Admit Handler" podUID="fa9d4f86-b223-4017-b216-0fab35348ae9" podNamespace="kube-system" podName="cilium-jhk4p"
Nov 12 20:58:46.641964 kubelet[2743]: E1112 20:58:46.640886 2743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46b473b4-a8bc-43d6-82fe-829b7f9fc0c6" containerName="cilium-operator"
Nov 12 20:58:46.641964 kubelet[2743]: E1112 20:58:46.640898 2743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87e6ecd2-8e03-4e1d-b346-58c0b0524c41" containerName="clean-cilium-state"
Nov 12 20:58:46.641964 kubelet[2743]: E1112 20:58:46.640906 2743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87e6ecd2-8e03-4e1d-b346-58c0b0524c41" containerName="mount-cgroup"
Nov 12 20:58:46.641964 kubelet[2743]: E1112 20:58:46.640913 2743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87e6ecd2-8e03-4e1d-b346-58c0b0524c41" containerName="apply-sysctl-overwrites"
Nov 12 20:58:46.641964 kubelet[2743]: E1112 20:58:46.640922 2743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87e6ecd2-8e03-4e1d-b346-58c0b0524c41" containerName="mount-bpf-fs"
Nov 12 20:58:46.641964 kubelet[2743]: E1112 20:58:46.640930 2743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87e6ecd2-8e03-4e1d-b346-58c0b0524c41" containerName="cilium-agent"
Nov 12 20:58:46.641964 kubelet[2743]: I1112 20:58:46.640965 2743 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e6ecd2-8e03-4e1d-b346-58c0b0524c41" containerName="cilium-agent"
Nov 12 20:58:46.641964 kubelet[2743]: I1112 20:58:46.640971 2743 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b473b4-a8bc-43d6-82fe-829b7f9fc0c6" containerName="cilium-operator"
Nov 12 20:58:46.645853 systemd[1]: sshd@25-10.0.0.153:22-10.0.0.1:60256.service: Deactivated successfully.
Nov 12 20:58:46.649329 systemd-logind[1540]: Session 26 logged out. Waiting for processes to exit.
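The NotFound churn above is benign: after each RemoveContainer, kubelet asks the runtime for the container's status, and a gRPC NotFound simply means the container is already gone. A sketch of that tolerant check against the CRI API, not kubelet's actual code path; the socket path and truncated container ID are assumptions:

// cristatus.go: treat NotFound from ContainerStatus as "already removed".
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	resp, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{
		ContainerId: "e0a30a1c3817a4bc...", // truncated, illustrative
	})
	if status.Code(err) == codes.NotFound {
		// This is the case logged above: deletion already completed.
		log.Println("container already removed; nothing to do")
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("state: %s", resp.GetStatus().GetState())
}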
Nov 12 20:58:46.650636 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 20:58:46.654355 systemd-logind[1540]: Removed session 26.
Nov 12 20:58:46.670691 kubelet[2743]: I1112 20:58:46.667509 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa9d4f86-b223-4017-b216-0fab35348ae9-etc-cni-netd\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.670691 kubelet[2743]: I1112 20:58:46.667557 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fa9d4f86-b223-4017-b216-0fab35348ae9-cilium-run\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.670691 kubelet[2743]: I1112 20:58:46.667582 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa9d4f86-b223-4017-b216-0fab35348ae9-cilium-cgroup\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.670691 kubelet[2743]: I1112 20:58:46.667605 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa9d4f86-b223-4017-b216-0fab35348ae9-cni-path\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.670691 kubelet[2743]: I1112 20:58:46.667644 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fa9d4f86-b223-4017-b216-0fab35348ae9-bpf-maps\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.670691 kubelet[2743]: I1112 20:58:46.667667 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa9d4f86-b223-4017-b216-0fab35348ae9-host-proc-sys-net\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.674050 kubelet[2743]: I1112 20:58:46.671468 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa9d4f86-b223-4017-b216-0fab35348ae9-hubble-tls\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.674050 kubelet[2743]: I1112 20:58:46.672920 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8st7\" (UniqueName: \"kubernetes.io/projected/fa9d4f86-b223-4017-b216-0fab35348ae9-kube-api-access-f8st7\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.675374 kubelet[2743]: I1112 20:58:46.674186 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fa9d4f86-b223-4017-b216-0fab35348ae9-cilium-ipsec-secrets\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.678093 kubelet[2743]: I1112 20:58:46.678054 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa9d4f86-b223-4017-b216-0fab35348ae9-host-proc-sys-kernel\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.682083 kubelet[2743]: I1112 20:58:46.681209 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa9d4f86-b223-4017-b216-0fab35348ae9-clustermesh-secrets\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.682083 kubelet[2743]: I1112 20:58:46.681258 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa9d4f86-b223-4017-b216-0fab35348ae9-hostproc\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.682083 kubelet[2743]: I1112 20:58:46.681281 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa9d4f86-b223-4017-b216-0fab35348ae9-lib-modules\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.682083 kubelet[2743]: I1112 20:58:46.681297 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa9d4f86-b223-4017-b216-0fab35348ae9-xtables-lock\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.682083 kubelet[2743]: I1112 20:58:46.681321 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa9d4f86-b223-4017-b216-0fab35348ae9-cilium-config-path\") pod \"cilium-jhk4p\" (UID: \"fa9d4f86-b223-4017-b216-0fab35348ae9\") " pod="kube-system/cilium-jhk4p"
Nov 12 20:58:46.694676 sshd[4595]: Accepted publickey for core from 10.0.0.1 port 60262 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:58:46.696668 sshd[4595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:46.701034 systemd-logind[1540]: New session 27 of user core.
Nov 12 20:58:46.705432 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 12 20:58:46.757276 sshd[4595]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:46.769482 systemd[1]: Started sshd@27-10.0.0.153:22-10.0.0.1:60278.service - OpenSSH per-connection server daemon (10.0.0.1:60278).
Nov 12 20:58:46.770171 systemd[1]: sshd@26-10.0.0.153:22-10.0.0.1:60262.service: Deactivated successfully.
Nov 12 20:58:46.773677 systemd[1]: session-27.scope: Deactivated successfully.
Nov 12 20:58:46.774648 systemd-logind[1540]: Session 27 logged out. Waiting for processes to exit.
Nov 12 20:58:46.775740 systemd-logind[1540]: Removed session 27.
Nov 12 20:58:46.810694 sshd[4604]: Accepted publickey for core from 10.0.0.1 port 60278 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:58:46.812320 sshd[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:46.816208 systemd-logind[1540]: New session 28 of user core.
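The VerifyControllerAttachedVolume lines enumerate the volume set of the replacement pod cilium-jhk4p: mostly hostPath mounts plus secret, configMap, and projected sources. A condensed sketch of how such a volume list looks when built from the Kubernetes API types; the host paths and object names are conventional Cilium values assumed for illustration, not read from this log:

// volumes.go: a representative subset of the cilium-jhk4p volume set.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPath builds a hostPath volume, the dominant kind in the list above.
func hostPath(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	volumes := []corev1.Volume{
		hostPath("cilium-run", "/var/run/cilium"),   // assumed path
		hostPath("bpf-maps", "/sys/fs/bpf"),         // assumed path
		hostPath("cni-path", "/opt/cni/bin"),        // assumed path
		hostPath("etc-cni-netd", "/etc/cni/net.d"),  // assumed path
		hostPath("lib-modules", "/lib/modules"),     // assumed path
		hostPath("xtables-lock", "/run/xtables.lock"), // assumed path
		{
			Name: "clustermesh-secrets",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"}, // assumed name
			},
		},
		{
			Name: "cilium-config-path",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"}, // assumed name
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println(v.Name)
	}
}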
Nov 12 20:58:46.834423 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 12 20:58:46.956952 kubelet[2743]: E1112 20:58:46.956893 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:58:46.957528 containerd[1559]: time="2024-11-12T20:58:46.957487294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jhk4p,Uid:fa9d4f86-b223-4017-b216-0fab35348ae9,Namespace:kube-system,Attempt:0,}" Nov 12 20:58:46.980175 containerd[1559]: time="2024-11-12T20:58:46.979494737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:58:46.980175 containerd[1559]: time="2024-11-12T20:58:46.980124685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:58:46.980319 containerd[1559]: time="2024-11-12T20:58:46.980207733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:58:46.980439 containerd[1559]: time="2024-11-12T20:58:46.980368378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:58:47.024389 containerd[1559]: time="2024-11-12T20:58:47.024346613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jhk4p,Uid:fa9d4f86-b223-4017-b216-0fab35348ae9,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc6fd87bf09c9a6d175a51ffceb7851ebb10605d381bf6be40a7a0b1750fa812\"" Nov 12 20:58:47.025539 kubelet[2743]: E1112 20:58:47.025492 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:58:47.027959 containerd[1559]: time="2024-11-12T20:58:47.027923930Z" level=info msg="CreateContainer within sandbox \"bc6fd87bf09c9a6d175a51ffceb7851ebb10605d381bf6be40a7a0b1750fa812\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 20:58:47.043244 containerd[1559]: time="2024-11-12T20:58:47.043194752Z" level=info msg="CreateContainer within sandbox \"bc6fd87bf09c9a6d175a51ffceb7851ebb10605d381bf6be40a7a0b1750fa812\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"10e7741c75c768b418219e56419848d81e8efc69e7e3a3449eb50577d5c64ff4\"" Nov 12 20:58:47.043843 containerd[1559]: time="2024-11-12T20:58:47.043810543Z" level=info msg="StartContainer for \"10e7741c75c768b418219e56419848d81e8efc69e7e3a3449eb50577d5c64ff4\"" Nov 12 20:58:47.096971 containerd[1559]: time="2024-11-12T20:58:47.096922569Z" level=info msg="StartContainer for \"10e7741c75c768b418219e56419848d81e8efc69e7e3a3449eb50577d5c64ff4\" returns successfully" Nov 12 20:58:47.137127 containerd[1559]: time="2024-11-12T20:58:47.137039337Z" level=info msg="shim disconnected" id=10e7741c75c768b418219e56419848d81e8efc69e7e3a3449eb50577d5c64ff4 namespace=k8s.io Nov 12 20:58:47.137127 containerd[1559]: time="2024-11-12T20:58:47.137113718Z" level=warning msg="cleaning up after shim disconnected" id=10e7741c75c768b418219e56419848d81e8efc69e7e3a3449eb50577d5c64ff4 namespace=k8s.io Nov 12 20:58:47.140213 containerd[1559]: time="2024-11-12T20:58:47.140186586Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:58:47.805623 kubelet[2743]: E1112 20:58:47.805580 2743 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:58:47.823108 containerd[1559]: time="2024-11-12T20:58:47.823042257Z" level=info msg="CreateContainer within sandbox \"bc6fd87bf09c9a6d175a51ffceb7851ebb10605d381bf6be40a7a0b1750fa812\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 20:58:47.833459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2045275974.mount: Deactivated successfully. Nov 12 20:58:47.834475 containerd[1559]: time="2024-11-12T20:58:47.834445141Z" level=info msg="CreateContainer within sandbox \"bc6fd87bf09c9a6d175a51ffceb7851ebb10605d381bf6be40a7a0b1750fa812\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3c8e93b0f5d52d055efeddb3833ce074eff8e9ffd07c97e39b90be9de4bd19e6\"" Nov 12 20:58:47.835048 containerd[1559]: time="2024-11-12T20:58:47.835005175Z" level=info msg="StartContainer for \"3c8e93b0f5d52d055efeddb3833ce074eff8e9ffd07c97e39b90be9de4bd19e6\"" Nov 12 20:58:47.887326 containerd[1559]: time="2024-11-12T20:58:47.887266356Z" level=info msg="StartContainer for \"3c8e93b0f5d52d055efeddb3833ce074eff8e9ffd07c97e39b90be9de4bd19e6\" returns successfully" Nov 12 20:58:47.915528 containerd[1559]: time="2024-11-12T20:58:47.914952987Z" level=info msg="shim disconnected" id=3c8e93b0f5d52d055efeddb3833ce074eff8e9ffd07c97e39b90be9de4bd19e6 namespace=k8s.io Nov 12 20:58:47.915528 containerd[1559]: time="2024-11-12T20:58:47.915015145Z" level=warning msg="cleaning up after shim disconnected" id=3c8e93b0f5d52d055efeddb3833ce074eff8e9ffd07c97e39b90be9de4bd19e6 namespace=k8s.io Nov 12 20:58:47.915528 containerd[1559]: time="2024-11-12T20:58:47.915026127Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:58:48.064622 kubelet[2743]: E1112 20:58:48.064484 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:58:48.138161 kubelet[2743]: E1112 20:58:48.138101 2743 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 20:58:48.787431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c8e93b0f5d52d055efeddb3833ce074eff8e9ffd07c97e39b90be9de4bd19e6-rootfs.mount: Deactivated successfully. Nov 12 20:58:48.808396 kubelet[2743]: E1112 20:58:48.808372 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:58:48.809989 containerd[1559]: time="2024-11-12T20:58:48.809906477Z" level=info msg="CreateContainer within sandbox \"bc6fd87bf09c9a6d175a51ffceb7851ebb10605d381bf6be40a7a0b1750fa812\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 20:58:49.029349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1528298901.mount: Deactivated successfully. 
Nov 12 20:58:49.064050 kubelet[2743]: E1112 20:58:49.063935 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:58:49.189619 containerd[1559]: time="2024-11-12T20:58:49.189555414Z" level=info msg="CreateContainer within sandbox \"bc6fd87bf09c9a6d175a51ffceb7851ebb10605d381bf6be40a7a0b1750fa812\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"00226b7675e94696f1088b172e81b2a6c24107edb83e1b6f0a58df0c82267976\""
Nov 12 20:58:49.190213 containerd[1559]: time="2024-11-12T20:58:49.190180239Z" level=info msg="StartContainer for \"00226b7675e94696f1088b172e81b2a6c24107edb83e1b6f0a58df0c82267976\""
Nov 12 20:58:49.260411 containerd[1559]: time="2024-11-12T20:58:49.260366053Z" level=info msg="StartContainer for \"00226b7675e94696f1088b172e81b2a6c24107edb83e1b6f0a58df0c82267976\" returns successfully"
Nov 12 20:58:49.288532 containerd[1559]: time="2024-11-12T20:58:49.288439650Z" level=info msg="shim disconnected" id=00226b7675e94696f1088b172e81b2a6c24107edb83e1b6f0a58df0c82267976 namespace=k8s.io
Nov 12 20:58:49.288532 containerd[1559]: time="2024-11-12T20:58:49.288515063Z" level=warning msg="cleaning up after shim disconnected" id=00226b7675e94696f1088b172e81b2a6c24107edb83e1b6f0a58df0c82267976 namespace=k8s.io
Nov 12 20:58:49.288532 containerd[1559]: time="2024-11-12T20:58:49.288527056Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:58:49.787464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00226b7675e94696f1088b172e81b2a6c24107edb83e1b6f0a58df0c82267976-rootfs.mount: Deactivated successfully.
Nov 12 20:58:49.819545 kubelet[2743]: E1112 20:58:49.819509 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:58:49.821683 containerd[1559]: time="2024-11-12T20:58:49.821642217Z" level=info msg="CreateContainer within sandbox \"bc6fd87bf09c9a6d175a51ffceb7851ebb10605d381bf6be40a7a0b1750fa812\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 12 20:58:49.837939 containerd[1559]: time="2024-11-12T20:58:49.837886032Z" level=info msg="CreateContainer within sandbox \"bc6fd87bf09c9a6d175a51ffceb7851ebb10605d381bf6be40a7a0b1750fa812\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6cd5c7e08e2d1f76f42c6c93e78039024f4be36a478469e6ca5c8db379ce3772\""
Nov 12 20:58:49.838437 containerd[1559]: time="2024-11-12T20:58:49.838406750Z" level=info msg="StartContainer for \"6cd5c7e08e2d1f76f42c6c93e78039024f4be36a478469e6ca5c8db379ce3772\""
Nov 12 20:58:49.888437 containerd[1559]: time="2024-11-12T20:58:49.888393799Z" level=info msg="StartContainer for \"6cd5c7e08e2d1f76f42c6c93e78039024f4be36a478469e6ca5c8db379ce3772\" returns successfully"
Nov 12 20:58:49.908755 containerd[1559]: time="2024-11-12T20:58:49.908669680Z" level=info msg="shim disconnected" id=6cd5c7e08e2d1f76f42c6c93e78039024f4be36a478469e6ca5c8db379ce3772 namespace=k8s.io
Nov 12 20:58:49.908755 containerd[1559]: time="2024-11-12T20:58:49.908752818Z" level=warning msg="cleaning up after shim disconnected" id=6cd5c7e08e2d1f76f42c6c93e78039024f4be36a478469e6ca5c8db379ce3772 namespace=k8s.io
Nov 12 20:58:49.908971 containerd[1559]: time="2024-11-12T20:58:49.908765372Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:58:50.788006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cd5c7e08e2d1f76f42c6c93e78039024f4be36a478469e6ca5c8db379ce3772-rootfs.mount: Deactivated successfully.
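The mount-bpf-fs step above is the Cilium init container that mounts a BPF filesystem so the agent can pin its eBPF maps, and clean-cilium-state clears stale state from any previous agent run. Whether the mount took effect can be confirmed from /proc/mounts; the /sys/fs/bpf mount point is Cilium's usual choice but is an assumption here, not something the log states:

    # Sketch: confirm a BPF filesystem is mounted after mount-bpf-fs runs.
    # Filesystem type "bpf" is standard; the mount point (typically
    # /sys/fs/bpf) depends on configuration.
    with open("/proc/mounts") as f:
        bpf = [parts[1] for parts in (line.split() for line in f)
               if parts[2] == "bpf"]
    print("bpffs mounted at:", bpf or "nowhere yet")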
Nov 12 20:58:50.826570 kubelet[2743]: E1112 20:58:50.825311 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:58:50.831865 containerd[1559]: time="2024-11-12T20:58:50.831811589Z" level=info msg="CreateContainer within sandbox \"bc6fd87bf09c9a6d175a51ffceb7851ebb10605d381bf6be40a7a0b1750fa812\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 12 20:58:50.847214 containerd[1559]: time="2024-11-12T20:58:50.847158864Z" level=info msg="CreateContainer within sandbox \"bc6fd87bf09c9a6d175a51ffceb7851ebb10605d381bf6be40a7a0b1750fa812\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"181b3eef36c49d8ebc2fd896b4d94ca5d16379610d58ce63fc33ec5acbaff176\""
Nov 12 20:58:50.847810 containerd[1559]: time="2024-11-12T20:58:50.847774452Z" level=info msg="StartContainer for \"181b3eef36c49d8ebc2fd896b4d94ca5d16379610d58ce63fc33ec5acbaff176\""
Nov 12 20:58:50.915483 containerd[1559]: time="2024-11-12T20:58:50.915433133Z" level=info msg="StartContainer for \"181b3eef36c49d8ebc2fd896b4d94ca5d16379610d58ce63fc33ec5acbaff176\" returns successfully"
Nov 12 20:58:50.967694 kubelet[2743]: I1112 20:58:50.967659 2743 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-12T20:58:50Z","lastTransitionTime":"2024-11-12T20:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 12 20:58:51.064683 kubelet[2743]: E1112 20:58:51.064494 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:58:51.323185 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 12 20:58:51.830801 kubelet[2743]: E1112 20:58:51.830766 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:58:51.844490 kubelet[2743]: I1112 20:58:51.844432 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jhk4p" podStartSLOduration=5.844372922 podStartE2EDuration="5.844372922s" podCreationTimestamp="2024-11-12 20:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:58:51.844283883 +0000 UTC m=+94.206411705" watchObservedRunningTime="2024-11-12 20:58:51.844372922 +0000 UTC m=+94.206500744"
Nov 12 20:58:52.064692 kubelet[2743]: E1112 20:58:52.064645 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:58:52.958819 kubelet[2743]: E1112 20:58:52.958775 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:58:54.599283 systemd-networkd[1246]: lxc_health: Link UP
Nov 12 20:58:54.607300 systemd-networkd[1246]: lxc_health: Gained carrier
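The pod_startup_latency_tracker entry reports podStartSLOduration=5.844372922s, which is the watch-observed running time minus podCreationTimestamp; the zero-valued firstStartedPulling/lastFinishedPulling timestamps indicate no image pull time was counted. The arithmetic checks out against the two timestamps quoted in the entry:

    # Reproducing the reported startup duration from the timestamps in the
    # log entry (nanoseconds truncated to microseconds by datetime).
    from datetime import datetime, timezone

    created = datetime(2024, 11, 12, 20, 58, 46, tzinfo=timezone.utc)
    running = datetime(2024, 11, 12, 20, 58, 51, 844372, tzinfo=timezone.utc)
    print(running - created)  # 0:00:05.844372, matching 5.844372922s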
Nov 12 20:58:54.958844 kubelet[2743]: E1112 20:58:54.958806 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:58:55.841949 kubelet[2743]: E1112 20:58:55.841913 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:58:56.393387 systemd-networkd[1246]: lxc_health: Gained IPv6LL
Nov 12 20:58:56.844339 kubelet[2743]: E1112 20:58:56.844197 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:58:59.910559 sshd[4604]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:59.914486 systemd[1]: sshd@27-10.0.0.153:22-10.0.0.1:60278.service: Deactivated successfully.
Nov 12 20:58:59.916963 systemd[1]: session-28.scope: Deactivated successfully.
Nov 12 20:58:59.917654 systemd-logind[1540]: Session 28 logged out. Waiting for processes to exit.
Nov 12 20:58:59.918538 systemd-logind[1540]: Removed session 28.
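lxc_health is the virtual interface Cilium creates for its node health checks, so systemd-networkd reporting Link UP, carrier, and an IPv6 link-local address is the first outward sign that the new agent's datapath is functioning. Its state can be read directly from sysfs on the node (standard kernel paths; the interface name is taken from the log):

    # Sketch: inspect the lxc_health interface via sysfs.
    from pathlib import Path

    iface = Path("/sys/class/net/lxc_health")
    if iface.exists():
        print("operstate:", (iface / "operstate").read_text().strip())
    else:
        print("lxc_health absent; cilium-agent may not be running")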