Oct 9 07:23:30.863297 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:19:34 -00 2024 Oct 9 07:23:30.863317 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd Oct 9 07:23:30.863328 kernel: BIOS-provided physical RAM map: Oct 9 07:23:30.863334 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 9 07:23:30.863362 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 9 07:23:30.863368 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 9 07:23:30.863375 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 9 07:23:30.863382 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 9 07:23:30.863388 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 9 07:23:30.863394 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 9 07:23:30.863403 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Oct 9 07:23:30.863409 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Oct 9 07:23:30.863415 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Oct 9 07:23:30.863422 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Oct 9 07:23:30.863429 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 9 07:23:30.863438 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 9 07:23:30.863445 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 9 07:23:30.863451 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 9 07:23:30.863458 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 9 07:23:30.863465 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 9 07:23:30.863471 kernel: NX (Execute Disable) protection: active Oct 9 07:23:30.863478 kernel: APIC: Static calls initialized Oct 9 07:23:30.863484 kernel: efi: EFI v2.7 by EDK II Oct 9 07:23:30.863491 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Oct 9 07:23:30.863498 kernel: SMBIOS 2.8 present. Oct 9 07:23:30.863504 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Oct 9 07:23:30.863513 kernel: Hypervisor detected: KVM Oct 9 07:23:30.863520 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 9 07:23:30.863526 kernel: kvm-clock: using sched offset of 3963950836 cycles Oct 9 07:23:30.863533 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 9 07:23:30.863540 kernel: tsc: Detected 2794.748 MHz processor Oct 9 07:23:30.863547 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 9 07:23:30.863555 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 9 07:23:30.863561 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Oct 9 07:23:30.863568 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 9 07:23:30.863575 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 9 07:23:30.863584 kernel: Using GB pages for direct mapping Oct 9 07:23:30.863591 kernel: Secure boot disabled Oct 9 07:23:30.863597 kernel: ACPI: Early table checksum verification disabled Oct 9 07:23:30.863604 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 9 07:23:30.863614 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 9 07:23:30.863622 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:23:30.863629 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:23:30.863638 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 9 07:23:30.863645 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:23:30.863652 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:23:30.863659 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:23:30.863666 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:23:30.863673 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 9 07:23:30.863681 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Oct 9 07:23:30.863690 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Oct 9 07:23:30.863697 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 9 07:23:30.863704 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Oct 9 07:23:30.863711 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Oct 9 07:23:30.863718 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Oct 9 07:23:30.863725 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Oct 9 07:23:30.863732 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Oct 9 07:23:30.863739 kernel: No NUMA configuration found Oct 9 07:23:30.863746 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Oct 9 07:23:30.863755 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Oct 9 07:23:30.863762 kernel: Zone ranges: Oct 9 07:23:30.863769 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 9 07:23:30.863776 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Oct 9 07:23:30.863783 kernel: Normal empty Oct 9 07:23:30.863790 
kernel: Movable zone start for each node Oct 9 07:23:30.863797 kernel: Early memory node ranges Oct 9 07:23:30.863804 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 9 07:23:30.863811 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 9 07:23:30.863818 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 9 07:23:30.863827 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Oct 9 07:23:30.863834 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Oct 9 07:23:30.863841 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Oct 9 07:23:30.863848 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Oct 9 07:23:30.863855 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 07:23:30.863863 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 9 07:23:30.863870 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 9 07:23:30.863877 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 07:23:30.863884 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Oct 9 07:23:30.863893 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Oct 9 07:23:30.863900 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Oct 9 07:23:30.863907 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 9 07:23:30.863914 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 9 07:23:30.863921 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 9 07:23:30.863928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 9 07:23:30.863936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 9 07:23:30.863945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 9 07:23:30.863952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 9 07:23:30.863963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 9 07:23:30.863970 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information Oct 9 07:23:30.863977 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 9 07:23:30.863984 kernel: TSC deadline timer available Oct 9 07:23:30.863992 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 9 07:23:30.863999 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 9 07:23:30.864006 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 9 07:23:30.864013 kernel: kvm-guest: setup PV sched yield Oct 9 07:23:30.864020 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Oct 9 07:23:30.864027 kernel: Booting paravirtualized kernel on KVM Oct 9 07:23:30.864037 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 9 07:23:30.864044 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 9 07:23:30.864051 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Oct 9 07:23:30.864058 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Oct 9 07:23:30.864065 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 9 07:23:30.864072 kernel: kvm-guest: PV spinlocks enabled Oct 9 07:23:30.864079 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 9 07:23:30.864088 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd Oct 9 07:23:30.864098 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Oct 9 07:23:30.864105 kernel: random: crng init done Oct 9 07:23:30.864112 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 9 07:23:30.864119 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 9 07:23:30.864126 kernel: Fallback order for Node 0: 0 Oct 9 07:23:30.864134 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Oct 9 07:23:30.864141 kernel: Policy zone: DMA32 Oct 9 07:23:30.864148 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 9 07:23:30.864155 kernel: Memory: 2389472K/2567000K available (12288K kernel code, 2304K rwdata, 22648K rodata, 49452K init, 1888K bss, 177268K reserved, 0K cma-reserved) Oct 9 07:23:30.864165 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 9 07:23:30.864172 kernel: ftrace: allocating 37706 entries in 148 pages Oct 9 07:23:30.864179 kernel: ftrace: allocated 148 pages with 3 groups Oct 9 07:23:30.864186 kernel: Dynamic Preempt: voluntary Oct 9 07:23:30.864200 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 9 07:23:30.864210 kernel: rcu: RCU event tracing is enabled. Oct 9 07:23:30.864218 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 9 07:23:30.864226 kernel: Trampoline variant of Tasks RCU enabled. Oct 9 07:23:30.864233 kernel: Rude variant of Tasks RCU enabled. Oct 9 07:23:30.864241 kernel: Tracing variant of Tasks RCU enabled. Oct 9 07:23:30.864248 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 9 07:23:30.864256 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 9 07:23:30.864266 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 9 07:23:30.864280 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 9 07:23:30.864287 kernel: Console: colour dummy device 80x25 Oct 9 07:23:30.864295 kernel: printk: console [ttyS0] enabled Oct 9 07:23:30.864302 kernel: ACPI: Core revision 20230628 Oct 9 07:23:30.864312 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 9 07:23:30.864320 kernel: APIC: Switch to symmetric I/O mode setup Oct 9 07:23:30.864327 kernel: x2apic enabled Oct 9 07:23:30.864335 kernel: APIC: Switched APIC routing to: physical x2apic Oct 9 07:23:30.864416 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 9 07:23:30.864423 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 9 07:23:30.864431 kernel: kvm-guest: setup PV IPIs Oct 9 07:23:30.864438 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 9 07:23:30.864446 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 9 07:23:30.864456 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 9 07:23:30.864464 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 9 07:23:30.864471 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 9 07:23:30.864478 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 9 07:23:30.864486 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 9 07:23:30.864494 kernel: Spectre V2 : Mitigation: Retpolines Oct 9 07:23:30.864501 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 9 07:23:30.864508 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 9 07:23:30.864516 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 9 07:23:30.864525 kernel: RETBleed: Mitigation: untrained return thunk Oct 9 07:23:30.864533 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 9 07:23:30.864540 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 9 07:23:30.864548 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 9 07:23:30.864556 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 9 07:23:30.864564 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 9 07:23:30.864571 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 9 07:23:30.864579 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 9 07:23:30.864588 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 9 07:23:30.864596 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 9 07:23:30.864603 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Oct 9 07:23:30.864611 kernel: Freeing SMP alternatives memory: 32K Oct 9 07:23:30.864618 kernel: pid_max: default: 32768 minimum: 301 Oct 9 07:23:30.864625 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Oct 9 07:23:30.864633 kernel: SELinux: Initializing. Oct 9 07:23:30.864640 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 07:23:30.864648 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 07:23:30.864657 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 9 07:23:30.864665 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:23:30.864672 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:23:30.864680 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:23:30.864687 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 9 07:23:30.864695 kernel: ... version: 0 Oct 9 07:23:30.864702 kernel: ... bit width: 48 Oct 9 07:23:30.864709 kernel: ... generic registers: 6 Oct 9 07:23:30.864717 kernel: ... value mask: 0000ffffffffffff Oct 9 07:23:30.864726 kernel: ... max period: 00007fffffffffff Oct 9 07:23:30.864734 kernel: ... fixed-purpose events: 0 Oct 9 07:23:30.864741 kernel: ... event mask: 000000000000003f Oct 9 07:23:30.864749 kernel: signal: max sigframe size: 1776 Oct 9 07:23:30.864756 kernel: rcu: Hierarchical SRCU implementation. Oct 9 07:23:30.864763 kernel: rcu: Max phase no-delay instances is 400. Oct 9 07:23:30.864771 kernel: smp: Bringing up secondary CPUs ... Oct 9 07:23:30.864778 kernel: smpboot: x86: Booting SMP configuration: Oct 9 07:23:30.864786 kernel: .... 
node #0, CPUs: #1 #2 #3 Oct 9 07:23:30.864795 kernel: smp: Brought up 1 node, 4 CPUs Oct 9 07:23:30.864802 kernel: smpboot: Max logical packages: 1 Oct 9 07:23:30.864810 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 9 07:23:30.864817 kernel: devtmpfs: initialized Oct 9 07:23:30.864824 kernel: x86/mm: Memory block size: 128MB Oct 9 07:23:30.864832 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 9 07:23:30.864840 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 9 07:23:30.864847 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Oct 9 07:23:30.864855 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 9 07:23:30.864864 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 9 07:23:30.864872 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 9 07:23:30.864879 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 9 07:23:30.864887 kernel: pinctrl core: initialized pinctrl subsystem Oct 9 07:23:30.864894 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 9 07:23:30.864902 kernel: audit: initializing netlink subsys (disabled) Oct 9 07:23:30.864909 kernel: audit: type=2000 audit(1728458610.873:1): state=initialized audit_enabled=0 res=1 Oct 9 07:23:30.864916 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 9 07:23:30.864924 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 9 07:23:30.864934 kernel: cpuidle: using governor menu Oct 9 07:23:30.864941 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 9 07:23:30.864948 kernel: dca service started, version 1.12.1 Oct 9 07:23:30.864956 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 9 07:23:30.864964 kernel: 
PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 9 07:23:30.864971 kernel: PCI: Using configuration type 1 for base access Oct 9 07:23:30.864979 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 9 07:23:30.864986 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 9 07:23:30.864994 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 9 07:23:30.865003 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 9 07:23:30.865011 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 9 07:23:30.865018 kernel: ACPI: Added _OSI(Module Device) Oct 9 07:23:30.865025 kernel: ACPI: Added _OSI(Processor Device) Oct 9 07:23:30.865033 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 9 07:23:30.865040 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 9 07:23:30.865048 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 9 07:23:30.865055 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 9 07:23:30.865062 kernel: ACPI: Interpreter enabled Oct 9 07:23:30.865072 kernel: ACPI: PM: (supports S0 S3 S5) Oct 9 07:23:30.865079 kernel: ACPI: Using IOAPIC for interrupt routing Oct 9 07:23:30.865087 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 9 07:23:30.865094 kernel: PCI: Using E820 reservations for host bridge windows Oct 9 07:23:30.865102 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 9 07:23:30.865109 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 9 07:23:30.865282 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 9 07:23:30.865436 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 9 07:23:30.865561 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 9 07:23:30.865571 kernel: PCI host bridge to bus 0000:00 Oct 9 07:23:30.865692 
kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 9 07:23:30.865802 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 9 07:23:30.865909 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 9 07:23:30.866018 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 9 07:23:30.866126 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 9 07:23:30.866238 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Oct 9 07:23:30.866388 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 9 07:23:30.866527 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 9 07:23:30.866655 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 9 07:23:30.866774 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Oct 9 07:23:30.866891 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Oct 9 07:23:30.867013 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Oct 9 07:23:30.867133 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Oct 9 07:23:30.867253 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 9 07:23:30.867406 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 9 07:23:30.867529 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Oct 9 07:23:30.867650 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Oct 9 07:23:30.867771 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Oct 9 07:23:30.867911 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 9 07:23:30.868035 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Oct 9 07:23:30.868154 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Oct 9 07:23:30.868272 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Oct 9 07:23:30.868420 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 Oct 9 07:23:30.868542 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Oct 9 07:23:30.868665 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Oct 9 07:23:30.868794 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Oct 9 07:23:30.868915 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Oct 9 07:23:30.869041 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 9 07:23:30.869160 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 9 07:23:30.869293 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 9 07:23:30.869474 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Oct 9 07:23:30.869630 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Oct 9 07:23:30.869756 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 9 07:23:30.869875 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Oct 9 07:23:30.869886 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 9 07:23:30.869893 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 9 07:23:30.869901 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 9 07:23:30.869909 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 9 07:23:30.869916 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 9 07:23:30.869927 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 9 07:23:30.869934 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 9 07:23:30.869942 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 9 07:23:30.869949 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 9 07:23:30.869957 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 9 07:23:30.869964 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 9 07:23:30.869971 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 9 07:23:30.869979 
kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 9 07:23:30.869986 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 9 07:23:30.869996 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 9 07:23:30.870003 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 9 07:23:30.870011 kernel: iommu: Default domain type: Translated Oct 9 07:23:30.870018 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 9 07:23:30.870026 kernel: efivars: Registered efivars operations Oct 9 07:23:30.870033 kernel: PCI: Using ACPI for IRQ routing Oct 9 07:23:30.870041 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 9 07:23:30.870048 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 9 07:23:30.870056 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Oct 9 07:23:30.870065 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Oct 9 07:23:30.870072 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Oct 9 07:23:30.870192 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 9 07:23:30.870320 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 9 07:23:30.870486 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 9 07:23:30.870497 kernel: vgaarb: loaded Oct 9 07:23:30.870505 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 9 07:23:30.870512 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 9 07:23:30.870520 kernel: clocksource: Switched to clocksource kvm-clock Oct 9 07:23:30.870531 kernel: VFS: Disk quotas dquot_6.6.0 Oct 9 07:23:30.870539 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 9 07:23:30.870547 kernel: pnp: PnP ACPI init Oct 9 07:23:30.870676 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 9 07:23:30.870698 kernel: pnp: PnP ACPI: found 6 devices Oct 9 07:23:30.870711 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, 
max_idle_ns: 2085701024 ns Oct 9 07:23:30.870726 kernel: NET: Registered PF_INET protocol family Oct 9 07:23:30.870736 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 9 07:23:30.870755 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 9 07:23:30.870767 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 9 07:23:30.870782 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 9 07:23:30.870795 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 9 07:23:30.870808 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 9 07:23:30.870820 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 07:23:30.870833 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 07:23:30.870843 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 9 07:23:30.870850 kernel: NET: Registered PF_XDP protocol family Oct 9 07:23:30.871004 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Oct 9 07:23:30.871152 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Oct 9 07:23:30.871266 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 9 07:23:30.871396 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 9 07:23:30.871506 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 9 07:23:30.871615 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 9 07:23:30.871725 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 9 07:23:30.871835 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Oct 9 07:23:30.871849 kernel: PCI: CLS 0 bytes, default 64 Oct 9 07:23:30.871856 kernel: Initialise system trusted keyrings Oct 9 07:23:30.871864 kernel: workingset: timestamp_bits=39 
max_order=20 bucket_order=0 Oct 9 07:23:30.871872 kernel: Key type asymmetric registered Oct 9 07:23:30.871880 kernel: Asymmetric key parser 'x509' registered Oct 9 07:23:30.871887 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 9 07:23:30.871895 kernel: io scheduler mq-deadline registered Oct 9 07:23:30.871902 kernel: io scheduler kyber registered Oct 9 07:23:30.871910 kernel: io scheduler bfq registered Oct 9 07:23:30.871919 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 9 07:23:30.871927 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 9 07:23:30.871935 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 9 07:23:30.871943 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 9 07:23:30.871950 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 9 07:23:30.871958 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 9 07:23:30.871966 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 9 07:23:30.871973 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 9 07:23:30.871981 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 9 07:23:30.872106 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 9 07:23:30.872220 kernel: rtc_cmos 00:04: registered as rtc0 Oct 9 07:23:30.872231 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 9 07:23:30.872396 kernel: rtc_cmos 00:04: setting system clock to 2024-10-09T07:23:30 UTC (1728458610) Oct 9 07:23:30.872512 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 9 07:23:30.872523 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 9 07:23:30.872530 kernel: efifb: probing for efifb Oct 9 07:23:30.872539 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Oct 9 07:23:30.872550 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Oct 9 07:23:30.872557 kernel: efifb: scrolling: redraw Oct 9 
07:23:30.872573 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Oct 9 07:23:30.872581 kernel: Console: switching to colour frame buffer device 100x37 Oct 9 07:23:30.872601 kernel: fb0: EFI VGA frame buffer device Oct 9 07:23:30.872639 kernel: pstore: Using crash dump compression: deflate Oct 9 07:23:30.872649 kernel: pstore: Registered efi_pstore as persistent store backend Oct 9 07:23:30.872657 kernel: NET: Registered PF_INET6 protocol family Oct 9 07:23:30.872665 kernel: Segment Routing with IPv6 Oct 9 07:23:30.872674 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 07:23:30.872682 kernel: NET: Registered PF_PACKET protocol family Oct 9 07:23:30.872690 kernel: Key type dns_resolver registered Oct 9 07:23:30.872698 kernel: IPI shorthand broadcast: enabled Oct 9 07:23:30.872709 kernel: sched_clock: Marking stable (593002652, 113833520)->(721094720, -14258548) Oct 9 07:23:30.872717 kernel: registered taskstats version 1 Oct 9 07:23:30.872725 kernel: Loading compiled-in X.509 certificates Oct 9 07:23:30.872732 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 0b7ba59a46acf969bcd97270f441857501641c76' Oct 9 07:23:30.872740 kernel: Key type .fscrypt registered Oct 9 07:23:30.872750 kernel: Key type fscrypt-provisioning registered Oct 9 07:23:30.872758 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 9 07:23:30.872765 kernel: ima: Allocated hash algorithm: sha1 Oct 9 07:23:30.872773 kernel: ima: No architecture policies found Oct 9 07:23:30.872781 kernel: clk: Disabling unused clocks Oct 9 07:23:30.872788 kernel: Freeing unused kernel image (initmem) memory: 49452K Oct 9 07:23:30.872796 kernel: Write protecting the kernel read-only data: 36864k Oct 9 07:23:30.872804 kernel: Freeing unused kernel image (rodata/data gap) memory: 1928K Oct 9 07:23:30.872814 kernel: Run /init as init process Oct 9 07:23:30.872822 kernel: with arguments: Oct 9 07:23:30.872829 kernel: /init Oct 9 07:23:30.872837 kernel: with environment: Oct 9 07:23:30.872845 kernel: HOME=/ Oct 9 07:23:30.872852 kernel: TERM=linux Oct 9 07:23:30.872860 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 07:23:30.872870 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 07:23:30.872882 systemd[1]: Detected virtualization kvm. Oct 9 07:23:30.872890 systemd[1]: Detected architecture x86-64. Oct 9 07:23:30.872898 systemd[1]: Running in initrd. Oct 9 07:23:30.872906 systemd[1]: No hostname configured, using default hostname. Oct 9 07:23:30.872916 systemd[1]: Hostname set to . Oct 9 07:23:30.872927 systemd[1]: Initializing machine ID from VM UUID. Oct 9 07:23:30.872935 systemd[1]: Queued start job for default target initrd.target. Oct 9 07:23:30.872943 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 07:23:30.872952 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 07:23:30.872961 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 9 07:23:30.872969 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:23:30.872977 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 07:23:30.872986 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 07:23:30.873004 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 07:23:30.873016 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 07:23:30.873027 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:23:30.873039 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:23:30.873049 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:23:30.873057 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:23:30.873065 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:23:30.873076 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:23:30.873084 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:23:30.873093 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:23:30.873101 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 07:23:30.873109 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 07:23:30.873118 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:23:30.873126 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:23:30.873134 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:23:30.873143 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:23:30.873153 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 07:23:30.873161 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:23:30.873170 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 07:23:30.873178 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 07:23:30.873186 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:23:30.873194 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:23:30.873203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:23:30.873211 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 07:23:30.873222 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:23:30.873230 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 07:23:30.873257 systemd-journald[191]: Collecting audit messages is disabled.
Oct 9 07:23:30.873287 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 07:23:30.873297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:23:30.873305 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:23:30.873314 systemd-journald[191]: Journal started
Oct 9 07:23:30.873334 systemd-journald[191]: Runtime Journal (/run/log/journal/18b38f7649ed483d9a93a4025b48fb5f) is 6.0M, max 48.3M, 42.3M free.
Oct 9 07:23:30.871124 systemd-modules-load[194]: Inserted module 'overlay'
Oct 9 07:23:30.876373 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:23:30.878124 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:23:30.885508 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:23:30.889504 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:23:30.893770 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:23:30.905359 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 07:23:30.905674 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:23:30.905928 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:23:30.910997 kernel: Bridge firewalling registered
Oct 9 07:23:30.911002 systemd-modules-load[194]: Inserted module 'br_netfilter'
Oct 9 07:23:30.912003 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 07:23:30.914077 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:23:30.915327 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:23:30.927700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:23:30.931236 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:23:30.933370 dracut-cmdline[222]: dracut-dracut-053
Oct 9 07:23:30.934981 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:23:30.966889 systemd-resolved[235]: Positive Trust Anchors:
Oct 9 07:23:30.966903 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:23:30.966934 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 9 07:23:30.969476 systemd-resolved[235]: Defaulting to hostname 'linux'.
Oct 9 07:23:30.970503 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:23:30.976430 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:23:31.012367 kernel: SCSI subsystem initialized
Oct 9 07:23:31.024359 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 07:23:31.036365 kernel: iscsi: registered transport (tcp)
Oct 9 07:23:31.060705 kernel: iscsi: registered transport (qla4xxx)
Oct 9 07:23:31.060730 kernel: QLogic iSCSI HBA Driver
Oct 9 07:23:31.106675 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:23:31.114554 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 07:23:31.140934 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 07:23:31.140971 kernel: device-mapper: uevent: version 1.0.3
Oct 9 07:23:31.140990 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 07:23:31.184369 kernel: raid6: avx2x4 gen() 30739 MB/s
Oct 9 07:23:31.201371 kernel: raid6: avx2x2 gen() 31542 MB/s
Oct 9 07:23:31.218437 kernel: raid6: avx2x1 gen() 26094 MB/s
Oct 9 07:23:31.218457 kernel: raid6: using algorithm avx2x2 gen() 31542 MB/s
Oct 9 07:23:31.236458 kernel: raid6: .... xor() 19853 MB/s, rmw enabled
Oct 9 07:23:31.236482 kernel: raid6: using avx2x2 recovery algorithm
Oct 9 07:23:31.261363 kernel: xor: automatically using best checksumming function avx
Oct 9 07:23:31.433377 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 07:23:31.446084 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:23:31.454450 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:23:31.465887 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Oct 9 07:23:31.470291 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:23:31.480485 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 07:23:31.494846 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Oct 9 07:23:31.524544 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:23:31.538451 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:23:31.600993 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:23:31.610486 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 07:23:31.622658 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:23:31.624297 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:23:31.627318 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:23:31.629731 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:23:31.642633 kernel: cryptd: max_cpu_qlen set to 1000
Oct 9 07:23:31.641491 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 07:23:31.649525 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 9 07:23:31.657691 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 9 07:23:31.656461 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:23:31.661077 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:23:31.661231 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:23:31.673903 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 9 07:23:31.673951 kernel: AES CTR mode by8 optimization enabled
Oct 9 07:23:31.673969 kernel: libata version 3.00 loaded.
Oct 9 07:23:31.673989 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 07:23:31.674007 kernel: GPT:9289727 != 19775487
Oct 9 07:23:31.674024 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 07:23:31.674043 kernel: GPT:9289727 != 19775487
Oct 9 07:23:31.674061 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 07:23:31.674078 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:23:31.663097 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:23:31.666477 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:23:31.666643 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:23:31.667851 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:23:31.681627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:23:31.685781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:23:31.688379 kernel: ahci 0000:00:1f.2: version 3.0
Oct 9 07:23:31.688872 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 9 07:23:31.688888 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 9 07:23:31.689037 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 9 07:23:31.685904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:23:31.695369 kernel: scsi host0: ahci
Oct 9 07:23:31.700356 kernel: BTRFS: device fsid a442e753-4749-4732-ba27-ea845965fe4a devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (469)
Oct 9 07:23:31.704031 kernel: scsi host1: ahci
Oct 9 07:23:31.704212 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462)
Oct 9 07:23:31.704413 kernel: scsi host2: ahci
Oct 9 07:23:31.706353 kernel: scsi host3: ahci
Oct 9 07:23:31.706517 kernel: scsi host4: ahci
Oct 9 07:23:31.707736 kernel: scsi host5: ahci
Oct 9 07:23:31.707987 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Oct 9 07:23:31.710125 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Oct 9 07:23:31.710156 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Oct 9 07:23:31.710171 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Oct 9 07:23:31.711551 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Oct 9 07:23:31.711577 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Oct 9 07:23:31.726778 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 07:23:31.731668 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 07:23:31.740765 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:23:31.746360 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 07:23:31.750131 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 07:23:31.762490 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 07:23:31.764887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:23:31.770191 disk-uuid[557]: Primary Header is updated.
Oct 9 07:23:31.770191 disk-uuid[557]: Secondary Entries is updated.
Oct 9 07:23:31.770191 disk-uuid[557]: Secondary Header is updated.
Oct 9 07:23:31.775398 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:23:31.779358 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:23:31.779656 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:23:31.786782 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:23:31.800765 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:23:32.025222 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 9 07:23:32.025295 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 9 07:23:32.025306 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 9 07:23:32.025994 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 9 07:23:32.026056 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 9 07:23:32.027362 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 9 07:23:32.028366 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 9 07:23:32.028390 kernel: ata3.00: applying bridge limits
Oct 9 07:23:32.029367 kernel: ata3.00: configured for UDMA/100
Oct 9 07:23:32.029389 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 9 07:23:32.073890 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 9 07:23:32.074148 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 9 07:23:32.086363 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 9 07:23:32.780364 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:23:32.780897 disk-uuid[559]: The operation has completed successfully.
Oct 9 07:23:32.801426 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 07:23:32.801547 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 07:23:32.831491 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 07:23:32.834260 sh[595]: Success
Oct 9 07:23:32.847391 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 9 07:23:32.877079 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 07:23:32.888656 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 07:23:32.893360 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 07:23:32.901897 kernel: BTRFS info (device dm-0): first mount of filesystem a442e753-4749-4732-ba27-ea845965fe4a
Oct 9 07:23:32.901928 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:23:32.901939 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 07:23:32.902916 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 07:23:32.903655 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 07:23:32.907706 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 07:23:32.910024 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 07:23:32.921455 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 07:23:32.923944 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 07:23:32.932905 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:23:32.932927 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:23:32.932939 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:23:32.935431 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:23:32.943589 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 07:23:32.945304 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:23:32.955158 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 07:23:32.963484 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 07:23:33.012449 ignition[694]: Ignition 2.18.0
Oct 9 07:23:33.012462 ignition[694]: Stage: fetch-offline
Oct 9 07:23:33.012497 ignition[694]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:23:33.012507 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:23:33.012701 ignition[694]: parsed url from cmdline: ""
Oct 9 07:23:33.012705 ignition[694]: no config URL provided
Oct 9 07:23:33.012711 ignition[694]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 07:23:33.012723 ignition[694]: no config at "/usr/lib/ignition/user.ign"
Oct 9 07:23:33.012747 ignition[694]: op(1): [started] loading QEMU firmware config module
Oct 9 07:23:33.012753 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 9 07:23:33.020958 ignition[694]: op(1): [finished] loading QEMU firmware config module
Oct 9 07:23:33.035586 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:23:33.049471 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:23:33.063933 ignition[694]: parsing config with SHA512: 909cfe87fc7497fd70753e62b916e7ae77ffcf1169d6fdd2d293c1bea5236a533a28e2a4ae7291eb5e5495e687246dc420e7ef2dd386a0ebd4094e946611f2ce
Oct 9 07:23:33.068721 unknown[694]: fetched base config from "system"
Oct 9 07:23:33.068731 unknown[694]: fetched user config from "qemu"
Oct 9 07:23:33.069093 ignition[694]: fetch-offline: fetch-offline passed
Oct 9 07:23:33.069153 ignition[694]: Ignition finished successfully
Oct 9 07:23:33.071563 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:23:33.072229 systemd-networkd[783]: lo: Link UP
Oct 9 07:23:33.072235 systemd-networkd[783]: lo: Gained carrier
Oct 9 07:23:33.073786 systemd-networkd[783]: Enumeration completed
Oct 9 07:23:33.073876 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:23:33.074173 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:23:33.074177 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 07:23:33.075977 systemd[1]: Reached target network.target - Network.
Oct 9 07:23:33.076122 systemd-networkd[783]: eth0: Link UP
Oct 9 07:23:33.076127 systemd-networkd[783]: eth0: Gained carrier
Oct 9 07:23:33.076135 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:23:33.077524 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 9 07:23:33.082625 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 07:23:33.094553 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 07:23:33.097946 ignition[786]: Ignition 2.18.0
Oct 9 07:23:33.098593 ignition[786]: Stage: kargs
Oct 9 07:23:33.098825 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:23:33.098837 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:23:33.099673 ignition[786]: kargs: kargs passed
Oct 9 07:23:33.099723 ignition[786]: Ignition finished successfully
Oct 9 07:23:33.102913 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 07:23:33.111486 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 07:23:33.123837 ignition[795]: Ignition 2.18.0
Oct 9 07:23:33.123847 ignition[795]: Stage: disks
Oct 9 07:23:33.124009 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:23:33.124020 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:23:33.124847 ignition[795]: disks: disks passed
Oct 9 07:23:33.124889 ignition[795]: Ignition finished successfully
Oct 9 07:23:33.130661 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 07:23:33.132868 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 07:23:33.132946 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 07:23:33.136304 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:23:33.138395 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:23:33.140282 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:23:33.152481 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 07:23:33.162852 systemd-resolved[235]: Detected conflict on linux IN A 10.0.0.107
Oct 9 07:23:33.162872 systemd-resolved[235]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Oct 9 07:23:33.164387 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 07:23:33.170569 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 07:23:33.180444 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 07:23:33.280360 kernel: EXT4-fs (vda9): mounted filesystem ef891253-2811-499a-a9aa-02f0764c1b95 r/w with ordered data mode. Quota mode: none.
Oct 9 07:23:33.281328 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 07:23:33.283518 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:23:33.293436 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:23:33.295075 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 07:23:33.296293 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 9 07:23:33.301496 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814)
Oct 9 07:23:33.301514 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:23:33.296330 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 07:23:33.307967 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:23:33.307984 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:23:33.307995 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:23:33.296361 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:23:33.304781 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 07:23:33.309134 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:23:33.312032 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 07:23:33.347236 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 07:23:33.351830 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Oct 9 07:23:33.356434 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 07:23:33.359969 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 07:23:33.441168 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 07:23:33.459418 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 07:23:33.461164 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 07:23:33.467363 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:23:33.488749 ignition[928]: INFO : Ignition 2.18.0
Oct 9 07:23:33.488749 ignition[928]: INFO : Stage: mount
Oct 9 07:23:33.490663 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:23:33.490663 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:23:33.490663 ignition[928]: INFO : mount: mount passed
Oct 9 07:23:33.490663 ignition[928]: INFO : Ignition finished successfully
Oct 9 07:23:33.490649 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 07:23:33.496992 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 07:23:33.508460 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 07:23:33.901405 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 07:23:33.912522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:23:33.919990 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942)
Oct 9 07:23:33.920018 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:23:33.920029 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:23:33.921497 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:23:33.924357 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:23:33.925313 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:23:33.949336 ignition[959]: INFO : Ignition 2.18.0 Oct 9 07:23:33.949336 ignition[959]: INFO : Stage: files Oct 9 07:23:33.951273 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:23:33.951273 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 07:23:33.951273 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Oct 9 07:23:33.954845 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 07:23:33.954845 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 07:23:33.954845 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 07:23:33.958999 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 07:23:33.958999 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 07:23:33.957921 unknown[959]: wrote ssh authorized keys file for user: core Oct 9 07:23:33.963013 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:23:33.963013 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 9 07:23:33.999156 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 9 07:23:34.087704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:23:34.090049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 9 07:23:34.090049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 07:23:34.090049 
ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:23:34.090049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:23:34.090049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:23:34.090049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:23:34.090049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:23:34.090049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:23:34.090049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:23:34.090049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:23:34.090049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:23:34.090049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:23:34.090049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:23:34.090049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Oct 9 07:23:34.248012 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 9 07:23:34.576358 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:23:34.576358 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 9 07:23:34.580355 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:23:34.580355 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:23:34.580355 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 9 07:23:34.580355 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 9 07:23:34.580355 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 9 07:23:34.580355 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 9 07:23:34.580355 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 9 07:23:34.580355 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 9 07:23:34.600029 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 9 07:23:34.604323 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 9 07:23:34.606026 ignition[959]: INFO : files: op(f): [finished] setting preset 
to disabled for "coreos-metadata.service"
Oct 9 07:23:34.607455 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 07:23:34.608895 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 07:23:34.610418 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:23:34.612221 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:23:34.614016 ignition[959]: INFO : files: files passed
Oct 9 07:23:34.614864 ignition[959]: INFO : Ignition finished successfully
Oct 9 07:23:34.618616 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 07:23:34.630568 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 07:23:34.632447 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 07:23:34.634292 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 07:23:34.634416 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 07:23:34.642239 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 9 07:23:34.644761 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:23:34.646437 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:23:34.649477 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:23:34.647119 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:23:34.650062 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 07:23:34.660491 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 07:23:34.686967 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 07:23:34.687098 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 07:23:34.688278 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 07:23:34.690428 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 07:23:34.692371 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 07:23:34.693164 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 07:23:34.710732 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:23:34.719533 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 07:23:34.730779 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:23:34.732200 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:23:34.734515 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 07:23:34.736480 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 07:23:34.736597 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:23:34.739463 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 07:23:34.740625 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 07:23:34.740957 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 07:23:34.741294 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:23:34.741787 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 07:23:34.742130 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 07:23:34.742633 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:23:34.742978 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 07:23:34.743312 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 07:23:34.743801 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 07:23:34.744117 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 07:23:34.744230 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:23:34.760978 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:23:34.762006 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:23:34.762320 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 07:23:34.762425 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:23:34.762821 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 07:23:34.762919 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:23:34.763638 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 07:23:34.763739 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:23:34.772238 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 07:23:34.774095 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 07:23:34.778400 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:23:34.778559 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 07:23:34.781965 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 07:23:34.783795 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 07:23:34.783880 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:23:34.785659 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 07:23:34.785737 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:23:34.786529 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 07:23:34.786632 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:23:34.790265 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 07:23:34.790381 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 07:23:34.808530 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 07:23:34.809472 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 07:23:34.809587 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:23:34.812507 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 07:23:34.813519 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 07:23:34.813648 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:23:34.816896 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 07:23:34.819926 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:23:34.822372 ignition[1014]: INFO : Ignition 2.18.0
Oct 9 07:23:34.822372 ignition[1014]: INFO : Stage: umount
Oct 9 07:23:34.822372 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:23:34.822372 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:23:34.822372 ignition[1014]: INFO : umount: umount passed
Oct 9 07:23:34.822372 ignition[1014]: INFO : Ignition finished successfully
Oct 9 07:23:34.829266 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 07:23:34.830327 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 07:23:34.833978 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 07:23:34.834964 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 07:23:34.837983 systemd[1]: Stopped target network.target - Network.
Oct 9 07:23:34.839698 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 07:23:34.840885 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 07:23:34.843207 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 07:23:34.844160 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 07:23:34.846213 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 07:23:34.846269 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 07:23:34.849434 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 07:23:34.849491 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 07:23:34.852647 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 07:23:34.854989 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 07:23:34.858467 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 07:23:34.865966 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 07:23:34.866135 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 07:23:34.868194 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 07:23:34.868271 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:23:34.871806 systemd-networkd[783]: eth0: DHCPv6 lease lost
Oct 9 07:23:34.874520 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 07:23:34.874678 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 07:23:34.878135 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 07:23:34.878202 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:23:34.891518 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 07:23:34.893442 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 07:23:34.893527 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:23:34.895790 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 07:23:34.895845 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:23:34.898113 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 07:23:34.898163 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:23:34.899163 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:23:34.912232 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 07:23:34.912433 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:23:34.913566 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 07:23:34.913671 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 07:23:34.916140 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 07:23:34.916221 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:23:34.917489 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 07:23:34.917534 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:23:34.919461 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 07:23:34.919511 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:23:34.923983 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 07:23:34.924030 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:23:34.927743 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:23:34.927796 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:23:34.937546 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 07:23:34.937628 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 07:23:34.937693 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:23:34.940985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:23:34.941041 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:23:34.947312 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 07:23:34.947473 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 07:23:35.030971 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 07:23:35.031120 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 07:23:35.033142 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 07:23:35.034897 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 07:23:35.034960 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 07:23:35.056627 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 07:23:35.065398 systemd[1]: Switching root.
Oct 9 07:23:35.097984 systemd-journald[191]: Journal stopped
Oct 9 07:23:36.264824 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Oct 9 07:23:36.264878 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 07:23:36.264905 kernel: SELinux: policy capability open_perms=1
Oct 9 07:23:36.264916 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 07:23:36.264928 kernel: SELinux: policy capability always_check_network=0
Oct 9 07:23:36.264943 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 07:23:36.264954 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 07:23:36.264965 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 07:23:36.264982 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 07:23:36.264994 kernel: audit: type=1403 audit(1728458615.521:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 07:23:36.265012 systemd[1]: Successfully loaded SELinux policy in 38.181ms.
Oct 9 07:23:36.265028 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.089ms.
Oct 9 07:23:36.265042 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:23:36.265054 systemd[1]: Detected virtualization kvm.
Oct 9 07:23:36.265066 systemd[1]: Detected architecture x86-64.
Oct 9 07:23:36.265078 systemd[1]: Detected first boot.
Oct 9 07:23:36.265090 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:23:36.265101 zram_generator::config[1057]: No configuration found.
Oct 9 07:23:36.265115 systemd[1]: Populated /etc with preset unit settings.
Oct 9 07:23:36.265139 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 07:23:36.265151 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 07:23:36.265163 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 07:23:36.265175 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 07:23:36.265188 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 07:23:36.265200 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 07:23:36.265212 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 07:23:36.265224 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 07:23:36.265235 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 07:23:36.265250 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 07:23:36.265262 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 07:23:36.265274 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:23:36.265287 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:23:36.265299 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 07:23:36.265310 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 07:23:36.265322 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 07:23:36.265334 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:23:36.265360 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 07:23:36.265372 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:23:36.265384 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 07:23:36.265401 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 07:23:36.265415 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:23:36.265427 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 07:23:36.265439 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:23:36.265451 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:23:36.265465 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:23:36.265477 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:23:36.265489 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 07:23:36.265501 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 07:23:36.265513 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:23:36.265525 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:23:36.265537 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:23:36.265549 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 07:23:36.265560 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 07:23:36.265575 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 07:23:36.265586 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 07:23:36.265599 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:23:36.265610 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 07:23:36.265622 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 07:23:36.265634 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 07:23:36.265646 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 07:23:36.265658 systemd[1]: Reached target machines.target - Containers.
Oct 9 07:23:36.265671 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 07:23:36.265685 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:23:36.265697 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:23:36.265709 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 07:23:36.265721 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:23:36.265733 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:23:36.265745 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:23:36.265757 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 07:23:36.265773 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:23:36.265787 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 07:23:36.265799 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 07:23:36.265811 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 07:23:36.265823 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 07:23:36.265835 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 07:23:36.265851 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:23:36.265864 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:23:36.265876 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 07:23:36.265905 systemd-journald[1120]: Collecting audit messages is disabled.
Oct 9 07:23:36.265929 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 07:23:36.265941 kernel: loop: module loaded
Oct 9 07:23:36.265953 systemd-journald[1120]: Journal started
Oct 9 07:23:36.265986 systemd-journald[1120]: Runtime Journal (/run/log/journal/18b38f7649ed483d9a93a4025b48fb5f) is 6.0M, max 48.3M, 42.3M free.
Oct 9 07:23:36.012190 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 07:23:36.268660 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:23:36.268694 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 07:23:36.031857 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 07:23:36.032314 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 07:23:36.269831 systemd[1]: Stopped verity-setup.service.
Oct 9 07:23:36.274855 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:23:36.274902 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:23:36.278362 kernel: fuse: init (API version 7.39)
Oct 9 07:23:36.291880 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 07:23:36.293103 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 07:23:36.294431 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 07:23:36.295552 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 07:23:36.296793 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 07:23:36.298041 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 07:23:36.299298 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:23:36.300899 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 07:23:36.301071 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 07:23:36.302588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:23:36.302756 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:23:36.304270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:23:36.304455 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:23:36.305981 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 07:23:36.306160 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 07:23:36.307788 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:23:36.307958 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:23:36.309387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:23:36.310792 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 07:23:36.312358 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 07:23:36.324091 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 07:23:36.324483 kernel: ACPI: bus type drm_connector registered
Oct 9 07:23:36.329826 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 07:23:36.334755 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 07:23:36.335973 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 07:23:36.336010 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:23:36.338172 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 07:23:36.347577 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 07:23:36.352794 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 07:23:36.354358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:23:36.356974 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 07:23:36.362644 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 07:23:36.364209 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:23:36.369578 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 07:23:36.371427 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:23:36.378814 systemd-journald[1120]: Time spent on flushing to /var/log/journal/18b38f7649ed483d9a93a4025b48fb5f is 20.781ms for 987 entries.
Oct 9 07:23:36.378814 systemd-journald[1120]: System Journal (/var/log/journal/18b38f7649ed483d9a93a4025b48fb5f) is 8.0M, max 195.6M, 187.6M free.
Oct 9 07:23:36.415938 systemd-journald[1120]: Received client request to flush runtime journal.
Oct 9 07:23:36.374542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:23:36.380555 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 07:23:36.386158 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:23:36.386389 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:23:36.388239 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 07:23:36.389937 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 07:23:36.391704 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 07:23:36.404412 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 07:23:36.406544 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:23:36.409937 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 07:23:36.416179 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 07:23:36.420243 kernel: loop0: detected capacity change from 0 to 80568
Oct 9 07:23:36.420323 kernel: block loop0: the capability attribute has been deprecated.
Oct 9 07:23:36.431323 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 07:23:36.435989 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 07:23:36.442362 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 07:23:36.442731 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 07:23:36.445123 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 07:23:36.449409 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:23:36.459601 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 07:23:36.460473 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 07:23:36.465970 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 9 07:23:36.477379 kernel: loop1: detected capacity change from 0 to 139904
Oct 9 07:23:36.488909 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 07:23:36.498510 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:23:36.516372 kernel: loop2: detected capacity change from 0 to 211296
Oct 9 07:23:36.526656 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Oct 9 07:23:36.526682 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Oct 9 07:23:36.533389 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:23:36.556384 kernel: loop3: detected capacity change from 0 to 80568
Oct 9 07:23:36.566362 kernel: loop4: detected capacity change from 0 to 139904
Oct 9 07:23:36.576376 kernel: loop5: detected capacity change from 0 to 211296
Oct 9 07:23:36.581775 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 9 07:23:36.582326 (sd-merge)[1199]: Merged extensions into '/usr'.
Oct 9 07:23:36.586044 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 07:23:36.586059 systemd[1]: Reloading...
Oct 9 07:23:36.638385 zram_generator::config[1223]: No configuration found.
Oct 9 07:23:36.702327 ldconfig[1157]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 07:23:36.756355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:23:36.804796 systemd[1]: Reloading finished in 218 ms.
Oct 9 07:23:36.850084 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 07:23:36.851693 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 07:23:36.863480 systemd[1]: Starting ensure-sysext.service...
Oct 9 07:23:36.865800 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:23:36.871834 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Oct 9 07:23:36.871850 systemd[1]: Reloading...
Oct 9 07:23:36.919387 zram_generator::config[1286]: No configuration found.
Oct 9 07:23:36.919596 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 07:23:36.919970 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 07:23:36.920986 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 07:23:36.921315 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Oct 9 07:23:36.921426 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Oct 9 07:23:36.924573 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:23:36.924586 systemd-tmpfiles[1261]: Skipping /boot
Oct 9 07:23:36.934543 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:23:36.934556 systemd-tmpfiles[1261]: Skipping /boot
Oct 9 07:23:37.021908 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:23:37.071719 systemd[1]: Reloading finished in 199 ms.
Oct 9 07:23:37.092456 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 07:23:37.108818 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:23:37.118400 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 9 07:23:37.121023 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 07:23:37.123592 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 07:23:37.128892 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 07:23:37.134608 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:23:37.139567 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 07:23:37.144694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:23:37.144869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:23:37.146059 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:23:37.148810 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:23:37.152550 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:23:37.153724 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:23:37.161565 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 07:23:37.162656 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:23:37.163729 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 07:23:37.167710 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:23:37.167888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:23:37.170957 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:23:37.171681 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:23:37.172892 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Oct 9 07:23:37.173700 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Oct 9 07:23:37.173911 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:23:37.185231 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 07:23:37.189890 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:23:37.190100 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:23:37.195321 augenrules[1355]: No rules Oct 9 07:23:37.195619 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:23:37.198661 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 07:23:37.202154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:23:37.205512 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:23:37.206708 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:23:37.208632 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 07:23:37.210008 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:23:37.210956 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:23:37.213032 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:23:37.214867 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 07:23:37.216862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:23:37.223381 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:23:37.224856 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Oct 9 07:23:37.226977 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 07:23:37.227165 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 07:23:37.228865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:23:37.229034 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:23:37.230904 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:23:37.231079 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:23:37.240579 systemd[1]: Finished ensure-sysext.service. Oct 9 07:23:37.241968 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 07:23:37.264521 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1373) Oct 9 07:23:37.264529 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 07:23:37.265939 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:23:37.266011 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 07:23:37.269503 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 07:23:37.270919 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 07:23:37.271918 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 9 07:23:37.276356 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1374) Oct 9 07:23:37.311079 systemd-resolved[1329]: Positive Trust Anchors: Oct 9 07:23:37.314373 systemd-resolved[1329]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:23:37.314406 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 9 07:23:37.319631 systemd-resolved[1329]: Defaulting to hostname 'linux'. Oct 9 07:23:37.322761 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:23:37.327128 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 07:23:37.328435 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:23:37.336599 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 9 07:23:37.341363 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 9 07:23:37.349751 systemd-networkd[1397]: lo: Link UP Oct 9 07:23:37.349763 systemd-networkd[1397]: lo: Gained carrier Oct 9 07:23:37.350507 kernel: ACPI: button: Power Button [PWRF] Oct 9 07:23:37.352509 systemd-networkd[1397]: Enumeration completed Oct 9 07:23:37.352947 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:23:37.352959 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 07:23:37.353265 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Oct 9 07:23:37.353760 systemd-networkd[1397]: eth0: Link UP Oct 9 07:23:37.353772 systemd-networkd[1397]: eth0: Gained carrier Oct 9 07:23:37.353784 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:23:37.355046 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:23:37.356431 systemd[1]: Reached target network.target - Network. Oct 9 07:23:37.362721 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 07:23:37.365917 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 07:23:37.367299 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 07:23:37.370502 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 07:23:37.371161 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection. Oct 9 07:23:38.331329 systemd-timesyncd[1398]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 9 07:23:38.331394 systemd-timesyncd[1398]: Initial clock synchronization to Wed 2024-10-09 07:23:38.331234 UTC. Oct 9 07:23:38.331507 systemd-resolved[1329]: Clock change detected. Flushing caches. Oct 9 07:23:38.339403 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 9 07:23:38.342720 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 9 07:23:38.365845 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 9 07:23:38.366089 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 9 07:23:38.366246 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 9 07:23:38.366421 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 9 07:23:38.382814 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:23:38.391881 kernel: mousedev: PS/2 mouse device common for all mice Oct 9 07:23:38.391412 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:23:38.391639 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:23:38.400033 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:23:38.445172 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:23:38.480896 kernel: kvm_amd: TSC scaling supported Oct 9 07:23:38.480945 kernel: kvm_amd: Nested Virtualization enabled Oct 9 07:23:38.480959 kernel: kvm_amd: Nested Paging enabled Oct 9 07:23:38.481871 kernel: kvm_amd: LBR virtualization supported Oct 9 07:23:38.481888 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 9 07:23:38.482873 kernel: kvm_amd: Virtual GIF supported Oct 9 07:23:38.502714 kernel: EDAC MC: Ver: 3.0.0 Oct 9 07:23:38.532020 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 07:23:38.543026 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 07:23:38.550114 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 07:23:38.581968 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Oct 9 07:23:38.583565 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:23:38.584743 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:23:38.585982 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 07:23:38.587293 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 07:23:38.588831 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 07:23:38.590074 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 07:23:38.591354 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 07:23:38.592616 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 07:23:38.592646 systemd[1]: Reached target paths.target - Path Units. Oct 9 07:23:38.593571 systemd[1]: Reached target timers.target - Timer Units. Oct 9 07:23:38.595280 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 07:23:38.598015 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 07:23:38.608207 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 07:23:38.610613 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 07:23:38.612203 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 07:23:38.613390 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 07:23:38.614382 systemd[1]: Reached target basic.target - Basic System. Oct 9 07:23:38.615393 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 07:23:38.615419 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Oct 9 07:23:38.616335 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 07:23:38.618387 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 07:23:38.620809 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 07:23:38.623400 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 07:23:38.626021 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 07:23:38.627105 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 07:23:38.628331 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 07:23:38.630791 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 07:23:38.632059 jq[1433]: false Oct 9 07:23:38.634743 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 07:23:38.637039 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 9 07:23:38.640924 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 07:23:38.642452 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 07:23:38.642891 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 9 07:23:38.643845 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 07:23:38.647800 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 07:23:38.650413 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 07:23:38.650635 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Oct 9 07:23:38.651417 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 07:23:38.651655 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 07:23:38.661398 extend-filesystems[1434]: Found loop3 Oct 9 07:23:38.665806 extend-filesystems[1434]: Found loop4 Oct 9 07:23:38.665806 extend-filesystems[1434]: Found loop5 Oct 9 07:23:38.665806 extend-filesystems[1434]: Found sr0 Oct 9 07:23:38.665806 extend-filesystems[1434]: Found vda Oct 9 07:23:38.665806 extend-filesystems[1434]: Found vda1 Oct 9 07:23:38.665806 extend-filesystems[1434]: Found vda2 Oct 9 07:23:38.665806 extend-filesystems[1434]: Found vda3 Oct 9 07:23:38.665806 extend-filesystems[1434]: Found usr Oct 9 07:23:38.665806 extend-filesystems[1434]: Found vda4 Oct 9 07:23:38.665806 extend-filesystems[1434]: Found vda6 Oct 9 07:23:38.665806 extend-filesystems[1434]: Found vda7 Oct 9 07:23:38.665806 extend-filesystems[1434]: Found vda9 Oct 9 07:23:38.665806 extend-filesystems[1434]: Checking size of /dev/vda9 Oct 9 07:23:38.711754 jq[1443]: true Oct 9 07:23:38.700372 dbus-daemon[1432]: [system] SELinux support is enabled Oct 9 07:23:38.697107 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 07:23:38.712205 update_engine[1442]: I1009 07:23:38.677260 1442 main.cc:92] Flatcar Update Engine starting Oct 9 07:23:38.712205 update_engine[1442]: I1009 07:23:38.705515 1442 update_check_scheduler.cc:74] Next update check in 7m4s Oct 9 07:23:38.697467 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 07:23:38.712506 jq[1460]: true Oct 9 07:23:38.701022 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 9 07:23:38.711582 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Oct 9 07:23:38.711621 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 07:23:38.713822 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 07:23:38.713844 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 07:23:38.715333 tar[1445]: linux-amd64/helm Oct 9 07:23:38.715411 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Oct 9 07:23:38.715443 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 9 07:23:38.716422 systemd-logind[1441]: New seat seat0. Oct 9 07:23:38.717392 systemd[1]: Started update-engine.service - Update Engine. Oct 9 07:23:38.726176 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 07:23:38.726475 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 07:23:38.727023 extend-filesystems[1434]: Resized partition /dev/vda9 Oct 9 07:23:38.731823 extend-filesystems[1469]: resize2fs 1.47.0 (5-Feb-2023) Oct 9 07:23:38.738289 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 9 07:23:38.740841 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 07:23:38.744710 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1369) Oct 9 07:23:38.762763 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 9 07:23:38.843159 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 07:23:38.849577 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Oct 9 07:23:38.854695 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 9 07:23:38.876717 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 9 07:23:38.876717 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 9 07:23:38.876717 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 9 07:23:38.882271 extend-filesystems[1434]: Resized filesystem in /dev/vda9 Oct 9 07:23:38.878447 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 07:23:38.878661 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 07:23:38.885995 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Oct 9 07:23:38.887182 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 07:23:38.890489 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 9 07:23:38.919642 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 07:23:38.945187 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 07:23:38.960087 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 07:23:38.963917 systemd[1]: Started sshd@0-10.0.0.107:22-10.0.0.1:34610.service - OpenSSH per-connection server daemon (10.0.0.1:34610). Oct 9 07:23:38.969892 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 07:23:38.970126 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 07:23:38.973649 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 07:23:38.989325 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 07:23:38.996606 containerd[1462]: time="2024-10-09T07:23:38.996523909Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Oct 9 07:23:38.998028 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Oct 9 07:23:39.000595 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 07:23:39.002070 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 07:23:39.025943 containerd[1462]: time="2024-10-09T07:23:39.025892987Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 07:23:39.025943 containerd[1462]: time="2024-10-09T07:23:39.025939123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:23:39.027965 containerd[1462]: time="2024-10-09T07:23:39.027776880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:23:39.027965 containerd[1462]: time="2024-10-09T07:23:39.027813368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:23:39.028082 containerd[1462]: time="2024-10-09T07:23:39.028067535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:23:39.028103 containerd[1462]: time="2024-10-09T07:23:39.028084767Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 07:23:39.028200 containerd[1462]: time="2024-10-09T07:23:39.028179855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 07:23:39.028326 containerd[1462]: time="2024-10-09T07:23:39.028309498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:23:39.028376 containerd[1462]: time="2024-10-09T07:23:39.028362969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 07:23:39.028554 containerd[1462]: time="2024-10-09T07:23:39.028533078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:23:39.029329 containerd[1462]: time="2024-10-09T07:23:39.028932757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 9 07:23:39.029329 containerd[1462]: time="2024-10-09T07:23:39.028955951Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 9 07:23:39.029329 containerd[1462]: time="2024-10-09T07:23:39.028966000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:23:39.029329 containerd[1462]: time="2024-10-09T07:23:39.029086506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:23:39.029329 containerd[1462]: time="2024-10-09T07:23:39.029099851Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Oct 9 07:23:39.029329 containerd[1462]: time="2024-10-09T07:23:39.029158010Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 9 07:23:39.029329 containerd[1462]: time="2024-10-09T07:23:39.029170313Z" level=info msg="metadata content store policy set" policy=shared Oct 9 07:23:39.029542 sshd[1511]: Accepted publickey for core from 10.0.0.1 port 34610 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:23:39.032082 sshd[1511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:23:39.036340 containerd[1462]: time="2024-10-09T07:23:39.036312968Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 07:23:39.036387 containerd[1462]: time="2024-10-09T07:23:39.036345158Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 07:23:39.036387 containerd[1462]: time="2024-10-09T07:23:39.036363412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 07:23:39.036469 containerd[1462]: time="2024-10-09T07:23:39.036400692Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 07:23:39.036469 containerd[1462]: time="2024-10-09T07:23:39.036421932Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 07:23:39.036469 containerd[1462]: time="2024-10-09T07:23:39.036445707Z" level=info msg="NRI interface is disabled by configuration." Oct 9 07:23:39.036469 containerd[1462]: time="2024-10-09T07:23:39.036461737Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 9 07:23:39.036616 containerd[1462]: time="2024-10-09T07:23:39.036593804Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Oct 9 07:23:39.036667 containerd[1462]: time="2024-10-09T07:23:39.036625865Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 07:23:39.036667 containerd[1462]: time="2024-10-09T07:23:39.036645802Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 07:23:39.036811 containerd[1462]: time="2024-10-09T07:23:39.036664257Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 07:23:39.036811 containerd[1462]: time="2024-10-09T07:23:39.036699533Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 07:23:39.036811 containerd[1462]: time="2024-10-09T07:23:39.036722967Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 07:23:39.036811 containerd[1462]: time="2024-10-09T07:23:39.036741141Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 07:23:39.036811 containerd[1462]: time="2024-10-09T07:23:39.036758674Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 07:23:39.036811 containerd[1462]: time="2024-10-09T07:23:39.036776287Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 07:23:39.036811 containerd[1462]: time="2024-10-09T07:23:39.036794060Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 07:23:39.036811 containerd[1462]: time="2024-10-09T07:23:39.036810501Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1
Oct 9 07:23:39.037016 containerd[1462]: time="2024-10-09T07:23:39.036824908Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 07:23:39.037016 containerd[1462]: time="2024-10-09T07:23:39.036967806Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 07:23:39.039332 containerd[1462]: time="2024-10-09T07:23:39.039278850Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 07:23:39.039332 containerd[1462]: time="2024-10-09T07:23:39.039326209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.039438 containerd[1462]: time="2024-10-09T07:23:39.039342189Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 07:23:39.039438 containerd[1462]: time="2024-10-09T07:23:39.039370001Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039454129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039508410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039528558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039545370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039562913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039582780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039598870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039614309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039633265Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039844501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039866833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039882021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039897500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039913630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.041806 containerd[1462]: time="2024-10-09T07:23:39.039940160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.042166 containerd[1462]: time="2024-10-09T07:23:39.039956390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.042166 containerd[1462]: time="2024-10-09T07:23:39.039971238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 9 07:23:39.042220 containerd[1462]: time="2024-10-09T07:23:39.040301197Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 9 07:23:39.042220 containerd[1462]: time="2024-10-09T07:23:39.040376007Z" level=info msg="Connect containerd service"
Oct 9 07:23:39.042220 containerd[1462]: time="2024-10-09T07:23:39.040415862Z" level=info msg="using legacy CRI server"
Oct 9 07:23:39.042220 containerd[1462]: time="2024-10-09T07:23:39.040435689Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 9 07:23:39.042220 containerd[1462]: time="2024-10-09T07:23:39.040526139Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 9 07:23:39.042220 containerd[1462]: time="2024-10-09T07:23:39.041227234Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 07:23:39.042220 containerd[1462]: time="2024-10-09T07:23:39.041264834Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 07:23:39.042220 containerd[1462]: time="2024-10-09T07:23:39.041283569Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 07:23:39.042220 containerd[1462]: time="2024-10-09T07:23:39.041296834Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 9 07:23:39.042220 containerd[1462]: time="2024-10-09T07:23:39.041311502Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 9 07:23:39.042220 containerd[1462]: time="2024-10-09T07:23:39.041567702Z" level=info msg="Start subscribing containerd event"
Oct 9 07:23:39.042220 containerd[1462]: time="2024-10-09T07:23:39.041613047Z" level=info msg="Start recovering state"
Oct 9 07:23:39.042220 containerd[1462]: time="2024-10-09T07:23:39.041622475Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 9 07:23:39.042532 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 9 07:23:39.044099 containerd[1462]: time="2024-10-09T07:23:39.044077128Z" level=info msg="Start event monitor"
Oct 9 07:23:39.044173 containerd[1462]: time="2024-10-09T07:23:39.044157409Z" level=info msg="Start snapshots syncer"
Oct 9 07:23:39.044234 containerd[1462]: time="2024-10-09T07:23:39.044220267Z" level=info msg="Start cni network conf syncer for default"
Oct 9 07:23:39.044291 containerd[1462]: time="2024-10-09T07:23:39.044276612Z" level=info msg="Start streaming server"
Oct 9 07:23:39.044565 containerd[1462]: time="2024-10-09T07:23:39.044528144Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 9 07:23:39.045175 containerd[1462]: time="2024-10-09T07:23:39.044614115Z" level=info msg="containerd successfully booted in 0.050086s"
Oct 9 07:23:39.050946 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 9 07:23:39.052403 systemd[1]: Started containerd.service - containerd container runtime.
Oct 9 07:23:39.055617 systemd-logind[1441]: New session 1 of user core.
Oct 9 07:23:39.062511 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 9 07:23:39.069933 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 9 07:23:39.076608 (systemd)[1527]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:23:39.168134 tar[1445]: linux-amd64/LICENSE
Oct 9 07:23:39.168134 tar[1445]: linux-amd64/README.md
Oct 9 07:23:39.181009 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 9 07:23:39.191831 systemd[1527]: Queued start job for default target default.target.
Oct 9 07:23:39.193159 systemd[1527]: Created slice app.slice - User Application Slice.
Oct 9 07:23:39.193183 systemd[1527]: Reached target paths.target - Paths.
Oct 9 07:23:39.193196 systemd[1527]: Reached target timers.target - Timers.
Oct 9 07:23:39.194748 systemd[1527]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 9 07:23:39.206358 systemd[1527]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 9 07:23:39.206503 systemd[1527]: Reached target sockets.target - Sockets.
Oct 9 07:23:39.206524 systemd[1527]: Reached target basic.target - Basic System.
Oct 9 07:23:39.206565 systemd[1527]: Reached target default.target - Main User Target.
Oct 9 07:23:39.206604 systemd[1527]: Startup finished in 123ms.
Oct 9 07:23:39.207099 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 9 07:23:39.209694 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 9 07:23:39.274076 systemd[1]: Started sshd@1-10.0.0.107:22-10.0.0.1:34624.service - OpenSSH per-connection server daemon (10.0.0.1:34624).
Oct 9 07:23:39.309670 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 34624 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8
Oct 9 07:23:39.311134 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:23:39.315252 systemd-logind[1441]: New session 2 of user core.
Oct 9 07:23:39.324813 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 9 07:23:39.381230 sshd[1541]: pam_unix(sshd:session): session closed for user core
Oct 9 07:23:39.397772 systemd[1]: sshd@1-10.0.0.107:22-10.0.0.1:34624.service: Deactivated successfully.
Oct 9 07:23:39.399410 systemd[1]: session-2.scope: Deactivated successfully.
Oct 9 07:23:39.400857 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit.
Oct 9 07:23:39.409963 systemd[1]: Started sshd@2-10.0.0.107:22-10.0.0.1:34632.service - OpenSSH per-connection server daemon (10.0.0.1:34632).
Oct 9 07:23:39.412117 systemd-logind[1441]: Removed session 2.
Oct 9 07:23:39.442073 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 34632 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8
Oct 9 07:23:39.443427 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:23:39.446930 systemd-logind[1441]: New session 3 of user core.
Oct 9 07:23:39.453796 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 9 07:23:39.465780 systemd-networkd[1397]: eth0: Gained IPv6LL
Oct 9 07:23:39.468744 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 07:23:39.470645 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 07:23:39.479915 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 9 07:23:39.482275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:23:39.484434 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 9 07:23:39.503606 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 9 07:23:39.503874 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 9 07:23:39.505502 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 07:23:39.506445 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 07:23:39.510047 sshd[1548]: pam_unix(sshd:session): session closed for user core
Oct 9 07:23:39.513731 systemd[1]: sshd@2-10.0.0.107:22-10.0.0.1:34632.service: Deactivated successfully.
Oct 9 07:23:39.515384 systemd[1]: session-3.scope: Deactivated successfully.
Oct 9 07:23:39.515926 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit.
Oct 9 07:23:39.516826 systemd-logind[1441]: Removed session 3.
Oct 9 07:23:40.114796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:23:40.116457 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 9 07:23:40.117778 systemd[1]: Startup finished in 722ms (kernel) + 4.823s (initrd) + 3.674s (userspace) = 9.220s.
Oct 9 07:23:40.120530 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 07:23:40.628191 kubelet[1576]: E1009 07:23:40.628100 1576 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 07:23:40.634247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 07:23:40.634525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 07:23:40.634864 systemd[1]: kubelet.service: Consumed 1.030s CPU time.
Oct 9 07:23:49.521415 systemd[1]: Started sshd@3-10.0.0.107:22-10.0.0.1:39430.service - OpenSSH per-connection server daemon (10.0.0.1:39430).
Oct 9 07:23:49.559448 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 39430 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8
Oct 9 07:23:49.561297 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:23:49.565204 systemd-logind[1441]: New session 4 of user core.
Oct 9 07:23:49.574815 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 9 07:23:49.629709 sshd[1590]: pam_unix(sshd:session): session closed for user core
Oct 9 07:23:49.644568 systemd[1]: sshd@3-10.0.0.107:22-10.0.0.1:39430.service: Deactivated successfully.
Oct 9 07:23:49.646186 systemd[1]: session-4.scope: Deactivated successfully.
Oct 9 07:23:49.647755 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit.
Oct 9 07:23:49.648903 systemd[1]: Started sshd@4-10.0.0.107:22-10.0.0.1:39438.service - OpenSSH per-connection server daemon (10.0.0.1:39438).
Oct 9 07:23:49.649650 systemd-logind[1441]: Removed session 4.
Oct 9 07:23:49.684119 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 39438 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8
Oct 9 07:23:49.685806 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:23:49.689754 systemd-logind[1441]: New session 5 of user core.
Oct 9 07:23:49.699782 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 9 07:23:49.749338 sshd[1597]: pam_unix(sshd:session): session closed for user core
Oct 9 07:23:49.758587 systemd[1]: sshd@4-10.0.0.107:22-10.0.0.1:39438.service: Deactivated successfully.
Oct 9 07:23:49.760471 systemd[1]: session-5.scope: Deactivated successfully.
Oct 9 07:23:49.762135 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit.
Oct 9 07:23:49.770894 systemd[1]: Started sshd@5-10.0.0.107:22-10.0.0.1:39446.service - OpenSSH per-connection server daemon (10.0.0.1:39446).
Oct 9 07:23:49.772384 systemd-logind[1441]: Removed session 5.
Oct 9 07:23:49.802575 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 39446 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8
Oct 9 07:23:49.804239 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:23:49.808408 systemd-logind[1441]: New session 6 of user core.
Oct 9 07:23:49.817821 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 9 07:23:49.874809 sshd[1604]: pam_unix(sshd:session): session closed for user core
Oct 9 07:23:49.885981 systemd[1]: sshd@5-10.0.0.107:22-10.0.0.1:39446.service: Deactivated successfully.
Oct 9 07:23:49.888085 systemd[1]: session-6.scope: Deactivated successfully.
Oct 9 07:23:49.890195 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit.
Oct 9 07:23:49.905940 systemd[1]: Started sshd@6-10.0.0.107:22-10.0.0.1:39454.service - OpenSSH per-connection server daemon (10.0.0.1:39454).
Oct 9 07:23:49.906950 systemd-logind[1441]: Removed session 6.
Oct 9 07:23:49.939915 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 39454 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8
Oct 9 07:23:49.941662 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:23:49.946229 systemd-logind[1441]: New session 7 of user core.
Oct 9 07:23:49.961816 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 9 07:23:50.019380 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 9 07:23:50.019673 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 9 07:23:50.043193 sudo[1616]: pam_unix(sudo:session): session closed for user root
Oct 9 07:23:50.045198 sshd[1612]: pam_unix(sshd:session): session closed for user core
Oct 9 07:23:50.065715 systemd[1]: sshd@6-10.0.0.107:22-10.0.0.1:39454.service: Deactivated successfully.
Oct 9 07:23:50.067511 systemd[1]: session-7.scope: Deactivated successfully.
Oct 9 07:23:50.069237 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit.
Oct 9 07:23:50.085912 systemd[1]: Started sshd@7-10.0.0.107:22-10.0.0.1:39468.service - OpenSSH per-connection server daemon (10.0.0.1:39468).
Oct 9 07:23:50.086910 systemd-logind[1441]: Removed session 7.
Oct 9 07:23:50.117492 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 39468 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8
Oct 9 07:23:50.118943 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:23:50.122877 systemd-logind[1441]: New session 8 of user core.
Oct 9 07:23:50.132800 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 9 07:23:50.187019 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 9 07:23:50.187320 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 9 07:23:50.190813 sudo[1625]: pam_unix(sudo:session): session closed for user root
Oct 9 07:23:50.198248 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 9 07:23:50.198620 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 9 07:23:50.227067 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 9 07:23:50.228559 auditctl[1628]: No rules
Oct 9 07:23:50.229049 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 07:23:50.229340 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 9 07:23:50.232747 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 9 07:23:50.263028 augenrules[1646]: No rules
Oct 9 07:23:50.265062 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 9 07:23:50.266380 sudo[1624]: pam_unix(sudo:session): session closed for user root
Oct 9 07:23:50.268183 sshd[1621]: pam_unix(sshd:session): session closed for user core
Oct 9 07:23:50.283522 systemd[1]: sshd@7-10.0.0.107:22-10.0.0.1:39468.service: Deactivated successfully.
Oct 9 07:23:50.285363 systemd[1]: session-8.scope: Deactivated successfully.
Oct 9 07:23:50.286996 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit.
Oct 9 07:23:50.295905 systemd[1]: Started sshd@8-10.0.0.107:22-10.0.0.1:39484.service - OpenSSH per-connection server daemon (10.0.0.1:39484).
Oct 9 07:23:50.296888 systemd-logind[1441]: Removed session 8.
Oct 9 07:23:50.326942 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 39484 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8
Oct 9 07:23:50.328467 sshd[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:23:50.332077 systemd-logind[1441]: New session 9 of user core.
Oct 9 07:23:50.341813 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 9 07:23:50.394404 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 9 07:23:50.394714 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 9 07:23:50.500900 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 9 07:23:50.501037 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 9 07:23:50.729461 dockerd[1667]: time="2024-10-09T07:23:50.729312736Z" level=info msg="Starting up"
Oct 9 07:23:50.730505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 9 07:23:50.736996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:23:50.882578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:23:50.886784 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 07:23:50.945389 kubelet[1687]: E1009 07:23:50.945316 1687 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 07:23:50.953263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 07:23:50.953471 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 07:23:51.901918 dockerd[1667]: time="2024-10-09T07:23:51.901849982Z" level=info msg="Loading containers: start."
Oct 9 07:23:52.020713 kernel: Initializing XFRM netlink socket
Oct 9 07:23:52.108594 systemd-networkd[1397]: docker0: Link UP
Oct 9 07:23:52.132330 dockerd[1667]: time="2024-10-09T07:23:52.132279991Z" level=info msg="Loading containers: done."
Oct 9 07:23:52.181171 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2411420075-merged.mount: Deactivated successfully.
Oct 9 07:23:52.238921 dockerd[1667]: time="2024-10-09T07:23:52.238858553Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 9 07:23:52.239114 dockerd[1667]: time="2024-10-09T07:23:52.239072996Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Oct 9 07:23:52.239266 dockerd[1667]: time="2024-10-09T07:23:52.239211656Z" level=info msg="Daemon has completed initialization"
Oct 9 07:23:52.597696 dockerd[1667]: time="2024-10-09T07:23:52.597531503Z" level=info msg="API listen on /run/docker.sock"
Oct 9 07:23:52.597834 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 9 07:23:53.454875 containerd[1462]: time="2024-10-09T07:23:53.454827860Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 9 07:23:54.981943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1610578321.mount: Deactivated successfully.
Oct 9 07:23:56.291544 containerd[1462]: time="2024-10-09T07:23:56.291478873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:23:56.292237 containerd[1462]: time="2024-10-09T07:23:56.292188985Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841"
Oct 9 07:23:56.293373 containerd[1462]: time="2024-10-09T07:23:56.293339763Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:23:56.296212 containerd[1462]: time="2024-10-09T07:23:56.296171804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:23:56.297087 containerd[1462]: time="2024-10-09T07:23:56.297058677Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 2.842187296s"
Oct 9 07:23:56.297128 containerd[1462]: time="2024-10-09T07:23:56.297088764Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\""
Oct 9 07:23:56.318618 containerd[1462]: time="2024-10-09T07:23:56.318380784Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 9 07:23:58.300530 containerd[1462]: time="2024-10-09T07:23:58.300454620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:23:58.301281 containerd[1462]: time="2024-10-09T07:23:58.301206880Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673"
Oct 9 07:23:58.302566 containerd[1462]: time="2024-10-09T07:23:58.302532136Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:23:58.305110 containerd[1462]: time="2024-10-09T07:23:58.305070256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:23:58.306261 containerd[1462]: time="2024-10-09T07:23:58.306218118Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 1.987797801s"
Oct 9 07:23:58.306322 containerd[1462]: time="2024-10-09T07:23:58.306262562Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\""
Oct 9 07:23:58.329757 containerd[1462]: time="2024-10-09T07:23:58.329712158Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 9 07:23:59.263729 containerd[1462]: time="2024-10-09T07:23:59.263661937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:23:59.264447 containerd[1462]: time="2024-10-09T07:23:59.264383280Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456"
Oct 9 07:23:59.265518 containerd[1462]: time="2024-10-09T07:23:59.265469758Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:23:59.270577 containerd[1462]: time="2024-10-09T07:23:59.270538874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:23:59.271522 containerd[1462]: time="2024-10-09T07:23:59.271482003Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 941.727986ms"
Oct 9 07:23:59.271561 containerd[1462]: time="2024-10-09T07:23:59.271518722Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\""
Oct 9 07:23:59.292673 containerd[1462]: time="2024-10-09T07:23:59.292642747Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 9 07:24:00.296124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2074051340.mount: Deactivated successfully.
Oct 9 07:24:00.967980 containerd[1462]: time="2024-10-09T07:24:00.967924057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:00.968741 containerd[1462]: time="2024-10-09T07:24:00.968703389Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750"
Oct 9 07:24:00.969901 containerd[1462]: time="2024-10-09T07:24:00.969877671Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:00.972080 containerd[1462]: time="2024-10-09T07:24:00.972034576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:00.972551 containerd[1462]: time="2024-10-09T07:24:00.972520367Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 1.679847443s"
Oct 9 07:24:00.972587 containerd[1462]: time="2024-10-09T07:24:00.972550153Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\""
Oct 9 07:24:00.994513 containerd[1462]: time="2024-10-09T07:24:00.994468467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 9 07:24:00.996022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 9 07:24:01.003830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:24:01.140918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:24:01.146553 (kubelet)[1928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 07:24:01.206547 kubelet[1928]: E1009 07:24:01.206464 1928 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 07:24:01.211213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 07:24:01.211416 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 07:24:01.766376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3429875653.mount: Deactivated successfully.
Oct 9 07:24:02.416916 containerd[1462]: time="2024-10-09T07:24:02.416857605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:02.417714 containerd[1462]: time="2024-10-09T07:24:02.417649991Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Oct 9 07:24:02.418961 containerd[1462]: time="2024-10-09T07:24:02.418934460Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:02.421779 containerd[1462]: time="2024-10-09T07:24:02.421715575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:02.424873 containerd[1462]: time="2024-10-09T07:24:02.424808395Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.430304562s"
Oct 9 07:24:02.424933 containerd[1462]: time="2024-10-09T07:24:02.424869540Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 9 07:24:02.448043 containerd[1462]: time="2024-10-09T07:24:02.447999377Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 9 07:24:02.960464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154342136.mount: Deactivated successfully.
Oct 9 07:24:02.967377 containerd[1462]: time="2024-10-09T07:24:02.967324567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:02.968010 containerd[1462]: time="2024-10-09T07:24:02.967939059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Oct 9 07:24:02.969020 containerd[1462]: time="2024-10-09T07:24:02.968983628Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:02.971094 containerd[1462]: time="2024-10-09T07:24:02.971053440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:02.971823 containerd[1462]: time="2024-10-09T07:24:02.971778209Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 523.739789ms"
Oct 9 07:24:02.971823 containerd[1462]: time="2024-10-09T07:24:02.971817172Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Oct 9 07:24:02.993335 containerd[1462]: time="2024-10-09T07:24:02.993289590Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 9 07:24:03.886853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215835389.mount: Deactivated successfully.
Oct 9 07:24:06.148917 containerd[1462]: time="2024-10-09T07:24:06.148843060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:06.149614 containerd[1462]: time="2024-10-09T07:24:06.149524477Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Oct 9 07:24:06.150972 containerd[1462]: time="2024-10-09T07:24:06.150938239Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:06.154186 containerd[1462]: time="2024-10-09T07:24:06.154148770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:24:06.155223 containerd[1462]: time="2024-10-09T07:24:06.155174984Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.161848876s"
Oct 9 07:24:06.155223 containerd[1462]: time="2024-10-09T07:24:06.155219688Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Oct 9 07:24:08.946815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:24:08.954888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:24:08.974937 systemd[1]: Reloading requested from client PID 2121 ('systemctl') (unit session-9.scope)...
Oct 9 07:24:08.974954 systemd[1]: Reloading...
Oct 9 07:24:09.053797 zram_generator::config[2158]: No configuration found.
Oct 9 07:24:09.329494 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:24:09.408706 systemd[1]: Reloading finished in 433 ms.
Oct 9 07:24:09.467959 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 9 07:24:09.468056 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 9 07:24:09.468315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:24:09.478894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:24:09.618150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:24:09.622539 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 07:24:09.664707 kubelet[2206]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:24:09.664707 kubelet[2206]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:24:09.664707 kubelet[2206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:24:09.664707 kubelet[2206]: I1009 07:24:09.663947 2206 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:24:09.824884 kubelet[2206]: I1009 07:24:09.824858 2206 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:24:09.824884 kubelet[2206]: I1009 07:24:09.824885 2206 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:24:09.825117 kubelet[2206]: I1009 07:24:09.825094 2206 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:24:09.838275 kubelet[2206]: E1009 07:24:09.838240 2206 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:09.840212 kubelet[2206]: I1009 07:24:09.840182 2206 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:24:09.852214 kubelet[2206]: I1009 07:24:09.852175 2206 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 07:24:09.853245 kubelet[2206]: I1009 07:24:09.853217 2206 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:24:09.853409 kubelet[2206]: I1009 07:24:09.853384 2206 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 07:24:09.853511 kubelet[2206]: I1009 07:24:09.853410 2206 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:24:09.853511 kubelet[2206]: I1009 07:24:09.853420 2206 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:24:09.853568 kubelet[2206]: I1009 
07:24:09.853529 2206 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:24:09.853649 kubelet[2206]: I1009 07:24:09.853624 2206 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:24:09.853649 kubelet[2206]: I1009 07:24:09.853642 2206 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:24:09.853721 kubelet[2206]: I1009 07:24:09.853666 2206 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:24:09.853721 kubelet[2206]: I1009 07:24:09.853690 2206 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:24:09.854759 kubelet[2206]: I1009 07:24:09.854728 2206 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:24:09.855497 kubelet[2206]: W1009 07:24:09.855452 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.107:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:09.855544 kubelet[2206]: E1009 07:24:09.855503 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.107:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:09.855646 kubelet[2206]: W1009 07:24:09.855589 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:09.855759 kubelet[2206]: E1009 07:24:09.855669 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": 
dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:09.857098 kubelet[2206]: I1009 07:24:09.857073 2206 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:24:09.857931 kubelet[2206]: W1009 07:24:09.857909 2206 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 07:24:09.858479 kubelet[2206]: I1009 07:24:09.858458 2206 server.go:1256] "Started kubelet" Oct 9 07:24:09.858540 kubelet[2206]: I1009 07:24:09.858525 2206 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:24:09.858783 kubelet[2206]: I1009 07:24:09.858758 2206 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:24:09.859026 kubelet[2206]: I1009 07:24:09.859002 2206 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:24:09.859339 kubelet[2206]: I1009 07:24:09.859288 2206 server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:24:09.860846 kubelet[2206]: I1009 07:24:09.860821 2206 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:24:09.862528 kubelet[2206]: E1009 07:24:09.862426 2206 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:24:09.862528 kubelet[2206]: I1009 07:24:09.862463 2206 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:24:09.862625 kubelet[2206]: I1009 07:24:09.862565 2206 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 07:24:09.862654 kubelet[2206]: I1009 07:24:09.862638 2206 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:24:09.862911 kubelet[2206]: W1009 07:24:09.862856 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:09.862911 kubelet[2206]: E1009 07:24:09.862891 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:09.863989 kubelet[2206]: E1009 07:24:09.863433 2206 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:24:09.863989 kubelet[2206]: E1009 07:24:09.863465 2206 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.107:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.107:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fcb7fbf9c37613 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 07:24:09.858438675 +0000 UTC m=+0.231807463,LastTimestamp:2024-10-09 07:24:09.858438675 +0000 UTC m=+0.231807463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 07:24:09.863989 kubelet[2206]: I1009 07:24:09.863546 2206 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:24:09.863989 kubelet[2206]: I1009 07:24:09.863625 2206 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:24:09.863989 
kubelet[2206]: E1009 07:24:09.863967 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="200ms" Oct 9 07:24:09.864595 kubelet[2206]: I1009 07:24:09.864574 2206 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:24:09.878977 kubelet[2206]: I1009 07:24:09.878835 2206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:24:09.880360 kubelet[2206]: I1009 07:24:09.880237 2206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 07:24:09.880360 kubelet[2206]: I1009 07:24:09.880264 2206 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:24:09.880360 kubelet[2206]: I1009 07:24:09.880280 2206 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:24:09.880360 kubelet[2206]: E1009 07:24:09.880340 2206 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:24:09.880655 kubelet[2206]: W1009 07:24:09.880626 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:09.880655 kubelet[2206]: I1009 07:24:09.880634 2206 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:24:09.880655 kubelet[2206]: I1009 07:24:09.880655 2206 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:24:09.880655 kubelet[2206]: E1009 07:24:09.880656 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:09.880814 kubelet[2206]: I1009 07:24:09.880754 2206 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:24:09.964023 kubelet[2206]: I1009 07:24:09.963978 2206 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 07:24:09.964379 kubelet[2206]: E1009 07:24:09.964353 2206 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Oct 9 07:24:09.980443 kubelet[2206]: E1009 07:24:09.980410 2206 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 07:24:10.065051 kubelet[2206]: E1009 07:24:10.065013 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="400ms" Oct 9 07:24:10.166550 kubelet[2206]: I1009 07:24:10.166423 2206 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 07:24:10.166807 kubelet[2206]: E1009 07:24:10.166780 2206 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Oct 9 07:24:10.180959 kubelet[2206]: E1009 07:24:10.180916 2206 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 07:24:10.240514 kubelet[2206]: I1009 07:24:10.240473 2206 policy_none.go:49] "None policy: Start" Oct 9 07:24:10.241329 kubelet[2206]: I1009 07:24:10.241295 2206 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:24:10.241329 kubelet[2206]: 
I1009 07:24:10.241319 2206 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:24:10.247996 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 07:24:10.269560 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 07:24:10.272886 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 9 07:24:10.286853 kubelet[2206]: I1009 07:24:10.286811 2206 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:24:10.287138 kubelet[2206]: I1009 07:24:10.287116 2206 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:24:10.288009 kubelet[2206]: E1009 07:24:10.287980 2206 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 9 07:24:10.466514 kubelet[2206]: E1009 07:24:10.466408 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="800ms" Oct 9 07:24:10.568796 kubelet[2206]: I1009 07:24:10.568760 2206 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 07:24:10.569131 kubelet[2206]: E1009 07:24:10.569101 2206 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Oct 9 07:24:10.581181 kubelet[2206]: I1009 07:24:10.581147 2206 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 07:24:10.582092 kubelet[2206]: I1009 07:24:10.582061 2206 topology_manager.go:215] "Topology Admit 
Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 07:24:10.583049 kubelet[2206]: I1009 07:24:10.583018 2206 topology_manager.go:215] "Topology Admit Handler" podUID="dbd09bcaa5474e47383264e90cc4dba9" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 07:24:10.588206 systemd[1]: Created slice kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice - libcontainer container kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice. Oct 9 07:24:10.615181 systemd[1]: Created slice kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice - libcontainer container kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice. Oct 9 07:24:10.624231 systemd[1]: Created slice kubepods-burstable-poddbd09bcaa5474e47383264e90cc4dba9.slice - libcontainer container kubepods-burstable-poddbd09bcaa5474e47383264e90cc4dba9.slice. Oct 9 07:24:10.667100 kubelet[2206]: I1009 07:24:10.667067 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 9 07:24:10.667100 kubelet[2206]: I1009 07:24:10.667099 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbd09bcaa5474e47383264e90cc4dba9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dbd09bcaa5474e47383264e90cc4dba9\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:24:10.667100 kubelet[2206]: I1009 07:24:10.667119 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:24:10.667673 kubelet[2206]: I1009 07:24:10.667136 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:24:10.667673 kubelet[2206]: I1009 07:24:10.667155 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:24:10.667673 kubelet[2206]: I1009 07:24:10.667176 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:24:10.667673 kubelet[2206]: I1009 07:24:10.667229 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbd09bcaa5474e47383264e90cc4dba9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dbd09bcaa5474e47383264e90cc4dba9\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:24:10.667673 kubelet[2206]: I1009 07:24:10.667249 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/dbd09bcaa5474e47383264e90cc4dba9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dbd09bcaa5474e47383264e90cc4dba9\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:24:10.667822 kubelet[2206]: I1009 07:24:10.667313 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:24:10.912291 kubelet[2206]: E1009 07:24:10.912248 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:10.913005 containerd[1462]: time="2024-10-09T07:24:10.912951163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 9 07:24:10.923103 kubelet[2206]: E1009 07:24:10.923078 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:10.923597 containerd[1462]: time="2024-10-09T07:24:10.923541904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 9 07:24:10.926757 kubelet[2206]: E1009 07:24:10.926721 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:10.927029 containerd[1462]: time="2024-10-09T07:24:10.926998737Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dbd09bcaa5474e47383264e90cc4dba9,Namespace:kube-system,Attempt:0,}" Oct 9 07:24:10.939404 kubelet[2206]: W1009 07:24:10.939371 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:10.939504 kubelet[2206]: E1009 07:24:10.939413 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:11.019249 kubelet[2206]: W1009 07:24:11.019178 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:11.019249 kubelet[2206]: E1009 07:24:11.019240 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:11.267669 kubelet[2206]: E1009 07:24:11.267624 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="1.6s" Oct 9 07:24:11.314564 kubelet[2206]: W1009 07:24:11.314486 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:11.314564 kubelet[2206]: E1009 07:24:11.314563 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:11.371188 kubelet[2206]: I1009 07:24:11.371141 2206 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 07:24:11.371649 kubelet[2206]: E1009 07:24:11.371617 2206 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Oct 9 07:24:11.441163 kubelet[2206]: W1009 07:24:11.441095 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.107:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:11.441163 kubelet[2206]: E1009 07:24:11.441161 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.107:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:11.996795 kubelet[2206]: E1009 07:24:11.996747 2206 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.107:6443: connect: connection refused Oct 9 07:24:12.168488 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1252404185.mount: Deactivated successfully. Oct 9 07:24:12.176449 containerd[1462]: time="2024-10-09T07:24:12.176400467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:24:12.177392 containerd[1462]: time="2024-10-09T07:24:12.177351135Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:24:12.178127 containerd[1462]: time="2024-10-09T07:24:12.178088492Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:24:12.179209 containerd[1462]: time="2024-10-09T07:24:12.179173398Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:24:12.180207 containerd[1462]: time="2024-10-09T07:24:12.180163832Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 07:24:12.181022 containerd[1462]: time="2024-10-09T07:24:12.180968739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:24:12.182326 containerd[1462]: time="2024-10-09T07:24:12.182294188Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:24:12.186239 containerd[1462]: time="2024-10-09T07:24:12.186206809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:24:12.187193 containerd[1462]: time="2024-10-09T07:24:12.187153118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.260075242s" Oct 9 07:24:12.188662 containerd[1462]: time="2024-10-09T07:24:12.188635248Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.27557532s" Oct 9 07:24:12.190062 containerd[1462]: time="2024-10-09T07:24:12.190030991Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.266374331s" Oct 9 07:24:12.303662 containerd[1462]: time="2024-10-09T07:24:12.303381218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:24:12.303662 containerd[1462]: time="2024-10-09T07:24:12.303431486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:12.303662 containerd[1462]: time="2024-10-09T07:24:12.303448247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:24:12.303662 containerd[1462]: time="2024-10-09T07:24:12.303460762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:12.305824 containerd[1462]: time="2024-10-09T07:24:12.305303604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:24:12.306064 containerd[1462]: time="2024-10-09T07:24:12.306009601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:12.306064 containerd[1462]: time="2024-10-09T07:24:12.306032435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:24:12.306064 containerd[1462]: time="2024-10-09T07:24:12.306043978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:12.306461 containerd[1462]: time="2024-10-09T07:24:12.306364263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:24:12.306614 containerd[1462]: time="2024-10-09T07:24:12.306528730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:12.306614 containerd[1462]: time="2024-10-09T07:24:12.306546874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:24:12.306614 containerd[1462]: time="2024-10-09T07:24:12.306556783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:12.331833 systemd[1]: Started cri-containerd-16673de9ba81c2627ed3a36d8c07f093187f9cfdfbea344d73d75a520f7a4ee8.scope - libcontainer container 16673de9ba81c2627ed3a36d8c07f093187f9cfdfbea344d73d75a520f7a4ee8. Oct 9 07:24:12.333488 systemd[1]: Started cri-containerd-2e228969a4d894688fa4459b4d7bb0acda0481da6c041d2241a5a00eb7a661bd.scope - libcontainer container 2e228969a4d894688fa4459b4d7bb0acda0481da6c041d2241a5a00eb7a661bd. Oct 9 07:24:12.335494 systemd[1]: Started cri-containerd-e63234c18f0b8d2c26a5b9538a20af782426bae9c08edf3cea43c15c543153ca.scope - libcontainer container e63234c18f0b8d2c26a5b9538a20af782426bae9c08edf3cea43c15c543153ca. Oct 9 07:24:12.372057 containerd[1462]: time="2024-10-09T07:24:12.371986518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"e63234c18f0b8d2c26a5b9538a20af782426bae9c08edf3cea43c15c543153ca\"" Oct 9 07:24:12.374192 kubelet[2206]: E1009 07:24:12.374072 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:12.377895 containerd[1462]: time="2024-10-09T07:24:12.377408080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dbd09bcaa5474e47383264e90cc4dba9,Namespace:kube-system,Attempt:0,} returns sandbox id \"16673de9ba81c2627ed3a36d8c07f093187f9cfdfbea344d73d75a520f7a4ee8\"" Oct 9 07:24:12.378294 kubelet[2206]: E1009 07:24:12.378272 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:12.378630 containerd[1462]: time="2024-10-09T07:24:12.378507483Z" level=info msg="CreateContainer within sandbox 
\"e63234c18f0b8d2c26a5b9538a20af782426bae9c08edf3cea43c15c543153ca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 07:24:12.380760 containerd[1462]: time="2024-10-09T07:24:12.380726890Z" level=info msg="CreateContainer within sandbox \"16673de9ba81c2627ed3a36d8c07f093187f9cfdfbea344d73d75a520f7a4ee8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 07:24:12.382860 containerd[1462]: time="2024-10-09T07:24:12.382671860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e228969a4d894688fa4459b4d7bb0acda0481da6c041d2241a5a00eb7a661bd\"" Oct 9 07:24:12.383842 kubelet[2206]: E1009 07:24:12.383797 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:12.386037 containerd[1462]: time="2024-10-09T07:24:12.386007602Z" level=info msg="CreateContainer within sandbox \"2e228969a4d894688fa4459b4d7bb0acda0481da6c041d2241a5a00eb7a661bd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 07:24:12.403184 containerd[1462]: time="2024-10-09T07:24:12.403123927Z" level=info msg="CreateContainer within sandbox \"e63234c18f0b8d2c26a5b9538a20af782426bae9c08edf3cea43c15c543153ca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9706a82b4ec0a4f17cdcdc660c84d34cc2200fbb741d309e47091da922429e7f\"" Oct 9 07:24:12.403754 containerd[1462]: time="2024-10-09T07:24:12.403729060Z" level=info msg="StartContainer for \"9706a82b4ec0a4f17cdcdc660c84d34cc2200fbb741d309e47091da922429e7f\"" Oct 9 07:24:12.408428 containerd[1462]: time="2024-10-09T07:24:12.408391172Z" level=info msg="CreateContainer within sandbox \"16673de9ba81c2627ed3a36d8c07f093187f9cfdfbea344d73d75a520f7a4ee8\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7281efc8342381c2e7540154633c9ab1482e053403f6a480b16eab35c532c4a8\"" Oct 9 07:24:12.408951 containerd[1462]: time="2024-10-09T07:24:12.408784278Z" level=info msg="StartContainer for \"7281efc8342381c2e7540154633c9ab1482e053403f6a480b16eab35c532c4a8\"" Oct 9 07:24:12.410274 containerd[1462]: time="2024-10-09T07:24:12.410152047Z" level=info msg="CreateContainer within sandbox \"2e228969a4d894688fa4459b4d7bb0acda0481da6c041d2241a5a00eb7a661bd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"85aa92f8a9336d7ab3028d7ab764d36ebdca084dfd15ddc33a15caab9f2b30a8\"" Oct 9 07:24:12.410587 containerd[1462]: time="2024-10-09T07:24:12.410557377Z" level=info msg="StartContainer for \"85aa92f8a9336d7ab3028d7ab764d36ebdca084dfd15ddc33a15caab9f2b30a8\"" Oct 9 07:24:12.438905 systemd[1]: Started cri-containerd-9706a82b4ec0a4f17cdcdc660c84d34cc2200fbb741d309e47091da922429e7f.scope - libcontainer container 9706a82b4ec0a4f17cdcdc660c84d34cc2200fbb741d309e47091da922429e7f. Oct 9 07:24:12.462992 systemd[1]: Started cri-containerd-7281efc8342381c2e7540154633c9ab1482e053403f6a480b16eab35c532c4a8.scope - libcontainer container 7281efc8342381c2e7540154633c9ab1482e053403f6a480b16eab35c532c4a8. Oct 9 07:24:12.465316 systemd[1]: Started cri-containerd-85aa92f8a9336d7ab3028d7ab764d36ebdca084dfd15ddc33a15caab9f2b30a8.scope - libcontainer container 85aa92f8a9336d7ab3028d7ab764d36ebdca084dfd15ddc33a15caab9f2b30a8. 
Oct 9 07:24:12.502341 containerd[1462]: time="2024-10-09T07:24:12.502072929Z" level=info msg="StartContainer for \"9706a82b4ec0a4f17cdcdc660c84d34cc2200fbb741d309e47091da922429e7f\" returns successfully" Oct 9 07:24:12.512670 containerd[1462]: time="2024-10-09T07:24:12.512602982Z" level=info msg="StartContainer for \"7281efc8342381c2e7540154633c9ab1482e053403f6a480b16eab35c532c4a8\" returns successfully" Oct 9 07:24:12.521574 containerd[1462]: time="2024-10-09T07:24:12.521523682Z" level=info msg="StartContainer for \"85aa92f8a9336d7ab3028d7ab764d36ebdca084dfd15ddc33a15caab9f2b30a8\" returns successfully" Oct 9 07:24:12.889430 kubelet[2206]: E1009 07:24:12.889398 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:12.892103 kubelet[2206]: E1009 07:24:12.892085 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:12.894671 kubelet[2206]: E1009 07:24:12.894649 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:12.973875 kubelet[2206]: I1009 07:24:12.973840 2206 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 07:24:13.301710 kubelet[2206]: E1009 07:24:13.298766 2206 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 07:24:13.397227 kubelet[2206]: I1009 07:24:13.397184 2206 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 07:24:13.856636 kubelet[2206]: I1009 07:24:13.856592 2206 apiserver.go:52] "Watching apiserver" Oct 9 07:24:13.863639 kubelet[2206]: I1009 07:24:13.863612 2206 
desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:24:13.898823 kubelet[2206]: E1009 07:24:13.898792 2206 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 9 07:24:13.898823 kubelet[2206]: E1009 07:24:13.898803 2206 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 9 07:24:13.899206 kubelet[2206]: E1009 07:24:13.899184 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:13.899337 kubelet[2206]: E1009 07:24:13.899221 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:15.711144 systemd[1]: Reloading requested from client PID 2484 ('systemctl') (unit session-9.scope)... Oct 9 07:24:15.711158 systemd[1]: Reloading... Oct 9 07:24:15.780709 zram_generator::config[2521]: No configuration found. Oct 9 07:24:15.882987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:24:15.970390 systemd[1]: Reloading finished in 258 ms. Oct 9 07:24:16.014173 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:24:16.036889 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:24:16.037147 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 07:24:16.044972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:24:16.174787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:24:16.180106 (kubelet)[2566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:24:16.218881 kubelet[2566]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:24:16.218881 kubelet[2566]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:24:16.218881 kubelet[2566]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:24:16.219272 kubelet[2566]: I1009 07:24:16.218934 2566 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:24:16.224236 kubelet[2566]: I1009 07:24:16.224136 2566 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:24:16.224236 kubelet[2566]: I1009 07:24:16.224165 2566 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:24:16.224428 kubelet[2566]: I1009 07:24:16.224368 2566 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:24:16.226117 kubelet[2566]: I1009 07:24:16.225979 2566 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 9 07:24:16.228938 kubelet[2566]: I1009 07:24:16.228900 2566 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:24:16.236254 kubelet[2566]: I1009 07:24:16.236225 2566 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 07:24:16.236493 kubelet[2566]: I1009 07:24:16.236477 2566 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:24:16.236713 kubelet[2566]: I1009 07:24:16.236634 2566 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null
} Oct 9 07:24:16.236713 kubelet[2566]: I1009 07:24:16.236674 2566 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:24:16.236713 kubelet[2566]: I1009 07:24:16.236705 2566 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:24:16.239928 kubelet[2566]: I1009 07:24:16.236741 2566 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:24:16.239928 kubelet[2566]: I1009 07:24:16.236832 2566 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:24:16.239928 kubelet[2566]: I1009 07:24:16.236845 2566 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:24:16.239928 kubelet[2566]: I1009 07:24:16.236870 2566 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:24:16.239928 kubelet[2566]: I1009 07:24:16.236884 2566 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:24:16.239928 kubelet[2566]: I1009 07:24:16.237950 2566 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:24:16.239928 kubelet[2566]: I1009 07:24:16.238104 2566 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:24:16.239928 kubelet[2566]: I1009 07:24:16.238425 2566 server.go:1256] "Started kubelet" Oct 9 07:24:16.240302 kubelet[2566]: I1009 07:24:16.240223 2566 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:24:16.242176 kubelet[2566]: E1009 07:24:16.242153 2566 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:24:16.246487 kubelet[2566]: I1009 07:24:16.245250 2566 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:24:16.246789 kubelet[2566]: I1009 07:24:16.246773 2566 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:24:16.248110 kubelet[2566]: I1009 07:24:16.248085 2566 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 07:24:16.248952 kubelet[2566]: I1009 07:24:16.248216 2566 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:24:16.253802 kubelet[2566]: I1009 07:24:16.253378 2566 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:24:16.253802 kubelet[2566]: I1009 07:24:16.253433 2566 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:24:16.253802 kubelet[2566]: I1009 07:24:16.253666 2566 server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:24:16.254098 kubelet[2566]: I1009 07:24:16.254080 2566 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:24:16.254971 kubelet[2566]: I1009 07:24:16.254949 2566 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:24:16.254971 kubelet[2566]: I1009 07:24:16.254969 2566 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:24:16.257970 kubelet[2566]: I1009 07:24:16.257900 2566 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:24:16.259957 kubelet[2566]: I1009 07:24:16.259930 2566 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 07:24:16.260394 kubelet[2566]: I1009 07:24:16.260071 2566 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:24:16.260394 kubelet[2566]: I1009 07:24:16.260092 2566 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:24:16.260394 kubelet[2566]: E1009 07:24:16.260133 2566 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:24:16.289753 kubelet[2566]: I1009 07:24:16.289725 2566 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:24:16.289753 kubelet[2566]: I1009 07:24:16.289748 2566 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:24:16.289753 kubelet[2566]: I1009 07:24:16.289765 2566 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:24:16.289925 kubelet[2566]: I1009 07:24:16.289917 2566 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 07:24:16.289948 kubelet[2566]: I1009 07:24:16.289938 2566 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 07:24:16.289948 kubelet[2566]: I1009 07:24:16.289946 2566 policy_none.go:49] "None policy: Start" Oct 9 07:24:16.290431 kubelet[2566]: I1009 07:24:16.290415 2566 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:24:16.290473 kubelet[2566]: I1009 07:24:16.290437 2566 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:24:16.290573 kubelet[2566]: I1009 07:24:16.290562 2566 state_mem.go:75] "Updated machine memory state" Oct 9 07:24:16.294204 kubelet[2566]: I1009 07:24:16.294185 2566 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:24:16.294537 kubelet[2566]: I1009 07:24:16.294432 2566 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:24:16.351868 kubelet[2566]: I1009 07:24:16.351845 2566 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" Oct 9 07:24:16.359036 kubelet[2566]: I1009 07:24:16.358967 2566 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 9 07:24:16.359036 kubelet[2566]: I1009 07:24:16.359041 2566 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 07:24:16.360728 kubelet[2566]: I1009 07:24:16.360227 2566 topology_manager.go:215] "Topology Admit Handler" podUID="dbd09bcaa5474e47383264e90cc4dba9" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 07:24:16.360728 kubelet[2566]: I1009 07:24:16.360308 2566 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 07:24:16.360728 kubelet[2566]: I1009 07:24:16.360347 2566 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 07:24:16.549859 kubelet[2566]: I1009 07:24:16.549819 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbd09bcaa5474e47383264e90cc4dba9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dbd09bcaa5474e47383264e90cc4dba9\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:24:16.549859 kubelet[2566]: I1009 07:24:16.549868 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:24:16.550097 kubelet[2566]: I1009 07:24:16.549894 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:24:16.550097 kubelet[2566]: I1009 07:24:16.549918 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:24:16.550097 kubelet[2566]: I1009 07:24:16.549943 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbd09bcaa5474e47383264e90cc4dba9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dbd09bcaa5474e47383264e90cc4dba9\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:24:16.550097 kubelet[2566]: I1009 07:24:16.549991 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbd09bcaa5474e47383264e90cc4dba9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dbd09bcaa5474e47383264e90cc4dba9\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:24:16.550097 kubelet[2566]: I1009 07:24:16.550015 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:24:16.550209 kubelet[2566]: I1009 07:24:16.550040 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:24:16.550209 kubelet[2566]: I1009 07:24:16.550063 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 9 07:24:16.666442 kubelet[2566]: E1009 07:24:16.666410 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:16.666731 kubelet[2566]: E1009 07:24:16.666712 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:16.667988 kubelet[2566]: E1009 07:24:16.667968 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:17.240705 kubelet[2566]: I1009 07:24:17.238148 2566 apiserver.go:52] "Watching apiserver" Oct 9 07:24:17.248908 kubelet[2566]: I1009 07:24:17.248859 2566 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:24:17.274118 kubelet[2566]: E1009 07:24:17.274037 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:17.274334 kubelet[2566]: E1009 07:24:17.274304 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:17.280260 kubelet[2566]: E1009 07:24:17.280229 2566 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 07:24:17.280641 kubelet[2566]: E1009 07:24:17.280620 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:17.300179 kubelet[2566]: I1009 07:24:17.300142 2566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3000778419999999 podStartE2EDuration="1.300077842s" podCreationTimestamp="2024-10-09 07:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:24:17.29357173 +0000 UTC m=+1.109065948" watchObservedRunningTime="2024-10-09 07:24:17.300077842 +0000 UTC m=+1.115572060" Oct 9 07:24:17.300331 kubelet[2566]: I1009 07:24:17.300231 2566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.300215344 podStartE2EDuration="1.300215344s" podCreationTimestamp="2024-10-09 07:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:24:17.300057994 +0000 UTC m=+1.115552212" watchObservedRunningTime="2024-10-09 07:24:17.300215344 +0000 UTC m=+1.115709562" Oct 9 07:24:17.307194 kubelet[2566]: I1009 07:24:17.307160 2566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.3071192169999999 podStartE2EDuration="1.307119217s" podCreationTimestamp="2024-10-09 07:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:24:17.307048202 +0000 UTC m=+1.122542420" watchObservedRunningTime="2024-10-09 07:24:17.307119217 +0000 UTC m=+1.122613435" Oct 9 07:24:18.275010 kubelet[2566]: E1009 07:24:18.274984 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:18.429938 kubelet[2566]: E1009 07:24:18.429909 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:18.862577 kubelet[2566]: E1009 07:24:18.862546 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:19.275722 kubelet[2566]: E1009 07:24:19.275677 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:20.081000 sudo[1657]: pam_unix(sudo:session): session closed for user root Oct 9 07:24:20.082657 sshd[1654]: pam_unix(sshd:session): session closed for user core Oct 9 07:24:20.086835 systemd[1]: sshd@8-10.0.0.107:22-10.0.0.1:39484.service: Deactivated successfully. Oct 9 07:24:20.088767 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 07:24:20.088945 systemd[1]: session-9.scope: Consumed 4.423s CPU time, 138.9M memory peak, 0B memory swap peak. Oct 9 07:24:20.089405 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Oct 9 07:24:20.090162 systemd-logind[1441]: Removed session 9. Oct 9 07:24:23.902517 update_engine[1442]: I1009 07:24:23.902472 1442 update_attempter.cc:509] Updating boot flags... 
Oct 9 07:24:23.927714 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2666) Oct 9 07:24:23.957733 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2666) Oct 9 07:24:23.983769 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2666) Oct 9 07:24:28.434111 kubelet[2566]: E1009 07:24:28.434075 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:28.867099 kubelet[2566]: E1009 07:24:28.867058 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:29.215990 kubelet[2566]: E1009 07:24:29.215852 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:29.629283 kubelet[2566]: I1009 07:24:29.629253 2566 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 07:24:29.629709 containerd[1462]: time="2024-10-09T07:24:29.629607466Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 9 07:24:29.629948 kubelet[2566]: I1009 07:24:29.629829 2566 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 07:24:30.474404 kubelet[2566]: I1009 07:24:30.474352 2566 topology_manager.go:215] "Topology Admit Handler" podUID="e2287c31-2140-4c9c-8bd6-bfc512fddd7d" podNamespace="kube-system" podName="kube-proxy-s56zg" Oct 9 07:24:30.480860 systemd[1]: Created slice kubepods-besteffort-pode2287c31_2140_4c9c_8bd6_bfc512fddd7d.slice - libcontainer container kubepods-besteffort-pode2287c31_2140_4c9c_8bd6_bfc512fddd7d.slice. Oct 9 07:24:30.542967 kubelet[2566]: I1009 07:24:30.542930 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e2287c31-2140-4c9c-8bd6-bfc512fddd7d-kube-proxy\") pod \"kube-proxy-s56zg\" (UID: \"e2287c31-2140-4c9c-8bd6-bfc512fddd7d\") " pod="kube-system/kube-proxy-s56zg" Oct 9 07:24:30.542967 kubelet[2566]: I1009 07:24:30.542972 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjpzp\" (UniqueName: \"kubernetes.io/projected/e2287c31-2140-4c9c-8bd6-bfc512fddd7d-kube-api-access-jjpzp\") pod \"kube-proxy-s56zg\" (UID: \"e2287c31-2140-4c9c-8bd6-bfc512fddd7d\") " pod="kube-system/kube-proxy-s56zg" Oct 9 07:24:30.543145 kubelet[2566]: I1009 07:24:30.542994 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2287c31-2140-4c9c-8bd6-bfc512fddd7d-xtables-lock\") pod \"kube-proxy-s56zg\" (UID: \"e2287c31-2140-4c9c-8bd6-bfc512fddd7d\") " pod="kube-system/kube-proxy-s56zg" Oct 9 07:24:30.543145 kubelet[2566]: I1009 07:24:30.543014 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2287c31-2140-4c9c-8bd6-bfc512fddd7d-lib-modules\") pod 
\"kube-proxy-s56zg\" (UID: \"e2287c31-2140-4c9c-8bd6-bfc512fddd7d\") " pod="kube-system/kube-proxy-s56zg" Oct 9 07:24:30.587099 kubelet[2566]: I1009 07:24:30.587015 2566 topology_manager.go:215] "Topology Admit Handler" podUID="213bbf2e-a4e4-4f7e-829a-69f2cea437af" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-qkxqn" Oct 9 07:24:30.595985 systemd[1]: Created slice kubepods-besteffort-pod213bbf2e_a4e4_4f7e_829a_69f2cea437af.slice - libcontainer container kubepods-besteffort-pod213bbf2e_a4e4_4f7e_829a_69f2cea437af.slice. Oct 9 07:24:30.643318 kubelet[2566]: I1009 07:24:30.643218 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jtv5\" (UniqueName: \"kubernetes.io/projected/213bbf2e-a4e4-4f7e-829a-69f2cea437af-kube-api-access-6jtv5\") pod \"tigera-operator-5d56685c77-qkxqn\" (UID: \"213bbf2e-a4e4-4f7e-829a-69f2cea437af\") " pod="tigera-operator/tigera-operator-5d56685c77-qkxqn" Oct 9 07:24:30.643318 kubelet[2566]: I1009 07:24:30.643299 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/213bbf2e-a4e4-4f7e-829a-69f2cea437af-var-lib-calico\") pod \"tigera-operator-5d56685c77-qkxqn\" (UID: \"213bbf2e-a4e4-4f7e-829a-69f2cea437af\") " pod="tigera-operator/tigera-operator-5d56685c77-qkxqn" Oct 9 07:24:30.795109 kubelet[2566]: E1009 07:24:30.795059 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:30.795656 containerd[1462]: time="2024-10-09T07:24:30.795589491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s56zg,Uid:e2287c31-2140-4c9c-8bd6-bfc512fddd7d,Namespace:kube-system,Attempt:0,}" Oct 9 07:24:30.819745 containerd[1462]: time="2024-10-09T07:24:30.819606605Z" level=info msg="loading plugin 
\"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:24:30.819745 containerd[1462]: time="2024-10-09T07:24:30.819706424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:30.819745 containerd[1462]: time="2024-10-09T07:24:30.819729688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:24:30.819745 containerd[1462]: time="2024-10-09T07:24:30.819744586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:30.845831 systemd[1]: Started cri-containerd-2ac70b618cb67da08d0a367c8d3821db00cfd2662c76ff56416eca4c712bab80.scope - libcontainer container 2ac70b618cb67da08d0a367c8d3821db00cfd2662c76ff56416eca4c712bab80. Oct 9 07:24:30.868252 containerd[1462]: time="2024-10-09T07:24:30.868214559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s56zg,Uid:e2287c31-2140-4c9c-8bd6-bfc512fddd7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ac70b618cb67da08d0a367c8d3821db00cfd2662c76ff56416eca4c712bab80\"" Oct 9 07:24:30.868975 kubelet[2566]: E1009 07:24:30.868953 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:30.871169 containerd[1462]: time="2024-10-09T07:24:30.871046983Z" level=info msg="CreateContainer within sandbox \"2ac70b618cb67da08d0a367c8d3821db00cfd2662c76ff56416eca4c712bab80\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 07:24:30.889357 containerd[1462]: time="2024-10-09T07:24:30.889316904Z" level=info msg="CreateContainer within sandbox \"2ac70b618cb67da08d0a367c8d3821db00cfd2662c76ff56416eca4c712bab80\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} 
returns container id \"15eb2b2898c8dd8a7ef3b20cf4f8e4c231c99f19501368cec09f16edc2a3e977\"" Oct 9 07:24:30.890015 containerd[1462]: time="2024-10-09T07:24:30.889975529Z" level=info msg="StartContainer for \"15eb2b2898c8dd8a7ef3b20cf4f8e4c231c99f19501368cec09f16edc2a3e977\"" Oct 9 07:24:30.899537 containerd[1462]: time="2024-10-09T07:24:30.899505572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-qkxqn,Uid:213bbf2e-a4e4-4f7e-829a-69f2cea437af,Namespace:tigera-operator,Attempt:0,}" Oct 9 07:24:30.919852 systemd[1]: Started cri-containerd-15eb2b2898c8dd8a7ef3b20cf4f8e4c231c99f19501368cec09f16edc2a3e977.scope - libcontainer container 15eb2b2898c8dd8a7ef3b20cf4f8e4c231c99f19501368cec09f16edc2a3e977. Oct 9 07:24:30.929784 containerd[1462]: time="2024-10-09T07:24:30.929619770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:24:30.929784 containerd[1462]: time="2024-10-09T07:24:30.929736922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:30.929908 containerd[1462]: time="2024-10-09T07:24:30.929791335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:24:30.929908 containerd[1462]: time="2024-10-09T07:24:30.929817113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:30.947862 systemd[1]: Started cri-containerd-45fd44a99871a7bd3b019593695603418cb5f16252e86a7fc3a8d74ac3296825.scope - libcontainer container 45fd44a99871a7bd3b019593695603418cb5f16252e86a7fc3a8d74ac3296825. 
Oct 9 07:24:30.955396 containerd[1462]: time="2024-10-09T07:24:30.955348480Z" level=info msg="StartContainer for \"15eb2b2898c8dd8a7ef3b20cf4f8e4c231c99f19501368cec09f16edc2a3e977\" returns successfully" Oct 9 07:24:30.985317 containerd[1462]: time="2024-10-09T07:24:30.985284311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-qkxqn,Uid:213bbf2e-a4e4-4f7e-829a-69f2cea437af,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"45fd44a99871a7bd3b019593695603418cb5f16252e86a7fc3a8d74ac3296825\"" Oct 9 07:24:30.987733 containerd[1462]: time="2024-10-09T07:24:30.987640294Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 07:24:31.291443 kubelet[2566]: E1009 07:24:31.291412 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:31.300229 kubelet[2566]: I1009 07:24:31.300196 2566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-s56zg" podStartSLOduration=1.300156533 podStartE2EDuration="1.300156533s" podCreationTimestamp="2024-10-09 07:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:24:31.299863309 +0000 UTC m=+15.115357527" watchObservedRunningTime="2024-10-09 07:24:31.300156533 +0000 UTC m=+15.115650751" Oct 9 07:24:32.566480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount714850226.mount: Deactivated successfully. 
Oct 9 07:24:33.551360 containerd[1462]: time="2024-10-09T07:24:33.551309245Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:33.552189 containerd[1462]: time="2024-10-09T07:24:33.552149601Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136509" Oct 9 07:24:33.553432 containerd[1462]: time="2024-10-09T07:24:33.553402966Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:33.557548 containerd[1462]: time="2024-10-09T07:24:33.557504388Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:33.558032 containerd[1462]: time="2024-10-09T07:24:33.557961561Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.570272044s" Oct 9 07:24:33.558074 containerd[1462]: time="2024-10-09T07:24:33.558039879Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 07:24:33.560195 containerd[1462]: time="2024-10-09T07:24:33.560086872Z" level=info msg="CreateContainer within sandbox \"45fd44a99871a7bd3b019593695603418cb5f16252e86a7fc3a8d74ac3296825\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 07:24:33.573089 containerd[1462]: time="2024-10-09T07:24:33.573054298Z" level=info msg="CreateContainer within sandbox 
\"45fd44a99871a7bd3b019593695603418cb5f16252e86a7fc3a8d74ac3296825\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4db6d14100799e06faf03038937857a605ad4661a4df320a23b964bbffe00ec6\"" Oct 9 07:24:33.573401 containerd[1462]: time="2024-10-09T07:24:33.573373501Z" level=info msg="StartContainer for \"4db6d14100799e06faf03038937857a605ad4661a4df320a23b964bbffe00ec6\"" Oct 9 07:24:33.596063 systemd[1]: run-containerd-runc-k8s.io-4db6d14100799e06faf03038937857a605ad4661a4df320a23b964bbffe00ec6-runc.hT2SC0.mount: Deactivated successfully. Oct 9 07:24:33.610808 systemd[1]: Started cri-containerd-4db6d14100799e06faf03038937857a605ad4661a4df320a23b964bbffe00ec6.scope - libcontainer container 4db6d14100799e06faf03038937857a605ad4661a4df320a23b964bbffe00ec6. Oct 9 07:24:33.635039 containerd[1462]: time="2024-10-09T07:24:33.635000190Z" level=info msg="StartContainer for \"4db6d14100799e06faf03038937857a605ad4661a4df320a23b964bbffe00ec6\" returns successfully" Oct 9 07:24:34.304876 kubelet[2566]: I1009 07:24:34.304842 2566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-qkxqn" podStartSLOduration=1.7334824960000002 podStartE2EDuration="4.304787811s" podCreationTimestamp="2024-10-09 07:24:30 +0000 UTC" firstStartedPulling="2024-10-09 07:24:30.987286615 +0000 UTC m=+14.802780823" lastFinishedPulling="2024-10-09 07:24:33.55859193 +0000 UTC m=+17.374086138" observedRunningTime="2024-10-09 07:24:34.30455351 +0000 UTC m=+18.120047728" watchObservedRunningTime="2024-10-09 07:24:34.304787811 +0000 UTC m=+18.120282029" Oct 9 07:24:36.445487 kubelet[2566]: I1009 07:24:36.445436 2566 topology_manager.go:215] "Topology Admit Handler" podUID="18dcd355-8ea5-4c08-9c94-4f1eb38d83e3" podNamespace="calico-system" podName="calico-typha-84755ddb6d-2f78t" Oct 9 07:24:36.458880 systemd[1]: Created slice kubepods-besteffort-pod18dcd355_8ea5_4c08_9c94_4f1eb38d83e3.slice - libcontainer container 
kubepods-besteffort-pod18dcd355_8ea5_4c08_9c94_4f1eb38d83e3.slice. Oct 9 07:24:36.489974 kubelet[2566]: I1009 07:24:36.489784 2566 topology_manager.go:215] "Topology Admit Handler" podUID="8ac1f52d-0ea9-4287-8637-a1c17d55ae85" podNamespace="calico-system" podName="calico-node-pmqjl" Oct 9 07:24:36.497656 systemd[1]: Created slice kubepods-besteffort-pod8ac1f52d_0ea9_4287_8637_a1c17d55ae85.slice - libcontainer container kubepods-besteffort-pod8ac1f52d_0ea9_4287_8637_a1c17d55ae85.slice. Oct 9 07:24:36.576843 kubelet[2566]: I1009 07:24:36.576800 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18dcd355-8ea5-4c08-9c94-4f1eb38d83e3-tigera-ca-bundle\") pod \"calico-typha-84755ddb6d-2f78t\" (UID: \"18dcd355-8ea5-4c08-9c94-4f1eb38d83e3\") " pod="calico-system/calico-typha-84755ddb6d-2f78t" Oct 9 07:24:36.576843 kubelet[2566]: I1009 07:24:36.576840 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnj9g\" (UniqueName: \"kubernetes.io/projected/18dcd355-8ea5-4c08-9c94-4f1eb38d83e3-kube-api-access-wnj9g\") pod \"calico-typha-84755ddb6d-2f78t\" (UID: \"18dcd355-8ea5-4c08-9c94-4f1eb38d83e3\") " pod="calico-system/calico-typha-84755ddb6d-2f78t" Oct 9 07:24:36.577025 kubelet[2566]: I1009 07:24:36.576888 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/18dcd355-8ea5-4c08-9c94-4f1eb38d83e3-typha-certs\") pod \"calico-typha-84755ddb6d-2f78t\" (UID: \"18dcd355-8ea5-4c08-9c94-4f1eb38d83e3\") " pod="calico-system/calico-typha-84755ddb6d-2f78t" Oct 9 07:24:36.601166 kubelet[2566]: I1009 07:24:36.601121 2566 topology_manager.go:215] "Topology Admit Handler" podUID="ebf1fe33-16c6-4476-9371-316390576226" podNamespace="calico-system" podName="csi-node-driver-hsltf" Oct 9 07:24:36.601465 kubelet[2566]: E1009 
07:24:36.601413 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hsltf" podUID="ebf1fe33-16c6-4476-9371-316390576226" Oct 9 07:24:36.678074 kubelet[2566]: I1009 07:24:36.678028 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ebf1fe33-16c6-4476-9371-316390576226-registration-dir\") pod \"csi-node-driver-hsltf\" (UID: \"ebf1fe33-16c6-4476-9371-316390576226\") " pod="calico-system/csi-node-driver-hsltf" Oct 9 07:24:36.678208 kubelet[2566]: I1009 07:24:36.678093 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ac1f52d-0ea9-4287-8637-a1c17d55ae85-xtables-lock\") pod \"calico-node-pmqjl\" (UID: \"8ac1f52d-0ea9-4287-8637-a1c17d55ae85\") " pod="calico-system/calico-node-pmqjl" Oct 9 07:24:36.678208 kubelet[2566]: I1009 07:24:36.678126 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8ac1f52d-0ea9-4287-8637-a1c17d55ae85-flexvol-driver-host\") pod \"calico-node-pmqjl\" (UID: \"8ac1f52d-0ea9-4287-8637-a1c17d55ae85\") " pod="calico-system/calico-node-pmqjl" Oct 9 07:24:36.678208 kubelet[2566]: I1009 07:24:36.678152 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5r7c\" (UniqueName: \"kubernetes.io/projected/8ac1f52d-0ea9-4287-8637-a1c17d55ae85-kube-api-access-m5r7c\") pod \"calico-node-pmqjl\" (UID: \"8ac1f52d-0ea9-4287-8637-a1c17d55ae85\") " pod="calico-system/calico-node-pmqjl" Oct 9 07:24:36.678208 kubelet[2566]: I1009 07:24:36.678176 2566 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebf1fe33-16c6-4476-9371-316390576226-kubelet-dir\") pod \"csi-node-driver-hsltf\" (UID: \"ebf1fe33-16c6-4476-9371-316390576226\") " pod="calico-system/csi-node-driver-hsltf" Oct 9 07:24:36.678208 kubelet[2566]: I1009 07:24:36.678202 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ac1f52d-0ea9-4287-8637-a1c17d55ae85-tigera-ca-bundle\") pod \"calico-node-pmqjl\" (UID: \"8ac1f52d-0ea9-4287-8637-a1c17d55ae85\") " pod="calico-system/calico-node-pmqjl" Oct 9 07:24:36.678333 kubelet[2566]: I1009 07:24:36.678238 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8ac1f52d-0ea9-4287-8637-a1c17d55ae85-var-run-calico\") pod \"calico-node-pmqjl\" (UID: \"8ac1f52d-0ea9-4287-8637-a1c17d55ae85\") " pod="calico-system/calico-node-pmqjl" Oct 9 07:24:36.678333 kubelet[2566]: I1009 07:24:36.678294 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ebf1fe33-16c6-4476-9371-316390576226-socket-dir\") pod \"csi-node-driver-hsltf\" (UID: \"ebf1fe33-16c6-4476-9371-316390576226\") " pod="calico-system/csi-node-driver-hsltf" Oct 9 07:24:36.678333 kubelet[2566]: I1009 07:24:36.678329 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8ac1f52d-0ea9-4287-8637-a1c17d55ae85-node-certs\") pod \"calico-node-pmqjl\" (UID: \"8ac1f52d-0ea9-4287-8637-a1c17d55ae85\") " pod="calico-system/calico-node-pmqjl" Oct 9 07:24:36.678711 kubelet[2566]: I1009 07:24:36.678659 2566 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8ac1f52d-0ea9-4287-8637-a1c17d55ae85-var-lib-calico\") pod \"calico-node-pmqjl\" (UID: \"8ac1f52d-0ea9-4287-8637-a1c17d55ae85\") " pod="calico-system/calico-node-pmqjl" Oct 9 07:24:36.678711 kubelet[2566]: I1009 07:24:36.678700 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ebf1fe33-16c6-4476-9371-316390576226-varrun\") pod \"csi-node-driver-hsltf\" (UID: \"ebf1fe33-16c6-4476-9371-316390576226\") " pod="calico-system/csi-node-driver-hsltf" Oct 9 07:24:36.678711 kubelet[2566]: I1009 07:24:36.678719 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ac1f52d-0ea9-4287-8637-a1c17d55ae85-lib-modules\") pod \"calico-node-pmqjl\" (UID: \"8ac1f52d-0ea9-4287-8637-a1c17d55ae85\") " pod="calico-system/calico-node-pmqjl" Oct 9 07:24:36.678911 kubelet[2566]: I1009 07:24:36.678735 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8ac1f52d-0ea9-4287-8637-a1c17d55ae85-policysync\") pod \"calico-node-pmqjl\" (UID: \"8ac1f52d-0ea9-4287-8637-a1c17d55ae85\") " pod="calico-system/calico-node-pmqjl" Oct 9 07:24:36.678911 kubelet[2566]: I1009 07:24:36.678763 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8ac1f52d-0ea9-4287-8637-a1c17d55ae85-cni-net-dir\") pod \"calico-node-pmqjl\" (UID: \"8ac1f52d-0ea9-4287-8637-a1c17d55ae85\") " pod="calico-system/calico-node-pmqjl" Oct 9 07:24:36.678911 kubelet[2566]: I1009 07:24:36.678780 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/8ac1f52d-0ea9-4287-8637-a1c17d55ae85-cni-log-dir\") pod \"calico-node-pmqjl\" (UID: \"8ac1f52d-0ea9-4287-8637-a1c17d55ae85\") " pod="calico-system/calico-node-pmqjl" Oct 9 07:24:36.678911 kubelet[2566]: I1009 07:24:36.678798 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8ac1f52d-0ea9-4287-8637-a1c17d55ae85-cni-bin-dir\") pod \"calico-node-pmqjl\" (UID: \"8ac1f52d-0ea9-4287-8637-a1c17d55ae85\") " pod="calico-system/calico-node-pmqjl" Oct 9 07:24:36.678911 kubelet[2566]: I1009 07:24:36.678817 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh5xm\" (UniqueName: \"kubernetes.io/projected/ebf1fe33-16c6-4476-9371-316390576226-kube-api-access-sh5xm\") pod \"csi-node-driver-hsltf\" (UID: \"ebf1fe33-16c6-4476-9371-316390576226\") " pod="calico-system/csi-node-driver-hsltf" Oct 9 07:24:36.766315 kubelet[2566]: E1009 07:24:36.766057 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:36.766726 containerd[1462]: time="2024-10-09T07:24:36.766665682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84755ddb6d-2f78t,Uid:18dcd355-8ea5-4c08-9c94-4f1eb38d83e3,Namespace:calico-system,Attempt:0,}" Oct 9 07:24:36.785262 kubelet[2566]: E1009 07:24:36.785209 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.785262 kubelet[2566]: W1009 07:24:36.785234 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.785532 kubelet[2566]: E1009 07:24:36.785273 2566 plugins.go:730] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.785693 kubelet[2566]: E1009 07:24:36.785574 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.785693 kubelet[2566]: W1009 07:24:36.785584 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.785693 kubelet[2566]: E1009 07:24:36.785604 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.786001 kubelet[2566]: E1009 07:24:36.785957 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.786001 kubelet[2566]: W1009 07:24:36.785974 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.786001 kubelet[2566]: E1009 07:24:36.785996 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:36.787124 kubelet[2566]: E1009 07:24:36.787106 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.787124 kubelet[2566]: W1009 07:24:36.787124 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.787208 kubelet[2566]: E1009 07:24:36.787152 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.788705 kubelet[2566]: E1009 07:24:36.787551 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.788705 kubelet[2566]: W1009 07:24:36.787726 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.788705 kubelet[2566]: E1009 07:24:36.787767 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:36.788903 kubelet[2566]: E1009 07:24:36.788827 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.788903 kubelet[2566]: W1009 07:24:36.788838 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.788972 kubelet[2566]: E1009 07:24:36.788926 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.790018 kubelet[2566]: E1009 07:24:36.789988 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.790018 kubelet[2566]: W1009 07:24:36.790002 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.790352 kubelet[2566]: E1009 07:24:36.790336 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.790352 kubelet[2566]: W1009 07:24:36.790350 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.791337 kubelet[2566]: E1009 07:24:36.791204 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.791337 kubelet[2566]: W1009 07:24:36.791217 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Oct 9 07:24:36.792527 kubelet[2566]: E1009 07:24:36.792430 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.792527 kubelet[2566]: E1009 07:24:36.792477 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.792527 kubelet[2566]: E1009 07:24:36.792494 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.793358 kubelet[2566]: E1009 07:24:36.792757 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.793358 kubelet[2566]: W1009 07:24:36.792774 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.793638 kubelet[2566]: E1009 07:24:36.793599 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:36.793779 kubelet[2566]: E1009 07:24:36.793716 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.793779 kubelet[2566]: W1009 07:24:36.793733 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.795164 kubelet[2566]: E1009 07:24:36.794804 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.795164 kubelet[2566]: E1009 07:24:36.794985 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.795164 kubelet[2566]: W1009 07:24:36.794995 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.795164 kubelet[2566]: E1009 07:24:36.795074 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:36.795991 kubelet[2566]: E1009 07:24:36.795252 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.795991 kubelet[2566]: W1009 07:24:36.795260 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.795991 kubelet[2566]: E1009 07:24:36.795374 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.795991 kubelet[2566]: E1009 07:24:36.795546 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.795991 kubelet[2566]: W1009 07:24:36.795554 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.795991 kubelet[2566]: E1009 07:24:36.795670 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:36.797767 kubelet[2566]: E1009 07:24:36.796173 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.797767 kubelet[2566]: W1009 07:24:36.796181 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.797767 kubelet[2566]: E1009 07:24:36.796296 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.797767 kubelet[2566]: E1009 07:24:36.796450 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.797767 kubelet[2566]: W1009 07:24:36.796458 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.797767 kubelet[2566]: E1009 07:24:36.796498 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:36.797767 kubelet[2566]: E1009 07:24:36.797165 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.797767 kubelet[2566]: W1009 07:24:36.797173 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.797767 kubelet[2566]: E1009 07:24:36.797262 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.799027 kubelet[2566]: E1009 07:24:36.797799 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.799027 kubelet[2566]: W1009 07:24:36.797810 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.799027 kubelet[2566]: E1009 07:24:36.797955 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:36.799027 kubelet[2566]: E1009 07:24:36.798366 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.799027 kubelet[2566]: W1009 07:24:36.798395 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.799027 kubelet[2566]: E1009 07:24:36.798483 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.799027 kubelet[2566]: E1009 07:24:36.798725 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.799027 kubelet[2566]: W1009 07:24:36.798734 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.799027 kubelet[2566]: E1009 07:24:36.798835 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:36.799233 kubelet[2566]: E1009 07:24:36.799055 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.799233 kubelet[2566]: W1009 07:24:36.799064 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.799233 kubelet[2566]: E1009 07:24:36.799137 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.800038 kubelet[2566]: E1009 07:24:36.799349 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.800038 kubelet[2566]: W1009 07:24:36.799361 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.800038 kubelet[2566]: E1009 07:24:36.799426 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:36.800038 kubelet[2566]: E1009 07:24:36.799634 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.800038 kubelet[2566]: W1009 07:24:36.799641 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.800038 kubelet[2566]: E1009 07:24:36.799714 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.800038 kubelet[2566]: E1009 07:24:36.799966 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.800038 kubelet[2566]: W1009 07:24:36.799973 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.800038 kubelet[2566]: E1009 07:24:36.799983 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:36.806079 kubelet[2566]: E1009 07:24:36.806052 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:36.806079 kubelet[2566]: W1009 07:24:36.806075 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:36.806166 kubelet[2566]: E1009 07:24:36.806122 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:36.807000 containerd[1462]: time="2024-10-09T07:24:36.806831649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:24:36.807000 containerd[1462]: time="2024-10-09T07:24:36.806933270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:36.807063 containerd[1462]: time="2024-10-09T07:24:36.806962776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:24:36.807084 containerd[1462]: time="2024-10-09T07:24:36.807040562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:36.829866 systemd[1]: Started cri-containerd-7a7f15455cac585470025b939f188154404f0c9a920ea4d697b21f4567b7220a.scope - libcontainer container 7a7f15455cac585470025b939f188154404f0c9a920ea4d697b21f4567b7220a. 
Oct 9 07:24:36.874371 containerd[1462]: time="2024-10-09T07:24:36.874325129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84755ddb6d-2f78t,Uid:18dcd355-8ea5-4c08-9c94-4f1eb38d83e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a7f15455cac585470025b939f188154404f0c9a920ea4d697b21f4567b7220a\"" Oct 9 07:24:36.875295 kubelet[2566]: E1009 07:24:36.875137 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:36.876394 containerd[1462]: time="2024-10-09T07:24:36.876363341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 07:24:37.101020 kubelet[2566]: E1009 07:24:37.100893 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:37.103255 containerd[1462]: time="2024-10-09T07:24:37.103192251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pmqjl,Uid:8ac1f52d-0ea9-4287-8637-a1c17d55ae85,Namespace:calico-system,Attempt:0,}" Oct 9 07:24:37.129900 containerd[1462]: time="2024-10-09T07:24:37.129664434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:24:37.129900 containerd[1462]: time="2024-10-09T07:24:37.129775123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:37.129900 containerd[1462]: time="2024-10-09T07:24:37.129799769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:24:37.129900 containerd[1462]: time="2024-10-09T07:24:37.129816971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:37.148818 systemd[1]: Started cri-containerd-bbfd79ec9c0b4d6c2d29c61452e47085df79946b1b775209bee05c9b5e7d422f.scope - libcontainer container bbfd79ec9c0b4d6c2d29c61452e47085df79946b1b775209bee05c9b5e7d422f. Oct 9 07:24:37.173015 containerd[1462]: time="2024-10-09T07:24:37.172915621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pmqjl,Uid:8ac1f52d-0ea9-4287-8637-a1c17d55ae85,Namespace:calico-system,Attempt:0,} returns sandbox id \"bbfd79ec9c0b4d6c2d29c61452e47085df79946b1b775209bee05c9b5e7d422f\"" Oct 9 07:24:37.173601 kubelet[2566]: E1009 07:24:37.173574 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:38.260943 kubelet[2566]: E1009 07:24:38.260890 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hsltf" podUID="ebf1fe33-16c6-4476-9371-316390576226" Oct 9 07:24:38.876273 containerd[1462]: time="2024-10-09T07:24:38.876224372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:38.877258 containerd[1462]: time="2024-10-09T07:24:38.877217624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 07:24:38.878603 containerd[1462]: time="2024-10-09T07:24:38.878558560Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:38.880995 containerd[1462]: time="2024-10-09T07:24:38.880945177Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:38.881606 containerd[1462]: time="2024-10-09T07:24:38.881570716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.005174091s" Oct 9 07:24:38.881655 containerd[1462]: time="2024-10-09T07:24:38.881611102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 07:24:38.882111 containerd[1462]: time="2024-10-09T07:24:38.882081989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 07:24:38.889154 containerd[1462]: time="2024-10-09T07:24:38.888802351Z" level=info msg="CreateContainer within sandbox \"7a7f15455cac585470025b939f188154404f0c9a920ea4d697b21f4567b7220a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 07:24:38.905763 containerd[1462]: time="2024-10-09T07:24:38.905714416Z" level=info msg="CreateContainer within sandbox \"7a7f15455cac585470025b939f188154404f0c9a920ea4d697b21f4567b7220a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"19e38c7de0224e21c6336995f56853240d7af06b21dd649746a837e6f733201e\"" Oct 9 07:24:38.906364 containerd[1462]: time="2024-10-09T07:24:38.906322161Z" level=info msg="StartContainer for \"19e38c7de0224e21c6336995f56853240d7af06b21dd649746a837e6f733201e\"" Oct 9 07:24:38.933812 systemd[1]: Started cri-containerd-19e38c7de0224e21c6336995f56853240d7af06b21dd649746a837e6f733201e.scope - libcontainer container 
19e38c7de0224e21c6336995f56853240d7af06b21dd649746a837e6f733201e. Oct 9 07:24:38.972599 containerd[1462]: time="2024-10-09T07:24:38.972546548Z" level=info msg="StartContainer for \"19e38c7de0224e21c6336995f56853240d7af06b21dd649746a837e6f733201e\" returns successfully" Oct 9 07:24:39.307933 kubelet[2566]: E1009 07:24:39.307898 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:39.398556 kubelet[2566]: E1009 07:24:39.398525 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.398556 kubelet[2566]: W1009 07:24:39.398544 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.398743 kubelet[2566]: E1009 07:24:39.398575 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.398815 kubelet[2566]: E1009 07:24:39.398794 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.398815 kubelet[2566]: W1009 07:24:39.398806 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.398815 kubelet[2566]: E1009 07:24:39.398817 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.399000 kubelet[2566]: E1009 07:24:39.398981 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.399000 kubelet[2566]: W1009 07:24:39.398990 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.399000 kubelet[2566]: E1009 07:24:39.398999 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.399175 kubelet[2566]: E1009 07:24:39.399157 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.399175 kubelet[2566]: W1009 07:24:39.399166 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.399175 kubelet[2566]: E1009 07:24:39.399174 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.399348 kubelet[2566]: E1009 07:24:39.399336 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.399348 kubelet[2566]: W1009 07:24:39.399344 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.399421 kubelet[2566]: E1009 07:24:39.399353 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.399530 kubelet[2566]: E1009 07:24:39.399507 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.399530 kubelet[2566]: W1009 07:24:39.399523 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.399609 kubelet[2566]: E1009 07:24:39.399534 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.399720 kubelet[2566]: E1009 07:24:39.399708 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.399720 kubelet[2566]: W1009 07:24:39.399716 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.399796 kubelet[2566]: E1009 07:24:39.399725 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.399900 kubelet[2566]: E1009 07:24:39.399888 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.399900 kubelet[2566]: W1009 07:24:39.399895 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.399967 kubelet[2566]: E1009 07:24:39.399904 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.400081 kubelet[2566]: E1009 07:24:39.400068 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.400081 kubelet[2566]: W1009 07:24:39.400077 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.400155 kubelet[2566]: E1009 07:24:39.400085 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.400252 kubelet[2566]: E1009 07:24:39.400240 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.400252 kubelet[2566]: W1009 07:24:39.400248 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.400327 kubelet[2566]: E1009 07:24:39.400256 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.400421 kubelet[2566]: E1009 07:24:39.400409 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.400421 kubelet[2566]: W1009 07:24:39.400417 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.400491 kubelet[2566]: E1009 07:24:39.400426 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.400604 kubelet[2566]: E1009 07:24:39.400594 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.400604 kubelet[2566]: W1009 07:24:39.400601 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.400776 kubelet[2566]: E1009 07:24:39.400610 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.400812 kubelet[2566]: E1009 07:24:39.400790 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.400812 kubelet[2566]: W1009 07:24:39.400797 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.400812 kubelet[2566]: E1009 07:24:39.400805 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.400981 kubelet[2566]: E1009 07:24:39.400965 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.400981 kubelet[2566]: W1009 07:24:39.400973 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.400981 kubelet[2566]: E1009 07:24:39.400981 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.401159 kubelet[2566]: E1009 07:24:39.401144 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.401159 kubelet[2566]: W1009 07:24:39.401152 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.401159 kubelet[2566]: E1009 07:24:39.401161 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.499220 kubelet[2566]: E1009 07:24:39.499188 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.499220 kubelet[2566]: W1009 07:24:39.499214 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.499406 kubelet[2566]: E1009 07:24:39.499239 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.499533 kubelet[2566]: E1009 07:24:39.499502 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.499533 kubelet[2566]: W1009 07:24:39.499524 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.499613 kubelet[2566]: E1009 07:24:39.499545 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.499809 kubelet[2566]: E1009 07:24:39.499794 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.499809 kubelet[2566]: W1009 07:24:39.499807 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.499893 kubelet[2566]: E1009 07:24:39.499825 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.500086 kubelet[2566]: E1009 07:24:39.500070 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.500086 kubelet[2566]: W1009 07:24:39.500081 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.500172 kubelet[2566]: E1009 07:24:39.500099 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.500342 kubelet[2566]: E1009 07:24:39.500320 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.500342 kubelet[2566]: W1009 07:24:39.500332 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.500424 kubelet[2566]: E1009 07:24:39.500351 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.500620 kubelet[2566]: E1009 07:24:39.500605 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.500620 kubelet[2566]: W1009 07:24:39.500616 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.500735 kubelet[2566]: E1009 07:24:39.500691 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.500879 kubelet[2566]: E1009 07:24:39.500863 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.500879 kubelet[2566]: W1009 07:24:39.500878 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.500979 kubelet[2566]: E1009 07:24:39.500918 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.501120 kubelet[2566]: E1009 07:24:39.501105 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.501120 kubelet[2566]: W1009 07:24:39.501117 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.501218 kubelet[2566]: E1009 07:24:39.501158 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.501340 kubelet[2566]: E1009 07:24:39.501326 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.501340 kubelet[2566]: W1009 07:24:39.501338 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.501437 kubelet[2566]: E1009 07:24:39.501358 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.501909 kubelet[2566]: E1009 07:24:39.501879 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.501909 kubelet[2566]: W1009 07:24:39.501899 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.501992 kubelet[2566]: E1009 07:24:39.501925 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.502228 kubelet[2566]: E1009 07:24:39.502210 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.502228 kubelet[2566]: W1009 07:24:39.502225 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.502314 kubelet[2566]: E1009 07:24:39.502247 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.502547 kubelet[2566]: E1009 07:24:39.502511 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.502547 kubelet[2566]: W1009 07:24:39.502535 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.502631 kubelet[2566]: E1009 07:24:39.502577 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.502758 kubelet[2566]: E1009 07:24:39.502744 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.502758 kubelet[2566]: W1009 07:24:39.502756 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.502846 kubelet[2566]: E1009 07:24:39.502787 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.503011 kubelet[2566]: E1009 07:24:39.502992 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.503011 kubelet[2566]: W1009 07:24:39.503008 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.503079 kubelet[2566]: E1009 07:24:39.503032 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.503334 kubelet[2566]: E1009 07:24:39.503316 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.503334 kubelet[2566]: W1009 07:24:39.503331 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.503402 kubelet[2566]: E1009 07:24:39.503351 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.503659 kubelet[2566]: E1009 07:24:39.503639 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.503659 kubelet[2566]: W1009 07:24:39.503654 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.503780 kubelet[2566]: E1009 07:24:39.503697 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:39.504045 kubelet[2566]: E1009 07:24:39.504021 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.504045 kubelet[2566]: W1009 07:24:39.504036 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.504118 kubelet[2566]: E1009 07:24:39.504051 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:24:39.504538 kubelet[2566]: E1009 07:24:39.504502 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:24:39.504538 kubelet[2566]: W1009 07:24:39.504525 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:24:39.504613 kubelet[2566]: E1009 07:24:39.504541 2566 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:24:40.121820 containerd[1462]: time="2024-10-09T07:24:40.121771931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:40.122535 containerd[1462]: time="2024-10-09T07:24:40.122456209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 07:24:40.123675 containerd[1462]: time="2024-10-09T07:24:40.123649677Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:40.125904 containerd[1462]: time="2024-10-09T07:24:40.125850361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:40.126372 containerd[1462]: time="2024-10-09T07:24:40.126339232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.244225482s" Oct 9 07:24:40.126413 containerd[1462]: time="2024-10-09T07:24:40.126369709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 07:24:40.127748 containerd[1462]: time="2024-10-09T07:24:40.127720573Z" level=info msg="CreateContainer within sandbox \"bbfd79ec9c0b4d6c2d29c61452e47085df79946b1b775209bee05c9b5e7d422f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 07:24:40.143305 containerd[1462]: time="2024-10-09T07:24:40.143251198Z" level=info msg="CreateContainer within sandbox \"bbfd79ec9c0b4d6c2d29c61452e47085df79946b1b775209bee05c9b5e7d422f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"31236d9c7e6d9b83de17caa481b040509ac69f920f61761801167963abfab0ff\"" Oct 9 07:24:40.143732 containerd[1462]: time="2024-10-09T07:24:40.143667162Z" level=info msg="StartContainer for \"31236d9c7e6d9b83de17caa481b040509ac69f920f61761801167963abfab0ff\"" Oct 9 07:24:40.173860 systemd[1]: Started cri-containerd-31236d9c7e6d9b83de17caa481b040509ac69f920f61761801167963abfab0ff.scope - libcontainer container 31236d9c7e6d9b83de17caa481b040509ac69f920f61761801167963abfab0ff. Oct 9 07:24:40.204894 containerd[1462]: time="2024-10-09T07:24:40.204851877Z" level=info msg="StartContainer for \"31236d9c7e6d9b83de17caa481b040509ac69f920f61761801167963abfab0ff\" returns successfully" Oct 9 07:24:40.216654 systemd[1]: cri-containerd-31236d9c7e6d9b83de17caa481b040509ac69f920f61761801167963abfab0ff.scope: Deactivated successfully. 
Oct 9 07:24:40.260567 kubelet[2566]: E1009 07:24:40.260472 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hsltf" podUID="ebf1fe33-16c6-4476-9371-316390576226" Oct 9 07:24:40.310116 kubelet[2566]: I1009 07:24:40.310068 2566 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:24:40.311016 kubelet[2566]: E1009 07:24:40.310401 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:40.311016 kubelet[2566]: E1009 07:24:40.310645 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:40.631468 kubelet[2566]: I1009 07:24:40.631420 2566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-84755ddb6d-2f78t" podStartSLOduration=2.625302871 podStartE2EDuration="4.631384634s" podCreationTimestamp="2024-10-09 07:24:36 +0000 UTC" firstStartedPulling="2024-10-09 07:24:36.875834184 +0000 UTC m=+20.691328412" lastFinishedPulling="2024-10-09 07:24:38.881915957 +0000 UTC m=+22.697410175" observedRunningTime="2024-10-09 07:24:39.316937709 +0000 UTC m=+23.132431927" watchObservedRunningTime="2024-10-09 07:24:40.631384634 +0000 UTC m=+24.446878852" Oct 9 07:24:40.644523 containerd[1462]: time="2024-10-09T07:24:40.644457521Z" level=info msg="shim disconnected" id=31236d9c7e6d9b83de17caa481b040509ac69f920f61761801167963abfab0ff namespace=k8s.io Oct 9 07:24:40.644523 containerd[1462]: time="2024-10-09T07:24:40.644518477Z" level=warning msg="cleaning up after shim disconnected" 
id=31236d9c7e6d9b83de17caa481b040509ac69f920f61761801167963abfab0ff namespace=k8s.io Oct 9 07:24:40.644523 containerd[1462]: time="2024-10-09T07:24:40.644527413Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:24:40.887069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31236d9c7e6d9b83de17caa481b040509ac69f920f61761801167963abfab0ff-rootfs.mount: Deactivated successfully. Oct 9 07:24:41.313303 kubelet[2566]: E1009 07:24:41.313278 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:41.319631 containerd[1462]: time="2024-10-09T07:24:41.319596852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 07:24:42.260614 kubelet[2566]: E1009 07:24:42.260583 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hsltf" podUID="ebf1fe33-16c6-4476-9371-316390576226" Oct 9 07:24:44.263995 kubelet[2566]: E1009 07:24:44.263952 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hsltf" podUID="ebf1fe33-16c6-4476-9371-316390576226" Oct 9 07:24:45.246062 containerd[1462]: time="2024-10-09T07:24:45.246014729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:45.246719 containerd[1462]: time="2024-10-09T07:24:45.246663871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 07:24:45.247707 containerd[1462]: 
time="2024-10-09T07:24:45.247672007Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:45.249718 containerd[1462]: time="2024-10-09T07:24:45.249672219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:45.250337 containerd[1462]: time="2024-10-09T07:24:45.250301322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 3.930664034s" Oct 9 07:24:45.250383 containerd[1462]: time="2024-10-09T07:24:45.250335978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 07:24:45.252871 containerd[1462]: time="2024-10-09T07:24:45.252844627Z" level=info msg="CreateContainer within sandbox \"bbfd79ec9c0b4d6c2d29c61452e47085df79946b1b775209bee05c9b5e7d422f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 07:24:45.269900 containerd[1462]: time="2024-10-09T07:24:45.269861274Z" level=info msg="CreateContainer within sandbox \"bbfd79ec9c0b4d6c2d29c61452e47085df79946b1b775209bee05c9b5e7d422f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"84657fa7188cbd25bb6acd243c66a2e1d9d4451d5e347f5a3645cf08c55f77c7\"" Oct 9 07:24:45.270276 containerd[1462]: time="2024-10-09T07:24:45.270255054Z" level=info msg="StartContainer for \"84657fa7188cbd25bb6acd243c66a2e1d9d4451d5e347f5a3645cf08c55f77c7\"" Oct 9 07:24:45.299805 systemd[1]: Started 
cri-containerd-84657fa7188cbd25bb6acd243c66a2e1d9d4451d5e347f5a3645cf08c55f77c7.scope - libcontainer container 84657fa7188cbd25bb6acd243c66a2e1d9d4451d5e347f5a3645cf08c55f77c7. Oct 9 07:24:45.330942 containerd[1462]: time="2024-10-09T07:24:45.330872283Z" level=info msg="StartContainer for \"84657fa7188cbd25bb6acd243c66a2e1d9d4451d5e347f5a3645cf08c55f77c7\" returns successfully" Oct 9 07:24:46.260977 kubelet[2566]: E1009 07:24:46.260941 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hsltf" podUID="ebf1fe33-16c6-4476-9371-316390576226" Oct 9 07:24:46.325864 kubelet[2566]: E1009 07:24:46.325834 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:46.618200 systemd[1]: cri-containerd-84657fa7188cbd25bb6acd243c66a2e1d9d4451d5e347f5a3645cf08c55f77c7.scope: Deactivated successfully. Oct 9 07:24:46.630576 kubelet[2566]: I1009 07:24:46.630504 2566 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 07:24:46.639040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84657fa7188cbd25bb6acd243c66a2e1d9d4451d5e347f5a3645cf08c55f77c7-rootfs.mount: Deactivated successfully. 
Oct 9 07:24:46.655568 kubelet[2566]: I1009 07:24:46.654894 2566 topology_manager.go:215] "Topology Admit Handler" podUID="00d4b847-5b1d-41da-b0cb-da401e4a82c9" podNamespace="kube-system" podName="coredns-76f75df574-42m82" Oct 9 07:24:46.662627 kubelet[2566]: I1009 07:24:46.659105 2566 topology_manager.go:215] "Topology Admit Handler" podUID="8f13c8aa-d4e2-4627-b65a-927e3b23dfe5" podNamespace="calico-system" podName="calico-kube-controllers-696496d9fb-275b5" Oct 9 07:24:46.662627 kubelet[2566]: I1009 07:24:46.659743 2566 topology_manager.go:215] "Topology Admit Handler" podUID="55b140d9-ccd9-4df3-a1c4-d69882a267d2" podNamespace="kube-system" podName="coredns-76f75df574-mlbnn" Oct 9 07:24:46.668030 systemd[1]: Created slice kubepods-burstable-pod00d4b847_5b1d_41da_b0cb_da401e4a82c9.slice - libcontainer container kubepods-burstable-pod00d4b847_5b1d_41da_b0cb_da401e4a82c9.slice. Oct 9 07:24:46.673107 systemd[1]: Created slice kubepods-besteffort-pod8f13c8aa_d4e2_4627_b65a_927e3b23dfe5.slice - libcontainer container kubepods-besteffort-pod8f13c8aa_d4e2_4627_b65a_927e3b23dfe5.slice. Oct 9 07:24:46.677316 systemd[1]: Created slice kubepods-burstable-pod55b140d9_ccd9_4df3_a1c4_d69882a267d2.slice - libcontainer container kubepods-burstable-pod55b140d9_ccd9_4df3_a1c4_d69882a267d2.slice. 
Oct 9 07:24:46.725340 kubelet[2566]: I1009 07:24:46.725303 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00d4b847-5b1d-41da-b0cb-da401e4a82c9-config-volume\") pod \"coredns-76f75df574-42m82\" (UID: \"00d4b847-5b1d-41da-b0cb-da401e4a82c9\") " pod="kube-system/coredns-76f75df574-42m82" Oct 9 07:24:46.725425 kubelet[2566]: I1009 07:24:46.725346 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55b140d9-ccd9-4df3-a1c4-d69882a267d2-config-volume\") pod \"coredns-76f75df574-mlbnn\" (UID: \"55b140d9-ccd9-4df3-a1c4-d69882a267d2\") " pod="kube-system/coredns-76f75df574-mlbnn" Oct 9 07:24:46.725425 kubelet[2566]: I1009 07:24:46.725385 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf2hh\" (UniqueName: \"kubernetes.io/projected/8f13c8aa-d4e2-4627-b65a-927e3b23dfe5-kube-api-access-vf2hh\") pod \"calico-kube-controllers-696496d9fb-275b5\" (UID: \"8f13c8aa-d4e2-4627-b65a-927e3b23dfe5\") " pod="calico-system/calico-kube-controllers-696496d9fb-275b5" Oct 9 07:24:46.725590 kubelet[2566]: I1009 07:24:46.725545 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5nx8\" (UniqueName: \"kubernetes.io/projected/55b140d9-ccd9-4df3-a1c4-d69882a267d2-kube-api-access-f5nx8\") pod \"coredns-76f75df574-mlbnn\" (UID: \"55b140d9-ccd9-4df3-a1c4-d69882a267d2\") " pod="kube-system/coredns-76f75df574-mlbnn" Oct 9 07:24:46.725638 kubelet[2566]: I1009 07:24:46.725617 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr69m\" (UniqueName: \"kubernetes.io/projected/00d4b847-5b1d-41da-b0cb-da401e4a82c9-kube-api-access-hr69m\") pod \"coredns-76f75df574-42m82\" (UID: 
\"00d4b847-5b1d-41da-b0cb-da401e4a82c9\") " pod="kube-system/coredns-76f75df574-42m82" Oct 9 07:24:46.725665 kubelet[2566]: I1009 07:24:46.725651 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f13c8aa-d4e2-4627-b65a-927e3b23dfe5-tigera-ca-bundle\") pod \"calico-kube-controllers-696496d9fb-275b5\" (UID: \"8f13c8aa-d4e2-4627-b65a-927e3b23dfe5\") " pod="calico-system/calico-kube-controllers-696496d9fb-275b5" Oct 9 07:24:46.967550 systemd[1]: Started sshd@9-10.0.0.107:22-10.0.0.1:43716.service - OpenSSH per-connection server daemon (10.0.0.1:43716). Oct 9 07:24:46.971411 kubelet[2566]: E1009 07:24:46.971308 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:46.980361 kubelet[2566]: E1009 07:24:46.980338 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:46.987201 containerd[1462]: time="2024-10-09T07:24:46.987122867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-696496d9fb-275b5,Uid:8f13c8aa-d4e2-4627-b65a-927e3b23dfe5,Namespace:calico-system,Attempt:0,}" Oct 9 07:24:46.987868 containerd[1462]: time="2024-10-09T07:24:46.987126874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-42m82,Uid:00d4b847-5b1d-41da-b0cb-da401e4a82c9,Namespace:kube-system,Attempt:0,}" Oct 9 07:24:46.988206 containerd[1462]: time="2024-10-09T07:24:46.987129369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlbnn,Uid:55b140d9-ccd9-4df3-a1c4-d69882a267d2,Namespace:kube-system,Attempt:0,}" Oct 9 07:24:46.989583 containerd[1462]: time="2024-10-09T07:24:46.989543329Z" level=info msg="shim disconnected" 
id=84657fa7188cbd25bb6acd243c66a2e1d9d4451d5e347f5a3645cf08c55f77c7 namespace=k8s.io Oct 9 07:24:46.989640 containerd[1462]: time="2024-10-09T07:24:46.989585278Z" level=warning msg="cleaning up after shim disconnected" id=84657fa7188cbd25bb6acd243c66a2e1d9d4451d5e347f5a3645cf08c55f77c7 namespace=k8s.io Oct 9 07:24:46.989640 containerd[1462]: time="2024-10-09T07:24:46.989594996Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:24:47.022539 sshd[3293]: Accepted publickey for core from 10.0.0.1 port 43716 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:24:47.025194 sshd[3293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:24:47.035705 systemd-logind[1441]: New session 10 of user core. Oct 9 07:24:47.043357 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 07:24:47.085423 containerd[1462]: time="2024-10-09T07:24:47.085356638Z" level=error msg="Failed to destroy network for sandbox \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.085711 containerd[1462]: time="2024-10-09T07:24:47.085656822Z" level=error msg="Failed to destroy network for sandbox \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.085891 containerd[1462]: time="2024-10-09T07:24:47.085866497Z" level=error msg="encountered an error cleaning up failed sandbox \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.085941 containerd[1462]: time="2024-10-09T07:24:47.085916751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlbnn,Uid:55b140d9-ccd9-4df3-a1c4-d69882a267d2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.086147 containerd[1462]: time="2024-10-09T07:24:47.086122528Z" level=error msg="encountered an error cleaning up failed sandbox \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.086180 kubelet[2566]: E1009 07:24:47.086146 2566 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.086236 kubelet[2566]: E1009 07:24:47.086202 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-mlbnn" Oct 9 07:24:47.086236 kubelet[2566]: E1009 07:24:47.086222 2566 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlbnn" Oct 9 07:24:47.086421 containerd[1462]: time="2024-10-09T07:24:47.086182531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-696496d9fb-275b5,Uid:8f13c8aa-d4e2-4627-b65a-927e3b23dfe5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.086478 kubelet[2566]: E1009 07:24:47.086270 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mlbnn_kube-system(55b140d9-ccd9-4df3-a1c4-d69882a267d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mlbnn_kube-system(55b140d9-ccd9-4df3-a1c4-d69882a267d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mlbnn" podUID="55b140d9-ccd9-4df3-a1c4-d69882a267d2" Oct 9 07:24:47.086601 kubelet[2566]: E1009 07:24:47.086578 2566 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.086601 kubelet[2566]: E1009 07:24:47.086610 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-696496d9fb-275b5" Oct 9 07:24:47.086700 kubelet[2566]: E1009 07:24:47.086626 2566 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-696496d9fb-275b5" Oct 9 07:24:47.086700 kubelet[2566]: E1009 07:24:47.086662 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-696496d9fb-275b5_calico-system(8f13c8aa-d4e2-4627-b65a-927e3b23dfe5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-696496d9fb-275b5_calico-system(8f13c8aa-d4e2-4627-b65a-927e3b23dfe5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-696496d9fb-275b5" podUID="8f13c8aa-d4e2-4627-b65a-927e3b23dfe5" Oct 9 07:24:47.089063 containerd[1462]: time="2024-10-09T07:24:47.089005569Z" level=error msg="Failed to destroy network for sandbox \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.089418 containerd[1462]: time="2024-10-09T07:24:47.089381726Z" level=error msg="encountered an error cleaning up failed sandbox \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.089451 containerd[1462]: time="2024-10-09T07:24:47.089432131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-42m82,Uid:00d4b847-5b1d-41da-b0cb-da401e4a82c9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.089671 kubelet[2566]: E1009 07:24:47.089630 2566 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Oct 9 07:24:47.089821 kubelet[2566]: E1009 07:24:47.089714 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-42m82" Oct 9 07:24:47.089821 kubelet[2566]: E1009 07:24:47.089737 2566 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-42m82" Oct 9 07:24:47.089821 kubelet[2566]: E1009 07:24:47.089794 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-42m82_kube-system(00d4b847-5b1d-41da-b0cb-da401e4a82c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-42m82_kube-system(00d4b847-5b1d-41da-b0cb-da401e4a82c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-42m82" podUID="00d4b847-5b1d-41da-b0cb-da401e4a82c9" Oct 9 07:24:47.164217 sshd[3293]: pam_unix(sshd:session): session closed for user core Oct 9 07:24:47.168456 systemd[1]: sshd@9-10.0.0.107:22-10.0.0.1:43716.service: Deactivated 
successfully. Oct 9 07:24:47.170551 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 07:24:47.171223 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Oct 9 07:24:47.172063 systemd-logind[1441]: Removed session 10. Oct 9 07:24:47.329464 kubelet[2566]: E1009 07:24:47.329435 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:47.330129 containerd[1462]: time="2024-10-09T07:24:47.330085685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 07:24:47.330503 kubelet[2566]: I1009 07:24:47.330479 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:24:47.331323 containerd[1462]: time="2024-10-09T07:24:47.331023088Z" level=info msg="StopPodSandbox for \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\"" Oct 9 07:24:47.331323 containerd[1462]: time="2024-10-09T07:24:47.331225589Z" level=info msg="Ensure that sandbox 3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa in task-service has been cleanup successfully" Oct 9 07:24:47.332139 kubelet[2566]: I1009 07:24:47.331905 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:24:47.332593 containerd[1462]: time="2024-10-09T07:24:47.332553917Z" level=info msg="StopPodSandbox for \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\"" Oct 9 07:24:47.332764 containerd[1462]: time="2024-10-09T07:24:47.332740317Z" level=info msg="Ensure that sandbox a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24 in task-service has been cleanup successfully" Oct 9 07:24:47.333522 kubelet[2566]: I1009 07:24:47.333332 2566 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:24:47.333794 containerd[1462]: time="2024-10-09T07:24:47.333771847Z" level=info msg="StopPodSandbox for \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\"" Oct 9 07:24:47.335161 containerd[1462]: time="2024-10-09T07:24:47.335095656Z" level=info msg="Ensure that sandbox ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6 in task-service has been cleanup successfully" Oct 9 07:24:47.371207 containerd[1462]: time="2024-10-09T07:24:47.370629369Z" level=error msg="StopPodSandbox for \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\" failed" error="failed to destroy network for sandbox \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.371319 kubelet[2566]: E1009 07:24:47.370942 2566 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:24:47.371319 kubelet[2566]: E1009 07:24:47.371017 2566 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24"} Oct 9 07:24:47.371319 kubelet[2566]: E1009 07:24:47.371053 2566 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00d4b847-5b1d-41da-b0cb-da401e4a82c9\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:24:47.371319 kubelet[2566]: E1009 07:24:47.371081 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00d4b847-5b1d-41da-b0cb-da401e4a82c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-42m82" podUID="00d4b847-5b1d-41da-b0cb-da401e4a82c9" Oct 9 07:24:47.371510 kubelet[2566]: E1009 07:24:47.371474 2566 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:24:47.371510 kubelet[2566]: E1009 07:24:47.371492 2566 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa"} Oct 9 07:24:47.371558 containerd[1462]: time="2024-10-09T07:24:47.371301733Z" level=error msg="StopPodSandbox for \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\" failed" error="failed to destroy network for sandbox 
\"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.371583 kubelet[2566]: E1009 07:24:47.371516 2566 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55b140d9-ccd9-4df3-a1c4-d69882a267d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:24:47.371583 kubelet[2566]: E1009 07:24:47.371545 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55b140d9-ccd9-4df3-a1c4-d69882a267d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mlbnn" podUID="55b140d9-ccd9-4df3-a1c4-d69882a267d2" Oct 9 07:24:47.373632 containerd[1462]: time="2024-10-09T07:24:47.373578475Z" level=error msg="StopPodSandbox for \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\" failed" error="failed to destroy network for sandbox \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:47.373817 kubelet[2566]: E1009 07:24:47.373793 
2566 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:24:47.373817 kubelet[2566]: E1009 07:24:47.373818 2566 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6"} Oct 9 07:24:47.373885 kubelet[2566]: E1009 07:24:47.373846 2566 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f13c8aa-d4e2-4627-b65a-927e3b23dfe5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:24:47.373885 kubelet[2566]: E1009 07:24:47.373868 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f13c8aa-d4e2-4627-b65a-927e3b23dfe5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-696496d9fb-275b5" podUID="8f13c8aa-d4e2-4627-b65a-927e3b23dfe5" Oct 9 07:24:47.639587 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24-shm.mount: Deactivated successfully. Oct 9 07:24:47.639706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6-shm.mount: Deactivated successfully. Oct 9 07:24:48.265978 systemd[1]: Created slice kubepods-besteffort-podebf1fe33_16c6_4476_9371_316390576226.slice - libcontainer container kubepods-besteffort-podebf1fe33_16c6_4476_9371_316390576226.slice. Oct 9 07:24:48.267839 containerd[1462]: time="2024-10-09T07:24:48.267801187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hsltf,Uid:ebf1fe33-16c6-4476-9371-316390576226,Namespace:calico-system,Attempt:0,}" Oct 9 07:24:48.322431 containerd[1462]: time="2024-10-09T07:24:48.322374724Z" level=error msg="Failed to destroy network for sandbox \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:48.324568 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8-shm.mount: Deactivated successfully. 
Oct 9 07:24:48.325478 containerd[1462]: time="2024-10-09T07:24:48.325434556Z" level=error msg="encountered an error cleaning up failed sandbox \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:48.325522 containerd[1462]: time="2024-10-09T07:24:48.325487044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hsltf,Uid:ebf1fe33-16c6-4476-9371-316390576226,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:48.325752 kubelet[2566]: E1009 07:24:48.325727 2566 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:48.325819 kubelet[2566]: E1009 07:24:48.325793 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hsltf" Oct 9 07:24:48.325859 kubelet[2566]: E1009 07:24:48.325834 2566 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hsltf" Oct 9 07:24:48.325929 kubelet[2566]: E1009 07:24:48.325914 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hsltf_calico-system(ebf1fe33-16c6-4476-9371-316390576226)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hsltf_calico-system(ebf1fe33-16c6-4476-9371-316390576226)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hsltf" podUID="ebf1fe33-16c6-4476-9371-316390576226" Oct 9 07:24:48.335426 kubelet[2566]: I1009 07:24:48.335406 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:24:48.336013 containerd[1462]: time="2024-10-09T07:24:48.335981634Z" level=info msg="StopPodSandbox for \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\"" Oct 9 07:24:48.336179 containerd[1462]: time="2024-10-09T07:24:48.336158526Z" level=info msg="Ensure that sandbox fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8 in task-service has been cleanup successfully" Oct 9 07:24:48.361455 containerd[1462]: time="2024-10-09T07:24:48.361408295Z" level=error msg="StopPodSandbox for 
\"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\" failed" error="failed to destroy network for sandbox \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:24:48.361727 kubelet[2566]: E1009 07:24:48.361654 2566 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:24:48.361798 kubelet[2566]: E1009 07:24:48.361738 2566 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8"} Oct 9 07:24:48.361798 kubelet[2566]: E1009 07:24:48.361773 2566 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ebf1fe33-16c6-4476-9371-316390576226\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:24:48.361911 kubelet[2566]: E1009 07:24:48.361802 2566 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ebf1fe33-16c6-4476-9371-316390576226\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hsltf" podUID="ebf1fe33-16c6-4476-9371-316390576226" Oct 9 07:24:51.054267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount411426654.mount: Deactivated successfully. Oct 9 07:24:51.740069 containerd[1462]: time="2024-10-09T07:24:51.740011854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:51.740875 containerd[1462]: time="2024-10-09T07:24:51.740804003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 07:24:51.741892 containerd[1462]: time="2024-10-09T07:24:51.741857833Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:51.744002 containerd[1462]: time="2024-10-09T07:24:51.743949504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:24:51.744363 containerd[1462]: time="2024-10-09T07:24:51.744313709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 4.414181836s" Oct 9 07:24:51.744440 containerd[1462]: time="2024-10-09T07:24:51.744364594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" 
returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 07:24:51.752311 containerd[1462]: time="2024-10-09T07:24:51.752275440Z" level=info msg="CreateContainer within sandbox \"bbfd79ec9c0b4d6c2d29c61452e47085df79946b1b775209bee05c9b5e7d422f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 07:24:51.768746 containerd[1462]: time="2024-10-09T07:24:51.768675498Z" level=info msg="CreateContainer within sandbox \"bbfd79ec9c0b4d6c2d29c61452e47085df79946b1b775209bee05c9b5e7d422f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"42f2a7dd46fb5730ccca247398140e8375292943e58bfddabc8b49950767e1fb\"" Oct 9 07:24:51.769155 containerd[1462]: time="2024-10-09T07:24:51.769127818Z" level=info msg="StartContainer for \"42f2a7dd46fb5730ccca247398140e8375292943e58bfddabc8b49950767e1fb\"" Oct 9 07:24:51.836996 systemd[1]: Started cri-containerd-42f2a7dd46fb5730ccca247398140e8375292943e58bfddabc8b49950767e1fb.scope - libcontainer container 42f2a7dd46fb5730ccca247398140e8375292943e58bfddabc8b49950767e1fb. Oct 9 07:24:52.031584 containerd[1462]: time="2024-10-09T07:24:52.031528072Z" level=info msg="StartContainer for \"42f2a7dd46fb5730ccca247398140e8375292943e58bfddabc8b49950767e1fb\" returns successfully" Oct 9 07:24:52.052277 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 07:24:52.054047 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 9 07:24:52.178590 systemd[1]: Started sshd@10-10.0.0.107:22-10.0.0.1:50714.service - OpenSSH per-connection server daemon (10.0.0.1:50714). Oct 9 07:24:52.227090 sshd[3624]: Accepted publickey for core from 10.0.0.1 port 50714 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:24:52.229795 sshd[3624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:24:52.236314 systemd-logind[1441]: New session 11 of user core. 
Oct 9 07:24:52.246105 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 07:24:52.344044 kubelet[2566]: E1009 07:24:52.343938 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:52.372075 sshd[3624]: pam_unix(sshd:session): session closed for user core Oct 9 07:24:52.386128 systemd[1]: sshd@10-10.0.0.107:22-10.0.0.1:50714.service: Deactivated successfully. Oct 9 07:24:52.389025 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 07:24:52.392539 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Oct 9 07:24:52.393427 systemd-logind[1441]: Removed session 11. Oct 9 07:24:53.345778 kubelet[2566]: E1009 07:24:53.345748 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:56.887512 kubelet[2566]: I1009 07:24:56.887473 2566 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:24:56.888151 kubelet[2566]: E1009 07:24:56.888021 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:56.898121 kubelet[2566]: I1009 07:24:56.898077 2566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-pmqjl" podStartSLOduration=6.327781668 podStartE2EDuration="20.898022052s" podCreationTimestamp="2024-10-09 07:24:36 +0000 UTC" firstStartedPulling="2024-10-09 07:24:37.174363009 +0000 UTC m=+20.989857227" lastFinishedPulling="2024-10-09 07:24:51.744603393 +0000 UTC m=+35.560097611" observedRunningTime="2024-10-09 07:24:52.359926185 +0000 UTC m=+36.175420403" watchObservedRunningTime="2024-10-09 07:24:56.898022052 +0000 UTC m=+40.713516260" Oct 9 07:24:57.351345 
kubelet[2566]: E1009 07:24:57.351305 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:24:57.382755 systemd[1]: Started sshd@11-10.0.0.107:22-10.0.0.1:50720.service - OpenSSH per-connection server daemon (10.0.0.1:50720). Oct 9 07:24:57.450621 sshd[3872]: Accepted publickey for core from 10.0.0.1 port 50720 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:24:57.452063 sshd[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:24:57.455784 systemd-logind[1441]: New session 12 of user core. Oct 9 07:24:57.460803 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 07:24:57.586591 sshd[3872]: pam_unix(sshd:session): session closed for user core Oct 9 07:24:57.602988 systemd[1]: sshd@11-10.0.0.107:22-10.0.0.1:50720.service: Deactivated successfully. Oct 9 07:24:57.606356 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 07:24:57.610096 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit. Oct 9 07:24:57.620033 systemd[1]: Started sshd@12-10.0.0.107:22-10.0.0.1:50722.service - OpenSSH per-connection server daemon (10.0.0.1:50722). Oct 9 07:24:57.622165 systemd-logind[1441]: Removed session 12. Oct 9 07:24:57.627709 kernel: bpftool[3925]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 07:24:57.664427 sshd[3919]: Accepted publickey for core from 10.0.0.1 port 50722 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:24:57.668950 sshd[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:24:57.676355 systemd-logind[1441]: New session 13 of user core. Oct 9 07:24:57.683364 systemd[1]: Started session-13.scope - Session 13 of User core. 
Oct 9 07:24:57.834551 sshd[3919]: pam_unix(sshd:session): session closed for user core Oct 9 07:24:57.844749 systemd[1]: sshd@12-10.0.0.107:22-10.0.0.1:50722.service: Deactivated successfully. Oct 9 07:24:57.847525 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 07:24:57.850459 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit. Oct 9 07:24:57.859979 systemd[1]: Started sshd@13-10.0.0.107:22-10.0.0.1:50734.service - OpenSSH per-connection server daemon (10.0.0.1:50734). Oct 9 07:24:57.860947 systemd-logind[1441]: Removed session 13. Oct 9 07:24:57.891973 sshd[3959]: Accepted publickey for core from 10.0.0.1 port 50734 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:24:57.893564 sshd[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:24:57.898616 systemd-logind[1441]: New session 14 of user core. Oct 9 07:24:57.907221 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 07:24:57.916297 systemd-networkd[1397]: vxlan.calico: Link UP Oct 9 07:24:57.916306 systemd-networkd[1397]: vxlan.calico: Gained carrier Oct 9 07:24:58.019504 sshd[3959]: pam_unix(sshd:session): session closed for user core Oct 9 07:24:58.023145 systemd[1]: sshd@13-10.0.0.107:22-10.0.0.1:50734.service: Deactivated successfully. Oct 9 07:24:58.025493 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 07:24:58.027265 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit. Oct 9 07:24:58.028451 systemd-logind[1441]: Removed session 14. 
Oct 9 07:24:58.261748 containerd[1462]: time="2024-10-09T07:24:58.261695560Z" level=info msg="StopPodSandbox for \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\"" Oct 9 07:24:58.383721 containerd[1462]: 2024-10-09 07:24:58.310 [INFO][4057] k8s.go 608: Cleaning up netns ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:24:58.383721 containerd[1462]: 2024-10-09 07:24:58.311 [INFO][4057] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" iface="eth0" netns="/var/run/netns/cni-a2636ece-de9c-f7fc-1105-1ff810da2a11" Oct 9 07:24:58.383721 containerd[1462]: 2024-10-09 07:24:58.311 [INFO][4057] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" iface="eth0" netns="/var/run/netns/cni-a2636ece-de9c-f7fc-1105-1ff810da2a11" Oct 9 07:24:58.383721 containerd[1462]: 2024-10-09 07:24:58.312 [INFO][4057] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" iface="eth0" netns="/var/run/netns/cni-a2636ece-de9c-f7fc-1105-1ff810da2a11" Oct 9 07:24:58.383721 containerd[1462]: 2024-10-09 07:24:58.312 [INFO][4057] k8s.go 615: Releasing IP address(es) ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:24:58.383721 containerd[1462]: 2024-10-09 07:24:58.312 [INFO][4057] utils.go 188: Calico CNI releasing IP address ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:24:58.383721 containerd[1462]: 2024-10-09 07:24:58.367 [INFO][4066] ipam_plugin.go 417: Releasing address using handleID ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" HandleID="k8s-pod-network.ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Workload="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:24:58.383721 containerd[1462]: 2024-10-09 07:24:58.368 [INFO][4066] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:24:58.383721 containerd[1462]: 2024-10-09 07:24:58.368 [INFO][4066] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:24:58.383721 containerd[1462]: 2024-10-09 07:24:58.374 [WARNING][4066] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" HandleID="k8s-pod-network.ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Workload="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:24:58.383721 containerd[1462]: 2024-10-09 07:24:58.374 [INFO][4066] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" HandleID="k8s-pod-network.ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Workload="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:24:58.383721 containerd[1462]: 2024-10-09 07:24:58.376 [INFO][4066] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:24:58.383721 containerd[1462]: 2024-10-09 07:24:58.379 [INFO][4057] k8s.go 621: Teardown processing complete. ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:24:58.384358 containerd[1462]: time="2024-10-09T07:24:58.383893300Z" level=info msg="TearDown network for sandbox \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\" successfully" Oct 9 07:24:58.384358 containerd[1462]: time="2024-10-09T07:24:58.383918938Z" level=info msg="StopPodSandbox for \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\" returns successfully" Oct 9 07:24:58.387463 systemd[1]: run-netns-cni\x2da2636ece\x2dde9c\x2df7fc\x2d1105\x2d1ff810da2a11.mount: Deactivated successfully. 
Oct 9 07:24:58.391668 containerd[1462]: time="2024-10-09T07:24:58.391621442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-696496d9fb-275b5,Uid:8f13c8aa-d4e2-4627-b65a-927e3b23dfe5,Namespace:calico-system,Attempt:1,}" Oct 9 07:24:58.634795 systemd-networkd[1397]: cali93efd645eda: Link UP Oct 9 07:24:58.635736 systemd-networkd[1397]: cali93efd645eda: Gained carrier Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.447 [INFO][4074] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0 calico-kube-controllers-696496d9fb- calico-system 8f13c8aa-d4e2-4627-b65a-927e3b23dfe5 839 0 2024-10-09 07:24:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:696496d9fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-696496d9fb-275b5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali93efd645eda [] []}} ContainerID="8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" Namespace="calico-system" Pod="calico-kube-controllers-696496d9fb-275b5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-" Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.448 [INFO][4074] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" Namespace="calico-system" Pod="calico-kube-controllers-696496d9fb-275b5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.480 [INFO][4089] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" HandleID="k8s-pod-network.8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" Workload="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.487 [INFO][4089] ipam_plugin.go 270: Auto assigning IP ContainerID="8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" HandleID="k8s-pod-network.8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" Workload="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4db0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-696496d9fb-275b5", "timestamp":"2024-10-09 07:24:58.479998163 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.487 [INFO][4089] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.487 [INFO][4089] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.487 [INFO][4089] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.488 [INFO][4089] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" host="localhost" Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.492 [INFO][4089] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.495 [INFO][4089] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.496 [INFO][4089] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.498 [INFO][4089] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.498 [INFO][4089] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" host="localhost" Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.499 [INFO][4089] ipam.go 1685: Creating new handle: k8s-pod-network.8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.530 [INFO][4089] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" host="localhost" Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.628 [INFO][4089] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" host="localhost" Oct 9 
07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.628 [INFO][4089] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" host="localhost" Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.628 [INFO][4089] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:24:58.710333 containerd[1462]: 2024-10-09 07:24:58.628 [INFO][4089] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" HandleID="k8s-pod-network.8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" Workload="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:24:58.711071 containerd[1462]: 2024-10-09 07:24:58.632 [INFO][4074] k8s.go 386: Populated endpoint ContainerID="8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" Namespace="calico-system" Pod="calico-kube-controllers-696496d9fb-275b5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0", GenerateName:"calico-kube-controllers-696496d9fb-", Namespace:"calico-system", SelfLink:"", UID:"8f13c8aa-d4e2-4627-b65a-927e3b23dfe5", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"696496d9fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-696496d9fb-275b5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93efd645eda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:24:58.711071 containerd[1462]: 2024-10-09 07:24:58.632 [INFO][4074] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" Namespace="calico-system" Pod="calico-kube-controllers-696496d9fb-275b5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:24:58.711071 containerd[1462]: 2024-10-09 07:24:58.632 [INFO][4074] dataplane_linux.go 68: Setting the host side veth name to cali93efd645eda ContainerID="8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" Namespace="calico-system" Pod="calico-kube-controllers-696496d9fb-275b5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:24:58.711071 containerd[1462]: 2024-10-09 07:24:58.635 [INFO][4074] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" Namespace="calico-system" Pod="calico-kube-controllers-696496d9fb-275b5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:24:58.711071 containerd[1462]: 2024-10-09 07:24:58.635 [INFO][4074] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" Namespace="calico-system" 
Pod="calico-kube-controllers-696496d9fb-275b5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0", GenerateName:"calico-kube-controllers-696496d9fb-", Namespace:"calico-system", SelfLink:"", UID:"8f13c8aa-d4e2-4627-b65a-927e3b23dfe5", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"696496d9fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db", Pod:"calico-kube-controllers-696496d9fb-275b5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93efd645eda", MAC:"7a:63:e1:ed:66:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:24:58.711071 containerd[1462]: 2024-10-09 07:24:58.706 [INFO][4074] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db" Namespace="calico-system" Pod="calico-kube-controllers-696496d9fb-275b5" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:24:58.813726 containerd[1462]: time="2024-10-09T07:24:58.813598068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:24:58.813726 containerd[1462]: time="2024-10-09T07:24:58.813702344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:58.813898 containerd[1462]: time="2024-10-09T07:24:58.813727341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:24:58.813898 containerd[1462]: time="2024-10-09T07:24:58.813748561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:24:58.840828 systemd[1]: Started cri-containerd-8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db.scope - libcontainer container 8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db. 
Oct 9 07:24:58.854726 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:24:58.884255 containerd[1462]: time="2024-10-09T07:24:58.884209110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-696496d9fb-275b5,Uid:8f13c8aa-d4e2-4627-b65a-927e3b23dfe5,Namespace:calico-system,Attempt:1,} returns sandbox id \"8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db\"" Oct 9 07:24:58.885884 containerd[1462]: time="2024-10-09T07:24:58.885793465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 07:24:59.848912 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL Oct 9 07:25:00.040946 systemd-networkd[1397]: cali93efd645eda: Gained IPv6LL Oct 9 07:25:00.267845 containerd[1462]: time="2024-10-09T07:25:00.267792970Z" level=info msg="StopPodSandbox for \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\"" Oct 9 07:25:00.351028 containerd[1462]: 2024-10-09 07:25:00.309 [INFO][4167] k8s.go 608: Cleaning up netns ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:25:00.351028 containerd[1462]: 2024-10-09 07:25:00.309 [INFO][4167] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" iface="eth0" netns="/var/run/netns/cni-4a138510-aa27-3b51-a677-e587378b3205" Oct 9 07:25:00.351028 containerd[1462]: 2024-10-09 07:25:00.309 [INFO][4167] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" iface="eth0" netns="/var/run/netns/cni-4a138510-aa27-3b51-a677-e587378b3205" Oct 9 07:25:00.351028 containerd[1462]: 2024-10-09 07:25:00.310 [INFO][4167] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" iface="eth0" netns="/var/run/netns/cni-4a138510-aa27-3b51-a677-e587378b3205" Oct 9 07:25:00.351028 containerd[1462]: 2024-10-09 07:25:00.310 [INFO][4167] k8s.go 615: Releasing IP address(es) ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:25:00.351028 containerd[1462]: 2024-10-09 07:25:00.310 [INFO][4167] utils.go 188: Calico CNI releasing IP address ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:25:00.351028 containerd[1462]: 2024-10-09 07:25:00.340 [INFO][4175] ipam_plugin.go 417: Releasing address using handleID ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" HandleID="k8s-pod-network.3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Workload="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:00.351028 containerd[1462]: 2024-10-09 07:25:00.340 [INFO][4175] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:00.351028 containerd[1462]: 2024-10-09 07:25:00.340 [INFO][4175] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:25:00.351028 containerd[1462]: 2024-10-09 07:25:00.345 [WARNING][4175] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" HandleID="k8s-pod-network.3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Workload="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:00.351028 containerd[1462]: 2024-10-09 07:25:00.345 [INFO][4175] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" HandleID="k8s-pod-network.3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Workload="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:00.351028 containerd[1462]: 2024-10-09 07:25:00.346 [INFO][4175] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:25:00.351028 containerd[1462]: 2024-10-09 07:25:00.348 [INFO][4167] k8s.go 621: Teardown processing complete. ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:25:00.351403 containerd[1462]: time="2024-10-09T07:25:00.351222357Z" level=info msg="TearDown network for sandbox \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\" successfully" Oct 9 07:25:00.351403 containerd[1462]: time="2024-10-09T07:25:00.351247464Z" level=info msg="StopPodSandbox for \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\" returns successfully" Oct 9 07:25:00.351562 kubelet[2566]: E1009 07:25:00.351535 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:25:00.352195 containerd[1462]: time="2024-10-09T07:25:00.352155569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlbnn,Uid:55b140d9-ccd9-4df3-a1c4-d69882a267d2,Namespace:kube-system,Attempt:1,}" Oct 9 07:25:00.354893 systemd[1]: run-netns-cni\x2d4a138510\x2daa27\x2d3b51\x2da677\x2de587378b3205.mount: Deactivated successfully. 
Oct 9 07:25:00.466805 systemd-networkd[1397]: cali7d90bcedca6: Link UP Oct 9 07:25:00.467520 systemd-networkd[1397]: cali7d90bcedca6: Gained carrier Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.404 [INFO][4182] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--mlbnn-eth0 coredns-76f75df574- kube-system 55b140d9-ccd9-4df3-a1c4-d69882a267d2 849 0 2024-10-09 07:24:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-mlbnn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7d90bcedca6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" Namespace="kube-system" Pod="coredns-76f75df574-mlbnn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlbnn-" Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.404 [INFO][4182] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" Namespace="kube-system" Pod="coredns-76f75df574-mlbnn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.431 [INFO][4196] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" HandleID="k8s-pod-network.92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" Workload="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.439 [INFO][4196] ipam_plugin.go 270: Auto assigning IP ContainerID="92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" 
HandleID="k8s-pod-network.92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" Workload="localhost-k8s-coredns--76f75df574--mlbnn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ddf00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-mlbnn", "timestamp":"2024-10-09 07:25:00.431222736 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.439 [INFO][4196] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.439 [INFO][4196] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.439 [INFO][4196] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.441 [INFO][4196] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" host="localhost" Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.444 [INFO][4196] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.447 [INFO][4196] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.448 [INFO][4196] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.450 [INFO][4196] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.450 [INFO][4196] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" host="localhost" Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.451 [INFO][4196] ipam.go 1685: Creating new handle: k8s-pod-network.92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89 Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.457 [INFO][4196] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" host="localhost" Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.461 [INFO][4196] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" host="localhost" Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.461 [INFO][4196] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" host="localhost" Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.461 [INFO][4196] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:25:00.481782 containerd[1462]: 2024-10-09 07:25:00.461 [INFO][4196] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" HandleID="k8s-pod-network.92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" Workload="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:00.484446 containerd[1462]: 2024-10-09 07:25:00.464 [INFO][4182] k8s.go 386: Populated endpoint ContainerID="92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" Namespace="kube-system" Pod="coredns-76f75df574-mlbnn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlbnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mlbnn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"55b140d9-ccd9-4df3-a1c4-d69882a267d2", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-mlbnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d90bcedca6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:00.484446 containerd[1462]: 2024-10-09 07:25:00.464 [INFO][4182] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" Namespace="kube-system" Pod="coredns-76f75df574-mlbnn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:00.484446 containerd[1462]: 2024-10-09 07:25:00.464 [INFO][4182] dataplane_linux.go 68: Setting the host side veth name to cali7d90bcedca6 ContainerID="92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" Namespace="kube-system" Pod="coredns-76f75df574-mlbnn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:00.484446 containerd[1462]: 2024-10-09 07:25:00.466 [INFO][4182] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" Namespace="kube-system" Pod="coredns-76f75df574-mlbnn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:00.484446 containerd[1462]: 2024-10-09 07:25:00.467 [INFO][4182] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" Namespace="kube-system" Pod="coredns-76f75df574-mlbnn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlbnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mlbnn-eth0", 
GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"55b140d9-ccd9-4df3-a1c4-d69882a267d2", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89", Pod:"coredns-76f75df574-mlbnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d90bcedca6", MAC:"82:f2:c3:93:ff:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:00.484446 containerd[1462]: 2024-10-09 07:25:00.476 [INFO][4182] k8s.go 500: Wrote updated endpoint to datastore ContainerID="92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89" Namespace="kube-system" Pod="coredns-76f75df574-mlbnn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:00.503068 containerd[1462]: 
time="2024-10-09T07:25:00.502957648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:25:00.503068 containerd[1462]: time="2024-10-09T07:25:00.503030284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:25:00.503068 containerd[1462]: time="2024-10-09T07:25:00.503049710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:25:00.503068 containerd[1462]: time="2024-10-09T07:25:00.503073244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:25:00.525933 systemd[1]: Started cri-containerd-92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89.scope - libcontainer container 92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89. 
Oct 9 07:25:00.540159 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:25:00.564273 containerd[1462]: time="2024-10-09T07:25:00.564231598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlbnn,Uid:55b140d9-ccd9-4df3-a1c4-d69882a267d2,Namespace:kube-system,Attempt:1,} returns sandbox id \"92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89\"" Oct 9 07:25:00.565329 kubelet[2566]: E1009 07:25:00.565294 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:25:00.567975 containerd[1462]: time="2024-10-09T07:25:00.567905556Z" level=info msg="CreateContainer within sandbox \"92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:25:00.598382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1625423089.mount: Deactivated successfully. Oct 9 07:25:00.600936 containerd[1462]: time="2024-10-09T07:25:00.600347896Z" level=info msg="CreateContainer within sandbox \"92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a473cbc1d504ea654c34641dcbd6b1c590a3c706aa4951dea2bdc9a36345903\"" Oct 9 07:25:00.601027 containerd[1462]: time="2024-10-09T07:25:00.600973750Z" level=info msg="StartContainer for \"2a473cbc1d504ea654c34641dcbd6b1c590a3c706aa4951dea2bdc9a36345903\"" Oct 9 07:25:00.640068 systemd[1]: Started cri-containerd-2a473cbc1d504ea654c34641dcbd6b1c590a3c706aa4951dea2bdc9a36345903.scope - libcontainer container 2a473cbc1d504ea654c34641dcbd6b1c590a3c706aa4951dea2bdc9a36345903. 
Oct 9 07:25:00.671978 containerd[1462]: time="2024-10-09T07:25:00.671942663Z" level=info msg="StartContainer for \"2a473cbc1d504ea654c34641dcbd6b1c590a3c706aa4951dea2bdc9a36345903\" returns successfully" Oct 9 07:25:01.262254 containerd[1462]: time="2024-10-09T07:25:01.262208582Z" level=info msg="StopPodSandbox for \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\"" Oct 9 07:25:01.367468 kubelet[2566]: E1009 07:25:01.366902 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:25:01.457325 kubelet[2566]: I1009 07:25:01.457263 2566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mlbnn" podStartSLOduration=31.457222932 podStartE2EDuration="31.457222932s" podCreationTimestamp="2024-10-09 07:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:25:01.457156097 +0000 UTC m=+45.272650315" watchObservedRunningTime="2024-10-09 07:25:01.457222932 +0000 UTC m=+45.272717150" Oct 9 07:25:01.463155 containerd[1462]: 2024-10-09 07:25:01.400 [INFO][4321] k8s.go 608: Cleaning up netns ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:25:01.463155 containerd[1462]: 2024-10-09 07:25:01.400 [INFO][4321] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" iface="eth0" netns="/var/run/netns/cni-21b1ceb5-9ad7-3e2d-da4c-a20199146617" Oct 9 07:25:01.463155 containerd[1462]: 2024-10-09 07:25:01.401 [INFO][4321] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" iface="eth0" netns="/var/run/netns/cni-21b1ceb5-9ad7-3e2d-da4c-a20199146617" Oct 9 07:25:01.463155 containerd[1462]: 2024-10-09 07:25:01.401 [INFO][4321] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" iface="eth0" netns="/var/run/netns/cni-21b1ceb5-9ad7-3e2d-da4c-a20199146617" Oct 9 07:25:01.463155 containerd[1462]: 2024-10-09 07:25:01.401 [INFO][4321] k8s.go 615: Releasing IP address(es) ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:25:01.463155 containerd[1462]: 2024-10-09 07:25:01.401 [INFO][4321] utils.go 188: Calico CNI releasing IP address ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:25:01.463155 containerd[1462]: 2024-10-09 07:25:01.420 [INFO][4329] ipam_plugin.go 417: Releasing address using handleID ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" HandleID="k8s-pod-network.fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Workload="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:01.463155 containerd[1462]: 2024-10-09 07:25:01.420 [INFO][4329] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:01.463155 containerd[1462]: 2024-10-09 07:25:01.420 [INFO][4329] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:25:01.463155 containerd[1462]: 2024-10-09 07:25:01.456 [WARNING][4329] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" HandleID="k8s-pod-network.fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Workload="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:01.463155 containerd[1462]: 2024-10-09 07:25:01.456 [INFO][4329] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" HandleID="k8s-pod-network.fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Workload="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:01.463155 containerd[1462]: 2024-10-09 07:25:01.458 [INFO][4329] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:25:01.463155 containerd[1462]: 2024-10-09 07:25:01.460 [INFO][4321] k8s.go 621: Teardown processing complete. ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:25:01.463788 containerd[1462]: time="2024-10-09T07:25:01.463308677Z" level=info msg="TearDown network for sandbox \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\" successfully" Oct 9 07:25:01.463788 containerd[1462]: time="2024-10-09T07:25:01.463335297Z" level=info msg="StopPodSandbox for \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\" returns successfully" Oct 9 07:25:01.463930 containerd[1462]: time="2024-10-09T07:25:01.463900327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hsltf,Uid:ebf1fe33-16c6-4476-9371-316390576226,Namespace:calico-system,Attempt:1,}" Oct 9 07:25:01.506874 systemd[1]: run-netns-cni\x2d21b1ceb5\x2d9ad7\x2d3e2d\x2dda4c\x2da20199146617.mount: Deactivated successfully. 
Oct 9 07:25:01.911799 containerd[1462]: time="2024-10-09T07:25:01.911748345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:01.925984 containerd[1462]: time="2024-10-09T07:25:01.925945949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 07:25:01.929696 containerd[1462]: time="2024-10-09T07:25:01.929648199Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:01.935699 containerd[1462]: time="2024-10-09T07:25:01.933766971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:01.935979 containerd[1462]: time="2024-10-09T07:25:01.935947786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.050114116s" Oct 9 07:25:01.936018 containerd[1462]: time="2024-10-09T07:25:01.935986549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 07:25:01.950505 containerd[1462]: time="2024-10-09T07:25:01.950465491Z" level=info msg="CreateContainer within sandbox \"8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 
07:25:01.962622 systemd-networkd[1397]: cali7d90bcedca6: Gained IPv6LL Oct 9 07:25:01.968587 containerd[1462]: time="2024-10-09T07:25:01.968415639Z" level=info msg="CreateContainer within sandbox \"8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f955c0a7ff95a239cc3795f9aec7f38631ab7b88c1f3fc44efc2e8fb9f68ab37\"" Oct 9 07:25:01.970459 containerd[1462]: time="2024-10-09T07:25:01.969100114Z" level=info msg="StartContainer for \"f955c0a7ff95a239cc3795f9aec7f38631ab7b88c1f3fc44efc2e8fb9f68ab37\"" Oct 9 07:25:02.002310 systemd[1]: Started cri-containerd-f955c0a7ff95a239cc3795f9aec7f38631ab7b88c1f3fc44efc2e8fb9f68ab37.scope - libcontainer container f955c0a7ff95a239cc3795f9aec7f38631ab7b88c1f3fc44efc2e8fb9f68ab37. Oct 9 07:25:02.031625 systemd-networkd[1397]: calie390a35c7bb: Link UP Oct 9 07:25:02.032741 systemd-networkd[1397]: calie390a35c7bb: Gained carrier Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:01.961 [INFO][4342] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hsltf-eth0 csi-node-driver- calico-system ebf1fe33-16c6-4476-9371-316390576226 859 0 2024-10-09 07:24:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-hsltf eth0 default [] [] [kns.calico-system ksa.calico-system.default] calie390a35c7bb [] []}} ContainerID="67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" Namespace="calico-system" Pod="csi-node-driver-hsltf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hsltf-" Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:01.961 [INFO][4342] k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" Namespace="calico-system" Pod="csi-node-driver-hsltf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:01.990 [INFO][4357] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" HandleID="k8s-pod-network.67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" Workload="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.001 [INFO][4357] ipam_plugin.go 270: Auto assigning IP ContainerID="67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" HandleID="k8s-pod-network.67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" Workload="localhost-k8s-csi--node--driver--hsltf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00060fde0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hsltf", "timestamp":"2024-10-09 07:25:01.990700436 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.001 [INFO][4357] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.001 [INFO][4357] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.001 [INFO][4357] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.002 [INFO][4357] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" host="localhost" Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.006 [INFO][4357] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.010 [INFO][4357] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.011 [INFO][4357] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.013 [INFO][4357] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.013 [INFO][4357] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" host="localhost" Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.014 [INFO][4357] ipam.go 1685: Creating new handle: k8s-pod-network.67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.017 [INFO][4357] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" host="localhost" Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.025 [INFO][4357] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" host="localhost" Oct 9 
07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.025 [INFO][4357] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" host="localhost" Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.025 [INFO][4357] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:25:02.080932 containerd[1462]: 2024-10-09 07:25:02.025 [INFO][4357] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" HandleID="k8s-pod-network.67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" Workload="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:02.081549 containerd[1462]: 2024-10-09 07:25:02.028 [INFO][4342] k8s.go 386: Populated endpoint ContainerID="67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" Namespace="calico-system" Pod="csi-node-driver-hsltf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hsltf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hsltf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebf1fe33-16c6-4476-9371-316390576226", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hsltf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie390a35c7bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:02.081549 containerd[1462]: 2024-10-09 07:25:02.028 [INFO][4342] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" Namespace="calico-system" Pod="csi-node-driver-hsltf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:02.081549 containerd[1462]: 2024-10-09 07:25:02.028 [INFO][4342] dataplane_linux.go 68: Setting the host side veth name to calie390a35c7bb ContainerID="67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" Namespace="calico-system" Pod="csi-node-driver-hsltf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:02.081549 containerd[1462]: 2024-10-09 07:25:02.032 [INFO][4342] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" Namespace="calico-system" Pod="csi-node-driver-hsltf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:02.081549 containerd[1462]: 2024-10-09 07:25:02.032 [INFO][4342] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" Namespace="calico-system" Pod="csi-node-driver-hsltf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hsltf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hsltf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebf1fe33-16c6-4476-9371-316390576226", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e", Pod:"csi-node-driver-hsltf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie390a35c7bb", MAC:"aa:6c:9e:3a:ec:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:02.081549 containerd[1462]: 2024-10-09 07:25:02.049 [INFO][4342] k8s.go 500: Wrote updated endpoint to datastore ContainerID="67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e" Namespace="calico-system" Pod="csi-node-driver-hsltf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:02.224452 containerd[1462]: time="2024-10-09T07:25:02.224328599Z" level=info msg="StartContainer for \"f955c0a7ff95a239cc3795f9aec7f38631ab7b88c1f3fc44efc2e8fb9f68ab37\" returns successfully" Oct 9 07:25:02.253568 containerd[1462]: time="2024-10-09T07:25:02.253369786Z" 
level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:25:02.253568 containerd[1462]: time="2024-10-09T07:25:02.253501133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:25:02.253568 containerd[1462]: time="2024-10-09T07:25:02.253547319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:25:02.253787 containerd[1462]: time="2024-10-09T07:25:02.253586864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:25:02.278586 containerd[1462]: time="2024-10-09T07:25:02.278380346Z" level=info msg="StopPodSandbox for \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\"" Oct 9 07:25:02.280560 systemd[1]: Started cri-containerd-67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e.scope - libcontainer container 67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e. 
Oct 9 07:25:02.300447 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:25:02.316886 containerd[1462]: time="2024-10-09T07:25:02.316843680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hsltf,Uid:ebf1fe33-16c6-4476-9371-316390576226,Namespace:calico-system,Attempt:1,} returns sandbox id \"67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e\"" Oct 9 07:25:02.320040 containerd[1462]: time="2024-10-09T07:25:02.319988745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 07:25:02.383248 kubelet[2566]: E1009 07:25:02.383218 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:25:02.387804 containerd[1462]: 2024-10-09 07:25:02.337 [INFO][4469] k8s.go 608: Cleaning up netns ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:25:02.387804 containerd[1462]: 2024-10-09 07:25:02.337 [INFO][4469] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" iface="eth0" netns="/var/run/netns/cni-5851df9c-715d-788f-98fb-0e1de9ae34c9" Oct 9 07:25:02.387804 containerd[1462]: 2024-10-09 07:25:02.338 [INFO][4469] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" iface="eth0" netns="/var/run/netns/cni-5851df9c-715d-788f-98fb-0e1de9ae34c9" Oct 9 07:25:02.387804 containerd[1462]: 2024-10-09 07:25:02.338 [INFO][4469] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" iface="eth0" netns="/var/run/netns/cni-5851df9c-715d-788f-98fb-0e1de9ae34c9" Oct 9 07:25:02.387804 containerd[1462]: 2024-10-09 07:25:02.338 [INFO][4469] k8s.go 615: Releasing IP address(es) ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:25:02.387804 containerd[1462]: 2024-10-09 07:25:02.338 [INFO][4469] utils.go 188: Calico CNI releasing IP address ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:25:02.387804 containerd[1462]: 2024-10-09 07:25:02.371 [INFO][4484] ipam_plugin.go 417: Releasing address using handleID ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" HandleID="k8s-pod-network.a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Workload="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:02.387804 containerd[1462]: 2024-10-09 07:25:02.371 [INFO][4484] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:02.387804 containerd[1462]: 2024-10-09 07:25:02.372 [INFO][4484] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:25:02.387804 containerd[1462]: 2024-10-09 07:25:02.377 [WARNING][4484] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" HandleID="k8s-pod-network.a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Workload="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:02.387804 containerd[1462]: 2024-10-09 07:25:02.377 [INFO][4484] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" HandleID="k8s-pod-network.a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Workload="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:02.387804 containerd[1462]: 2024-10-09 07:25:02.378 [INFO][4484] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:25:02.387804 containerd[1462]: 2024-10-09 07:25:02.381 [INFO][4469] k8s.go 621: Teardown processing complete. ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:25:02.388457 containerd[1462]: time="2024-10-09T07:25:02.388350495Z" level=info msg="TearDown network for sandbox \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\" successfully" Oct 9 07:25:02.388457 containerd[1462]: time="2024-10-09T07:25:02.388392414Z" level=info msg="StopPodSandbox for \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\" returns successfully" Oct 9 07:25:02.388802 kubelet[2566]: E1009 07:25:02.388783 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:25:02.391004 containerd[1462]: time="2024-10-09T07:25:02.390957249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-42m82,Uid:00d4b847-5b1d-41da-b0cb-da401e4a82c9,Namespace:kube-system,Attempt:1,}" Oct 9 07:25:02.397299 kubelet[2566]: I1009 07:25:02.396280 2566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-696496d9fb-275b5" 
podStartSLOduration=23.345501468 podStartE2EDuration="26.396246508s" podCreationTimestamp="2024-10-09 07:24:36 +0000 UTC" firstStartedPulling="2024-10-09 07:24:58.885498641 +0000 UTC m=+42.700992859" lastFinishedPulling="2024-10-09 07:25:01.936243671 +0000 UTC m=+45.751737899" observedRunningTime="2024-10-09 07:25:02.396129208 +0000 UTC m=+46.211623426" watchObservedRunningTime="2024-10-09 07:25:02.396246508 +0000 UTC m=+46.211740726" Oct 9 07:25:02.509863 systemd[1]: run-netns-cni\x2d5851df9c\x2d715d\x2d788f\x2d98fb\x2d0e1de9ae34c9.mount: Deactivated successfully. Oct 9 07:25:02.522314 systemd-networkd[1397]: calif4acb53c21e: Link UP Oct 9 07:25:02.522504 systemd-networkd[1397]: calif4acb53c21e: Gained carrier Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.446 [INFO][4511] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--42m82-eth0 coredns-76f75df574- kube-system 00d4b847-5b1d-41da-b0cb-da401e4a82c9 890 0 2024-10-09 07:24:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-42m82 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif4acb53c21e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" Namespace="kube-system" Pod="coredns-76f75df574-42m82" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--42m82-" Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.446 [INFO][4511] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" Namespace="kube-system" Pod="coredns-76f75df574-42m82" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.483 
[INFO][4529] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" HandleID="k8s-pod-network.e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" Workload="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.491 [INFO][4529] ipam_plugin.go 270: Auto assigning IP ContainerID="e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" HandleID="k8s-pod-network.e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" Workload="localhost-k8s-coredns--76f75df574--42m82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000294d70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-42m82", "timestamp":"2024-10-09 07:25:02.483465257 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.491 [INFO][4529] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.491 [INFO][4529] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.491 [INFO][4529] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.492 [INFO][4529] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" host="localhost" Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.496 [INFO][4529] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.500 [INFO][4529] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.501 [INFO][4529] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.504 [INFO][4529] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.504 [INFO][4529] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" host="localhost" Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.507 [INFO][4529] ipam.go 1685: Creating new handle: k8s-pod-network.e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92 Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.511 [INFO][4529] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" host="localhost" Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.517 [INFO][4529] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" host="localhost" Oct 9 
07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.517 [INFO][4529] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" host="localhost" Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.517 [INFO][4529] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:25:02.537096 containerd[1462]: 2024-10-09 07:25:02.517 [INFO][4529] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" HandleID="k8s-pod-network.e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" Workload="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:02.538227 containerd[1462]: 2024-10-09 07:25:02.520 [INFO][4511] k8s.go 386: Populated endpoint ContainerID="e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" Namespace="kube-system" Pod="coredns-76f75df574-42m82" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--42m82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--42m82-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"00d4b847-5b1d-41da-b0cb-da401e4a82c9", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-76f75df574-42m82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4acb53c21e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:02.538227 containerd[1462]: 2024-10-09 07:25:02.520 [INFO][4511] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" Namespace="kube-system" Pod="coredns-76f75df574-42m82" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:02.538227 containerd[1462]: 2024-10-09 07:25:02.521 [INFO][4511] dataplane_linux.go 68: Setting the host side veth name to calif4acb53c21e ContainerID="e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" Namespace="kube-system" Pod="coredns-76f75df574-42m82" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:02.538227 containerd[1462]: 2024-10-09 07:25:02.522 [INFO][4511] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" Namespace="kube-system" Pod="coredns-76f75df574-42m82" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:02.538227 containerd[1462]: 2024-10-09 07:25:02.522 [INFO][4511] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" Namespace="kube-system" Pod="coredns-76f75df574-42m82" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--42m82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--42m82-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"00d4b847-5b1d-41da-b0cb-da401e4a82c9", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92", Pod:"coredns-76f75df574-42m82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4acb53c21e", MAC:"d6:1e:c3:b5:88:7a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:02.538227 containerd[1462]: 2024-10-09 07:25:02.531 [INFO][4511] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92" Namespace="kube-system" Pod="coredns-76f75df574-42m82" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:02.561366 containerd[1462]: time="2024-10-09T07:25:02.560939854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:25:02.561366 containerd[1462]: time="2024-10-09T07:25:02.561016578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:25:02.561366 containerd[1462]: time="2024-10-09T07:25:02.561038840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:25:02.561366 containerd[1462]: time="2024-10-09T07:25:02.561052075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:25:02.587842 systemd[1]: Started cri-containerd-e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92.scope - libcontainer container e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92. 
Oct 9 07:25:02.599782 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:25:02.625296 containerd[1462]: time="2024-10-09T07:25:02.625256170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-42m82,Uid:00d4b847-5b1d-41da-b0cb-da401e4a82c9,Namespace:kube-system,Attempt:1,} returns sandbox id \"e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92\"" Oct 9 07:25:02.626171 kubelet[2566]: E1009 07:25:02.626139 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:25:02.628366 containerd[1462]: time="2024-10-09T07:25:02.628047320Z" level=info msg="CreateContainer within sandbox \"e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:25:02.643164 containerd[1462]: time="2024-10-09T07:25:02.643139381Z" level=info msg="CreateContainer within sandbox \"e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"583cc20b22485f74f1fdb9f8b7e8506c0e6bc0d0d3572fa0c8602e86109ed46d\"" Oct 9 07:25:02.643497 containerd[1462]: time="2024-10-09T07:25:02.643447469Z" level=info msg="StartContainer for \"583cc20b22485f74f1fdb9f8b7e8506c0e6bc0d0d3572fa0c8602e86109ed46d\"" Oct 9 07:25:02.672820 systemd[1]: Started cri-containerd-583cc20b22485f74f1fdb9f8b7e8506c0e6bc0d0d3572fa0c8602e86109ed46d.scope - libcontainer container 583cc20b22485f74f1fdb9f8b7e8506c0e6bc0d0d3572fa0c8602e86109ed46d. 
Oct 9 07:25:02.698371 containerd[1462]: time="2024-10-09T07:25:02.698272358Z" level=info msg="StartContainer for \"583cc20b22485f74f1fdb9f8b7e8506c0e6bc0d0d3572fa0c8602e86109ed46d\" returns successfully" Oct 9 07:25:03.033110 systemd[1]: Started sshd@14-10.0.0.107:22-10.0.0.1:35568.service - OpenSSH per-connection server daemon (10.0.0.1:35568). Oct 9 07:25:03.081940 sshd[4628]: Accepted publickey for core from 10.0.0.1 port 35568 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:25:03.083537 sshd[4628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:03.087433 systemd-logind[1441]: New session 15 of user core. Oct 9 07:25:03.095791 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 07:25:03.207625 sshd[4628]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:03.211851 systemd[1]: sshd@14-10.0.0.107:22-10.0.0.1:35568.service: Deactivated successfully. Oct 9 07:25:03.214116 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 07:25:03.214763 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit. Oct 9 07:25:03.215617 systemd-logind[1441]: Removed session 15. 
Oct 9 07:25:03.391179 kubelet[2566]: E1009 07:25:03.391041 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:25:03.393024 kubelet[2566]: E1009 07:25:03.391789 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:25:03.426735 kubelet[2566]: I1009 07:25:03.426700 2566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-42m82" podStartSLOduration=33.426649015 podStartE2EDuration="33.426649015s" podCreationTimestamp="2024-10-09 07:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:25:03.406854108 +0000 UTC m=+47.222348346" watchObservedRunningTime="2024-10-09 07:25:03.426649015 +0000 UTC m=+47.242143233" Oct 9 07:25:03.816866 systemd-networkd[1397]: calie390a35c7bb: Gained IPv6LL Oct 9 07:25:03.817221 containerd[1462]: time="2024-10-09T07:25:03.817067953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:03.818334 containerd[1462]: time="2024-10-09T07:25:03.818274218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 07:25:03.819391 containerd[1462]: time="2024-10-09T07:25:03.819359565Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:03.821320 containerd[1462]: time="2024-10-09T07:25:03.821282325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:03.822052 containerd[1462]: time="2024-10-09T07:25:03.822014349Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.501974529s" Oct 9 07:25:03.822121 containerd[1462]: time="2024-10-09T07:25:03.822051428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 07:25:03.823587 containerd[1462]: time="2024-10-09T07:25:03.823558678Z" level=info msg="CreateContainer within sandbox \"67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 07:25:03.844606 containerd[1462]: time="2024-10-09T07:25:03.844571130Z" level=info msg="CreateContainer within sandbox \"67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"42591f9a880678fcac800cca6d4adcfd1e21afd4aaecb30f95f8a4d3b327afac\"" Oct 9 07:25:03.845337 containerd[1462]: time="2024-10-09T07:25:03.845309055Z" level=info msg="StartContainer for \"42591f9a880678fcac800cca6d4adcfd1e21afd4aaecb30f95f8a4d3b327afac\"" Oct 9 07:25:03.892822 systemd[1]: Started cri-containerd-42591f9a880678fcac800cca6d4adcfd1e21afd4aaecb30f95f8a4d3b327afac.scope - libcontainer container 42591f9a880678fcac800cca6d4adcfd1e21afd4aaecb30f95f8a4d3b327afac. 
Oct 9 07:25:04.060252 containerd[1462]: time="2024-10-09T07:25:04.060207446Z" level=info msg="StartContainer for \"42591f9a880678fcac800cca6d4adcfd1e21afd4aaecb30f95f8a4d3b327afac\" returns successfully" Oct 9 07:25:04.061379 containerd[1462]: time="2024-10-09T07:25:04.061237189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 07:25:04.393771 kubelet[2566]: E1009 07:25:04.393724 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:25:04.456861 systemd-networkd[1397]: calif4acb53c21e: Gained IPv6LL Oct 9 07:25:04.507420 systemd[1]: run-containerd-runc-k8s.io-42591f9a880678fcac800cca6d4adcfd1e21afd4aaecb30f95f8a4d3b327afac-runc.nHe1Gn.mount: Deactivated successfully. Oct 9 07:25:05.395295 kubelet[2566]: E1009 07:25:05.395258 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:25:06.093774 containerd[1462]: time="2024-10-09T07:25:06.093717902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:06.094481 containerd[1462]: time="2024-10-09T07:25:06.094418507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 07:25:06.095778 containerd[1462]: time="2024-10-09T07:25:06.095729487Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:06.097820 containerd[1462]: time="2024-10-09T07:25:06.097784715Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:06.098431 containerd[1462]: time="2024-10-09T07:25:06.098390792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.037118557s" Oct 9 07:25:06.098431 containerd[1462]: time="2024-10-09T07:25:06.098427661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 07:25:06.099951 containerd[1462]: time="2024-10-09T07:25:06.099900155Z" level=info msg="CreateContainer within sandbox \"67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 07:25:06.113779 containerd[1462]: time="2024-10-09T07:25:06.113744368Z" level=info msg="CreateContainer within sandbox \"67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a03f4a6c6c1e18d52a3b022cc18203b65e9d305988efcd0831c7cd3522a2ea84\"" Oct 9 07:25:06.114236 containerd[1462]: time="2024-10-09T07:25:06.114111536Z" level=info msg="StartContainer for \"a03f4a6c6c1e18d52a3b022cc18203b65e9d305988efcd0831c7cd3522a2ea84\"" Oct 9 07:25:06.151839 systemd[1]: Started cri-containerd-a03f4a6c6c1e18d52a3b022cc18203b65e9d305988efcd0831c7cd3522a2ea84.scope - libcontainer container a03f4a6c6c1e18d52a3b022cc18203b65e9d305988efcd0831c7cd3522a2ea84. 
Oct 9 07:25:06.181232 containerd[1462]: time="2024-10-09T07:25:06.181188406Z" level=info msg="StartContainer for \"a03f4a6c6c1e18d52a3b022cc18203b65e9d305988efcd0831c7cd3522a2ea84\" returns successfully" Oct 9 07:25:06.324665 kubelet[2566]: I1009 07:25:06.324629 2566 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 07:25:06.325549 kubelet[2566]: I1009 07:25:06.325529 2566 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 07:25:06.410343 kubelet[2566]: I1009 07:25:06.410197 2566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-hsltf" podStartSLOduration=26.630690239 podStartE2EDuration="30.410146204s" podCreationTimestamp="2024-10-09 07:24:36 +0000 UTC" firstStartedPulling="2024-10-09 07:25:02.319217938 +0000 UTC m=+46.134712156" lastFinishedPulling="2024-10-09 07:25:06.098673903 +0000 UTC m=+49.914168121" observedRunningTime="2024-10-09 07:25:06.409640676 +0000 UTC m=+50.225134914" watchObservedRunningTime="2024-10-09 07:25:06.410146204 +0000 UTC m=+50.225640422" Oct 9 07:25:08.222928 systemd[1]: Started sshd@15-10.0.0.107:22-10.0.0.1:35584.service - OpenSSH per-connection server daemon (10.0.0.1:35584). Oct 9 07:25:08.260121 sshd[4750]: Accepted publickey for core from 10.0.0.1 port 35584 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:25:08.261980 sshd[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:08.266978 systemd-logind[1441]: New session 16 of user core. Oct 9 07:25:08.276804 systemd[1]: Started session-16.scope - Session 16 of User core. 
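The kubelet `pod_startup_latency_tracker` entry above reports two figures: `podStartE2EDuration` is `observedRunningTime` minus `podCreationTimestamp`, and `podStartSLOduration` additionally subtracts the image-pull window (`lastFinishedPulling` minus `firstStartedPulling`). A minimal sketch reproducing that arithmetic from the timestamps in the log line (this is my own illustration of the math, not kubelet code; the log's nanosecond timestamps are truncated to microseconds, which Python's `%f` supports):

```python
from datetime import datetime

# Timestamps copied (truncated to microseconds) from the kubelet
# pod_startup_latency_tracker line for pod csi-node-driver-hsltf above.
FMT = "%Y-%m-%d %H:%M:%S.%f %z"
created = datetime.strptime("2024-10-09 07:24:36.000000 +0000", FMT)
first_pull = datetime.strptime("2024-10-09 07:25:02.319217 +0000", FMT)
last_pull = datetime.strptime("2024-10-09 07:25:06.098673 +0000", FMT)
running = datetime.strptime("2024-10-09 07:25:06.410146 +0000", FMT)

# End-to-end startup time: creation until observed running.
e2e = (running - created).total_seconds()
# SLO duration: the same span, minus time spent pulling images.
slo = e2e - (last_pull - first_pull).total_seconds()

print(f"podStartE2EDuration={e2e:.6f}s podStartSLOduration={slo:.6f}s")
```

Running this recovers (to microsecond precision) the 30.410146204s and 26.630690239s figures the tracker logged.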
Oct 9 07:25:08.388188 sshd[4750]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:08.392499 systemd[1]: sshd@15-10.0.0.107:22-10.0.0.1:35584.service: Deactivated successfully. Oct 9 07:25:08.394781 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 07:25:08.395399 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit. Oct 9 07:25:08.396283 systemd-logind[1441]: Removed session 16. Oct 9 07:25:10.007167 kubelet[2566]: E1009 07:25:10.007137 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:25:13.399766 systemd[1]: Started sshd@16-10.0.0.107:22-10.0.0.1:41998.service - OpenSSH per-connection server daemon (10.0.0.1:41998). Oct 9 07:25:13.438922 sshd[4788]: Accepted publickey for core from 10.0.0.1 port 41998 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:25:13.440483 sshd[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:13.444192 systemd-logind[1441]: New session 17 of user core. Oct 9 07:25:13.450811 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 07:25:13.562399 sshd[4788]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:13.566472 systemd[1]: sshd@16-10.0.0.107:22-10.0.0.1:41998.service: Deactivated successfully. Oct 9 07:25:13.568465 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 07:25:13.569066 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit. Oct 9 07:25:13.569848 systemd-logind[1441]: Removed session 17. 
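Sessions 15 through 17 in this journal each follow the same `pam_unix` open/close pattern keyed by the sshd PID. A hypothetical helper (the function and pattern are my own, not part of systemd or sshd tooling) that pairs those events to measure session length, fed with two entries copied from the excerpt:

```python
import re
from datetime import datetime

# Two entries taken verbatim (message portion) from session 17 above.
LOG = """\
Oct 9 07:25:13.440483 sshd[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:25:13.562399 sshd[4788]: pam_unix(sshd:session): session closed for user core
"""

# Matches "<Mon> <day> <HH:MM:SS.ffffff> sshd[<pid>]: pam_unix(...): session opened|closed".
PAT = re.compile(
    r"^(\w+ \d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)"
)

def session_durations(text, year=2024):
    """Pair opened/closed events per sshd PID; return durations in seconds."""
    opened, durations = {}, {}
    for line in text.splitlines():
        m = PAT.match(line)
        if not m:
            continue
        # The journal omits the year, so it is supplied as an assumption.
        ts = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
        pid, event = m.group(2), m.group(3)
        if event == "opened":
            opened[pid] = ts
        elif pid in opened:
            durations[pid] = (ts - opened.pop(pid)).total_seconds()
    return durations

print(session_durations(LOG))  # session 17 lasted about 0.12s
```

The same pairing applies to the matching `systemd-logind` "New session" / "Removed session" lines, which bracket each pam_unix pair in this log.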
Oct 9 07:25:16.243763 containerd[1462]: time="2024-10-09T07:25:16.243709138Z" level=info msg="StopPodSandbox for \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\"" Oct 9 07:25:16.309064 containerd[1462]: 2024-10-09 07:25:16.276 [WARNING][4816] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mlbnn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"55b140d9-ccd9-4df3-a1c4-d69882a267d2", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89", Pod:"coredns-76f75df574-mlbnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d90bcedca6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:16.309064 containerd[1462]: 2024-10-09 07:25:16.276 [INFO][4816] k8s.go 608: Cleaning up netns ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:25:16.309064 containerd[1462]: 2024-10-09 07:25:16.276 [INFO][4816] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" iface="eth0" netns="" Oct 9 07:25:16.309064 containerd[1462]: 2024-10-09 07:25:16.276 [INFO][4816] k8s.go 615: Releasing IP address(es) ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:25:16.309064 containerd[1462]: 2024-10-09 07:25:16.276 [INFO][4816] utils.go 188: Calico CNI releasing IP address ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:25:16.309064 containerd[1462]: 2024-10-09 07:25:16.298 [INFO][4826] ipam_plugin.go 417: Releasing address using handleID ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" HandleID="k8s-pod-network.3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Workload="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:16.309064 containerd[1462]: 2024-10-09 07:25:16.298 [INFO][4826] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:16.309064 containerd[1462]: 2024-10-09 07:25:16.298 [INFO][4826] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:25:16.309064 containerd[1462]: 2024-10-09 07:25:16.302 [WARNING][4826] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" HandleID="k8s-pod-network.3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Workload="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:16.309064 containerd[1462]: 2024-10-09 07:25:16.302 [INFO][4826] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" HandleID="k8s-pod-network.3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Workload="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:16.309064 containerd[1462]: 2024-10-09 07:25:16.304 [INFO][4826] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:25:16.309064 containerd[1462]: 2024-10-09 07:25:16.306 [INFO][4816] k8s.go 621: Teardown processing complete. ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:25:16.309579 containerd[1462]: time="2024-10-09T07:25:16.309097348Z" level=info msg="TearDown network for sandbox \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\" successfully" Oct 9 07:25:16.309579 containerd[1462]: time="2024-10-09T07:25:16.309125503Z" level=info msg="StopPodSandbox for \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\" returns successfully" Oct 9 07:25:16.309579 containerd[1462]: time="2024-10-09T07:25:16.309565022Z" level=info msg="RemovePodSandbox for \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\"" Oct 9 07:25:16.312031 containerd[1462]: time="2024-10-09T07:25:16.312007644Z" level=info msg="Forcibly stopping sandbox \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\"" Oct 9 07:25:16.386562 containerd[1462]: 2024-10-09 07:25:16.354 [WARNING][4849] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mlbnn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"55b140d9-ccd9-4df3-a1c4-d69882a267d2", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92c7f61ddd707de93129fbafc1729f1a824277dc793d2e8fd0f64bab321e3d89", Pod:"coredns-76f75df574-mlbnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d90bcedca6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:16.386562 containerd[1462]: 2024-10-09 07:25:16.355 [INFO][4849] k8s.go 608: Cleaning up netns 
ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:25:16.386562 containerd[1462]: 2024-10-09 07:25:16.355 [INFO][4849] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" iface="eth0" netns="" Oct 9 07:25:16.386562 containerd[1462]: 2024-10-09 07:25:16.355 [INFO][4849] k8s.go 615: Releasing IP address(es) ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:25:16.386562 containerd[1462]: 2024-10-09 07:25:16.355 [INFO][4849] utils.go 188: Calico CNI releasing IP address ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:25:16.386562 containerd[1462]: 2024-10-09 07:25:16.375 [INFO][4856] ipam_plugin.go 417: Releasing address using handleID ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" HandleID="k8s-pod-network.3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Workload="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:16.386562 containerd[1462]: 2024-10-09 07:25:16.375 [INFO][4856] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:16.386562 containerd[1462]: 2024-10-09 07:25:16.376 [INFO][4856] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:25:16.386562 containerd[1462]: 2024-10-09 07:25:16.380 [WARNING][4856] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" HandleID="k8s-pod-network.3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Workload="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:16.386562 containerd[1462]: 2024-10-09 07:25:16.380 [INFO][4856] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" HandleID="k8s-pod-network.3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Workload="localhost-k8s-coredns--76f75df574--mlbnn-eth0" Oct 9 07:25:16.386562 containerd[1462]: 2024-10-09 07:25:16.381 [INFO][4856] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:25:16.386562 containerd[1462]: 2024-10-09 07:25:16.384 [INFO][4849] k8s.go 621: Teardown processing complete. ContainerID="3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa" Oct 9 07:25:16.387014 containerd[1462]: time="2024-10-09T07:25:16.386592400Z" level=info msg="TearDown network for sandbox \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\" successfully" Oct 9 07:25:16.709082 containerd[1462]: time="2024-10-09T07:25:16.709015422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:25:16.714802 containerd[1462]: time="2024-10-09T07:25:16.714766508Z" level=info msg="RemovePodSandbox \"3c2c7d2fb3a37b1a802b7031d6bc388ac9cd279ec15e4f8a2f5a6e6ab4bcadfa\" returns successfully" Oct 9 07:25:16.715321 containerd[1462]: time="2024-10-09T07:25:16.715281593Z" level=info msg="StopPodSandbox for \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\"" Oct 9 07:25:16.776410 containerd[1462]: 2024-10-09 07:25:16.745 [WARNING][4878] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--42m82-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"00d4b847-5b1d-41da-b0cb-da401e4a82c9", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92", Pod:"coredns-76f75df574-42m82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4acb53c21e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:16.776410 containerd[1462]: 2024-10-09 07:25:16.745 [INFO][4878] k8s.go 608: Cleaning up netns ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:25:16.776410 containerd[1462]: 2024-10-09 07:25:16.745 [INFO][4878] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" iface="eth0" netns="" Oct 9 07:25:16.776410 containerd[1462]: 2024-10-09 07:25:16.745 [INFO][4878] k8s.go 615: Releasing IP address(es) ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:25:16.776410 containerd[1462]: 2024-10-09 07:25:16.745 [INFO][4878] utils.go 188: Calico CNI releasing IP address ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:25:16.776410 containerd[1462]: 2024-10-09 07:25:16.765 [INFO][4885] ipam_plugin.go 417: Releasing address using handleID ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" HandleID="k8s-pod-network.a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Workload="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:16.776410 containerd[1462]: 2024-10-09 07:25:16.765 [INFO][4885] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:16.776410 containerd[1462]: 2024-10-09 07:25:16.765 [INFO][4885] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:25:16.776410 containerd[1462]: 2024-10-09 07:25:16.770 [WARNING][4885] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" HandleID="k8s-pod-network.a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Workload="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:16.776410 containerd[1462]: 2024-10-09 07:25:16.770 [INFO][4885] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" HandleID="k8s-pod-network.a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Workload="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:16.776410 containerd[1462]: 2024-10-09 07:25:16.771 [INFO][4885] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:25:16.776410 containerd[1462]: 2024-10-09 07:25:16.774 [INFO][4878] k8s.go 621: Teardown processing complete. ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:25:16.776971 containerd[1462]: time="2024-10-09T07:25:16.776443446Z" level=info msg="TearDown network for sandbox \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\" successfully" Oct 9 07:25:16.776971 containerd[1462]: time="2024-10-09T07:25:16.776469676Z" level=info msg="StopPodSandbox for \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\" returns successfully" Oct 9 07:25:16.776971 containerd[1462]: time="2024-10-09T07:25:16.776957208Z" level=info msg="RemovePodSandbox for \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\"" Oct 9 07:25:16.777045 containerd[1462]: time="2024-10-09T07:25:16.776992707Z" level=info msg="Forcibly stopping sandbox \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\"" Oct 9 07:25:16.839049 containerd[1462]: 2024-10-09 07:25:16.808 [WARNING][4908] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--42m82-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"00d4b847-5b1d-41da-b0cb-da401e4a82c9", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2d09e9fb93e0a72622cf109e24831057329802bab4b10ceed9d926b328b8c92", Pod:"coredns-76f75df574-42m82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4acb53c21e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:16.839049 containerd[1462]: 2024-10-09 07:25:16.808 [INFO][4908] k8s.go 608: 
Cleaning up netns ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:25:16.839049 containerd[1462]: 2024-10-09 07:25:16.808 [INFO][4908] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" iface="eth0" netns="" Oct 9 07:25:16.839049 containerd[1462]: 2024-10-09 07:25:16.808 [INFO][4908] k8s.go 615: Releasing IP address(es) ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:25:16.839049 containerd[1462]: 2024-10-09 07:25:16.808 [INFO][4908] utils.go 188: Calico CNI releasing IP address ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:25:16.839049 containerd[1462]: 2024-10-09 07:25:16.828 [INFO][4915] ipam_plugin.go 417: Releasing address using handleID ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" HandleID="k8s-pod-network.a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Workload="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:16.839049 containerd[1462]: 2024-10-09 07:25:16.828 [INFO][4915] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:16.839049 containerd[1462]: 2024-10-09 07:25:16.828 [INFO][4915] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:25:16.839049 containerd[1462]: 2024-10-09 07:25:16.832 [WARNING][4915] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" HandleID="k8s-pod-network.a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Workload="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:16.839049 containerd[1462]: 2024-10-09 07:25:16.833 [INFO][4915] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" HandleID="k8s-pod-network.a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Workload="localhost-k8s-coredns--76f75df574--42m82-eth0" Oct 9 07:25:16.839049 containerd[1462]: 2024-10-09 07:25:16.834 [INFO][4915] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:25:16.839049 containerd[1462]: 2024-10-09 07:25:16.836 [INFO][4908] k8s.go 621: Teardown processing complete. ContainerID="a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24" Oct 9 07:25:16.839477 containerd[1462]: time="2024-10-09T07:25:16.839116248Z" level=info msg="TearDown network for sandbox \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\" successfully" Oct 9 07:25:16.843565 containerd[1462]: time="2024-10-09T07:25:16.843521182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:25:16.843616 containerd[1462]: time="2024-10-09T07:25:16.843579324Z" level=info msg="RemovePodSandbox \"a17fde341509e20e098df518a8d5dd9d5a9e479eb1ec57067e59caee53472c24\" returns successfully" Oct 9 07:25:16.844090 containerd[1462]: time="2024-10-09T07:25:16.844047499Z" level=info msg="StopPodSandbox for \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\"" Oct 9 07:25:16.906194 containerd[1462]: 2024-10-09 07:25:16.875 [WARNING][4937] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0", GenerateName:"calico-kube-controllers-696496d9fb-", Namespace:"calico-system", SelfLink:"", UID:"8f13c8aa-d4e2-4627-b65a-927e3b23dfe5", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"696496d9fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db", Pod:"calico-kube-controllers-696496d9fb-275b5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93efd645eda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:16.906194 containerd[1462]: 2024-10-09 07:25:16.876 [INFO][4937] k8s.go 608: Cleaning up netns ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:25:16.906194 containerd[1462]: 2024-10-09 07:25:16.876 [INFO][4937] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" iface="eth0" netns="" Oct 9 07:25:16.906194 containerd[1462]: 2024-10-09 07:25:16.876 [INFO][4937] k8s.go 615: Releasing IP address(es) ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:25:16.906194 containerd[1462]: 2024-10-09 07:25:16.876 [INFO][4937] utils.go 188: Calico CNI releasing IP address ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:25:16.906194 containerd[1462]: 2024-10-09 07:25:16.895 [INFO][4945] ipam_plugin.go 417: Releasing address using handleID ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" HandleID="k8s-pod-network.ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Workload="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:25:16.906194 containerd[1462]: 2024-10-09 07:25:16.895 [INFO][4945] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:16.906194 containerd[1462]: 2024-10-09 07:25:16.895 [INFO][4945] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:25:16.906194 containerd[1462]: 2024-10-09 07:25:16.900 [WARNING][4945] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" HandleID="k8s-pod-network.ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Workload="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:25:16.906194 containerd[1462]: 2024-10-09 07:25:16.900 [INFO][4945] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" HandleID="k8s-pod-network.ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Workload="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:25:16.906194 containerd[1462]: 2024-10-09 07:25:16.901 [INFO][4945] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:25:16.906194 containerd[1462]: 2024-10-09 07:25:16.903 [INFO][4937] k8s.go 621: Teardown processing complete. ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:25:16.906608 containerd[1462]: time="2024-10-09T07:25:16.906205918Z" level=info msg="TearDown network for sandbox \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\" successfully" Oct 9 07:25:16.906608 containerd[1462]: time="2024-10-09T07:25:16.906231637Z" level=info msg="StopPodSandbox for \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\" returns successfully" Oct 9 07:25:16.906726 containerd[1462]: time="2024-10-09T07:25:16.906667991Z" level=info msg="RemovePodSandbox for \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\"" Oct 9 07:25:16.906726 containerd[1462]: time="2024-10-09T07:25:16.906720393Z" level=info msg="Forcibly stopping sandbox \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\"" Oct 9 07:25:17.023193 containerd[1462]: 2024-10-09 07:25:16.937 [WARNING][4967] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0", GenerateName:"calico-kube-controllers-696496d9fb-", Namespace:"calico-system", SelfLink:"", UID:"8f13c8aa-d4e2-4627-b65a-927e3b23dfe5", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"696496d9fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e8b1e77066b3782115bcec6910a983878b867025c5140110a2257f14e76d5db", Pod:"calico-kube-controllers-696496d9fb-275b5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93efd645eda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:17.023193 containerd[1462]: 2024-10-09 07:25:16.937 [INFO][4967] k8s.go 608: Cleaning up netns ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:25:17.023193 containerd[1462]: 2024-10-09 07:25:16.938 [INFO][4967] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" iface="eth0" netns="" Oct 9 07:25:17.023193 containerd[1462]: 2024-10-09 07:25:16.938 [INFO][4967] k8s.go 615: Releasing IP address(es) ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:25:17.023193 containerd[1462]: 2024-10-09 07:25:16.938 [INFO][4967] utils.go 188: Calico CNI releasing IP address ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:25:17.023193 containerd[1462]: 2024-10-09 07:25:16.957 [INFO][4975] ipam_plugin.go 417: Releasing address using handleID ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" HandleID="k8s-pod-network.ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Workload="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:25:17.023193 containerd[1462]: 2024-10-09 07:25:16.957 [INFO][4975] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:17.023193 containerd[1462]: 2024-10-09 07:25:16.957 [INFO][4975] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:25:17.023193 containerd[1462]: 2024-10-09 07:25:17.017 [WARNING][4975] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" HandleID="k8s-pod-network.ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Workload="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:25:17.023193 containerd[1462]: 2024-10-09 07:25:17.017 [INFO][4975] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" HandleID="k8s-pod-network.ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Workload="localhost-k8s-calico--kube--controllers--696496d9fb--275b5-eth0" Oct 9 07:25:17.023193 containerd[1462]: 2024-10-09 07:25:17.018 [INFO][4975] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:25:17.023193 containerd[1462]: 2024-10-09 07:25:17.020 [INFO][4967] k8s.go 621: Teardown processing complete. ContainerID="ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6" Oct 9 07:25:17.023807 containerd[1462]: time="2024-10-09T07:25:17.023219453Z" level=info msg="TearDown network for sandbox \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\" successfully" Oct 9 07:25:17.062202 containerd[1462]: time="2024-10-09T07:25:17.062155977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:25:17.062359 containerd[1462]: time="2024-10-09T07:25:17.062225872Z" level=info msg="RemovePodSandbox \"ad0b73a5b56f5b479cc22cc416f561d32c6cd4f003d46e1ecc59ba48a04d8ca6\" returns successfully" Oct 9 07:25:17.062733 containerd[1462]: time="2024-10-09T07:25:17.062696411Z" level=info msg="StopPodSandbox for \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\"" Oct 9 07:25:17.157329 containerd[1462]: 2024-10-09 07:25:17.092 [WARNING][5017] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hsltf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebf1fe33-16c6-4476-9371-316390576226", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e", Pod:"csi-node-driver-hsltf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"calie390a35c7bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:17.157329 containerd[1462]: 2024-10-09 07:25:17.092 [INFO][5017] k8s.go 608: Cleaning up netns ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:25:17.157329 containerd[1462]: 2024-10-09 07:25:17.092 [INFO][5017] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" iface="eth0" netns="" Oct 9 07:25:17.157329 containerd[1462]: 2024-10-09 07:25:17.092 [INFO][5017] k8s.go 615: Releasing IP address(es) ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:25:17.157329 containerd[1462]: 2024-10-09 07:25:17.092 [INFO][5017] utils.go 188: Calico CNI releasing IP address ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:25:17.157329 containerd[1462]: 2024-10-09 07:25:17.110 [INFO][5024] ipam_plugin.go 417: Releasing address using handleID ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" HandleID="k8s-pod-network.fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Workload="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:17.157329 containerd[1462]: 2024-10-09 07:25:17.110 [INFO][5024] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:17.157329 containerd[1462]: 2024-10-09 07:25:17.110 [INFO][5024] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:25:17.157329 containerd[1462]: 2024-10-09 07:25:17.115 [WARNING][5024] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" HandleID="k8s-pod-network.fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Workload="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:17.157329 containerd[1462]: 2024-10-09 07:25:17.115 [INFO][5024] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" HandleID="k8s-pod-network.fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Workload="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:17.157329 containerd[1462]: 2024-10-09 07:25:17.153 [INFO][5024] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:25:17.157329 containerd[1462]: 2024-10-09 07:25:17.155 [INFO][5017] k8s.go 621: Teardown processing complete. ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:25:17.157782 containerd[1462]: time="2024-10-09T07:25:17.157391064Z" level=info msg="TearDown network for sandbox \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\" successfully" Oct 9 07:25:17.157782 containerd[1462]: time="2024-10-09T07:25:17.157415451Z" level=info msg="StopPodSandbox for \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\" returns successfully" Oct 9 07:25:17.157986 containerd[1462]: time="2024-10-09T07:25:17.157967727Z" level=info msg="RemovePodSandbox for \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\"" Oct 9 07:25:17.158021 containerd[1462]: time="2024-10-09T07:25:17.157992726Z" level=info msg="Forcibly stopping sandbox \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\"" Oct 9 07:25:17.216603 containerd[1462]: 2024-10-09 07:25:17.188 [WARNING][5047] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hsltf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebf1fe33-16c6-4476-9371-316390576226", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67bf1762753d4f2a182653acb046aa24cb871fec35310548e7a25241eceeaf3e", Pod:"csi-node-driver-hsltf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie390a35c7bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:25:17.216603 containerd[1462]: 2024-10-09 07:25:17.188 [INFO][5047] k8s.go 608: Cleaning up netns ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:25:17.216603 containerd[1462]: 2024-10-09 07:25:17.188 [INFO][5047] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" iface="eth0" netns="" Oct 9 07:25:17.216603 containerd[1462]: 2024-10-09 07:25:17.188 [INFO][5047] k8s.go 615: Releasing IP address(es) ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:25:17.216603 containerd[1462]: 2024-10-09 07:25:17.188 [INFO][5047] utils.go 188: Calico CNI releasing IP address ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:25:17.216603 containerd[1462]: 2024-10-09 07:25:17.206 [INFO][5054] ipam_plugin.go 417: Releasing address using handleID ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" HandleID="k8s-pod-network.fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Workload="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:17.216603 containerd[1462]: 2024-10-09 07:25:17.206 [INFO][5054] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:25:17.216603 containerd[1462]: 2024-10-09 07:25:17.206 [INFO][5054] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:25:17.216603 containerd[1462]: 2024-10-09 07:25:17.211 [WARNING][5054] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" HandleID="k8s-pod-network.fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Workload="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:17.216603 containerd[1462]: 2024-10-09 07:25:17.211 [INFO][5054] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" HandleID="k8s-pod-network.fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Workload="localhost-k8s-csi--node--driver--hsltf-eth0" Oct 9 07:25:17.216603 containerd[1462]: 2024-10-09 07:25:17.212 [INFO][5054] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:25:17.216603 containerd[1462]: 2024-10-09 07:25:17.214 [INFO][5047] k8s.go 621: Teardown processing complete. ContainerID="fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8" Oct 9 07:25:17.217149 containerd[1462]: time="2024-10-09T07:25:17.216645224Z" level=info msg="TearDown network for sandbox \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\" successfully" Oct 9 07:25:17.220757 containerd[1462]: time="2024-10-09T07:25:17.220730144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:25:17.220840 containerd[1462]: time="2024-10-09T07:25:17.220785801Z" level=info msg="RemovePodSandbox \"fc8cbc25cd3fab8d5e66561626427b4a64f6b9b9147c180d06eba5b15700efe8\" returns successfully" Oct 9 07:25:18.573672 systemd[1]: Started sshd@17-10.0.0.107:22-10.0.0.1:42008.service - OpenSSH per-connection server daemon (10.0.0.1:42008). Oct 9 07:25:18.611641 sshd[5069]: Accepted publickey for core from 10.0.0.1 port 42008 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:25:18.613066 sshd[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:18.616985 systemd-logind[1441]: New session 18 of user core. Oct 9 07:25:18.626832 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 07:25:18.738569 sshd[5069]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:18.751600 systemd[1]: sshd@17-10.0.0.107:22-10.0.0.1:42008.service: Deactivated successfully. Oct 9 07:25:18.753849 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 07:25:18.755626 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit. 
Oct 9 07:25:18.771215 systemd[1]: Started sshd@18-10.0.0.107:22-10.0.0.1:42010.service - OpenSSH per-connection server daemon (10.0.0.1:42010). Oct 9 07:25:18.772177 systemd-logind[1441]: Removed session 18. Oct 9 07:25:18.800666 sshd[5084]: Accepted publickey for core from 10.0.0.1 port 42010 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:25:18.801942 sshd[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:18.805747 systemd-logind[1441]: New session 19 of user core. Oct 9 07:25:18.814793 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 07:25:18.996141 sshd[5084]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:19.008632 systemd[1]: sshd@18-10.0.0.107:22-10.0.0.1:42010.service: Deactivated successfully. Oct 9 07:25:19.010560 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 07:25:19.012276 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit. Oct 9 07:25:19.013555 systemd[1]: Started sshd@19-10.0.0.107:22-10.0.0.1:42018.service - OpenSSH per-connection server daemon (10.0.0.1:42018). Oct 9 07:25:19.014743 systemd-logind[1441]: Removed session 19. Oct 9 07:25:19.051638 sshd[5102]: Accepted publickey for core from 10.0.0.1 port 42018 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:25:19.053008 sshd[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:19.056859 systemd-logind[1441]: New session 20 of user core. Oct 9 07:25:19.065801 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 07:25:20.492403 sshd[5102]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:20.510004 systemd[1]: Started sshd@20-10.0.0.107:22-10.0.0.1:42030.service - OpenSSH per-connection server daemon (10.0.0.1:42030). Oct 9 07:25:20.510702 systemd[1]: sshd@19-10.0.0.107:22-10.0.0.1:42018.service: Deactivated successfully. 
Oct 9 07:25:20.514088 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 07:25:20.514949 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit. Oct 9 07:25:20.517272 systemd-logind[1441]: Removed session 20. Oct 9 07:25:20.541395 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 42030 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:25:20.542985 sshd[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:20.546962 systemd-logind[1441]: New session 21 of user core. Oct 9 07:25:20.557824 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 07:25:20.772121 sshd[5122]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:20.782107 systemd[1]: sshd@20-10.0.0.107:22-10.0.0.1:42030.service: Deactivated successfully. Oct 9 07:25:20.784374 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 07:25:20.786053 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit. Oct 9 07:25:20.787665 systemd[1]: Started sshd@21-10.0.0.107:22-10.0.0.1:49118.service - OpenSSH per-connection server daemon (10.0.0.1:49118). Oct 9 07:25:20.788571 systemd-logind[1441]: Removed session 21. Oct 9 07:25:20.823183 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 49118 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:25:20.824616 sshd[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:20.828453 systemd-logind[1441]: New session 22 of user core. Oct 9 07:25:20.837813 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 07:25:20.950205 sshd[5136]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:20.954533 systemd[1]: sshd@21-10.0.0.107:22-10.0.0.1:49118.service: Deactivated successfully. Oct 9 07:25:20.956865 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 07:25:20.957634 systemd-logind[1441]: Session 22 logged out. 
Waiting for processes to exit. Oct 9 07:25:20.958702 systemd-logind[1441]: Removed session 22. Oct 9 07:25:25.967789 systemd[1]: Started sshd@22-10.0.0.107:22-10.0.0.1:49126.service - OpenSSH per-connection server daemon (10.0.0.1:49126). Oct 9 07:25:26.004162 sshd[5156]: Accepted publickey for core from 10.0.0.1 port 49126 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:25:26.005631 sshd[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:26.009161 systemd-logind[1441]: New session 23 of user core. Oct 9 07:25:26.014808 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 07:25:26.125257 sshd[5156]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:26.129056 systemd[1]: sshd@22-10.0.0.107:22-10.0.0.1:49126.service: Deactivated successfully. Oct 9 07:25:26.130887 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 07:25:26.131548 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit. Oct 9 07:25:26.132530 systemd-logind[1441]: Removed session 23. Oct 9 07:25:31.136315 systemd[1]: Started sshd@23-10.0.0.107:22-10.0.0.1:58672.service - OpenSSH per-connection server daemon (10.0.0.1:58672). Oct 9 07:25:31.171626 sshd[5177]: Accepted publickey for core from 10.0.0.1 port 58672 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:25:31.173037 sshd[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:31.176423 systemd-logind[1441]: New session 24 of user core. Oct 9 07:25:31.183814 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 9 07:25:31.286020 sshd[5177]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:31.290000 systemd[1]: sshd@23-10.0.0.107:22-10.0.0.1:58672.service: Deactivated successfully. Oct 9 07:25:31.292038 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 07:25:31.292672 systemd-logind[1441]: Session 24 logged out. 
Waiting for processes to exit. Oct 9 07:25:31.293618 systemd-logind[1441]: Removed session 24. Oct 9 07:25:36.302941 systemd[1]: Started sshd@24-10.0.0.107:22-10.0.0.1:58674.service - OpenSSH per-connection server daemon (10.0.0.1:58674). Oct 9 07:25:36.336529 sshd[5191]: Accepted publickey for core from 10.0.0.1 port 58674 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8 Oct 9 07:25:36.340197 sshd[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:36.345375 systemd-logind[1441]: New session 25 of user core. Oct 9 07:25:36.356654 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 9 07:25:36.377835 kubelet[2566]: I1009 07:25:36.377784 2566 topology_manager.go:215] "Topology Admit Handler" podUID="8cc43731-658c-4aa5-b775-d1e19c5af31f" podNamespace="calico-apiserver" podName="calico-apiserver-6bc458f7f9-nn5n2" Oct 9 07:25:36.389386 systemd[1]: Created slice kubepods-besteffort-pod8cc43731_658c_4aa5_b775_d1e19c5af31f.slice - libcontainer container kubepods-besteffort-pod8cc43731_658c_4aa5_b775_d1e19c5af31f.slice. Oct 9 07:25:36.494674 sshd[5191]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:36.497577 systemd[1]: sshd@24-10.0.0.107:22-10.0.0.1:58674.service: Deactivated successfully. Oct 9 07:25:36.501130 systemd[1]: session-25.scope: Deactivated successfully. Oct 9 07:25:36.502287 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit. Oct 9 07:25:36.504184 systemd-logind[1441]: Removed session 25. 
Oct 9 07:25:36.510437 kubelet[2566]: I1009 07:25:36.510378 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4xpq\" (UniqueName: \"kubernetes.io/projected/8cc43731-658c-4aa5-b775-d1e19c5af31f-kube-api-access-z4xpq\") pod \"calico-apiserver-6bc458f7f9-nn5n2\" (UID: \"8cc43731-658c-4aa5-b775-d1e19c5af31f\") " pod="calico-apiserver/calico-apiserver-6bc458f7f9-nn5n2" Oct 9 07:25:36.510751 kubelet[2566]: I1009 07:25:36.510722 2566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8cc43731-658c-4aa5-b775-d1e19c5af31f-calico-apiserver-certs\") pod \"calico-apiserver-6bc458f7f9-nn5n2\" (UID: \"8cc43731-658c-4aa5-b775-d1e19c5af31f\") " pod="calico-apiserver/calico-apiserver-6bc458f7f9-nn5n2" Oct 9 07:25:36.611181 kubelet[2566]: E1009 07:25:36.611045 2566 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:25:36.611181 kubelet[2566]: E1009 07:25:36.611136 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cc43731-658c-4aa5-b775-d1e19c5af31f-calico-apiserver-certs podName:8cc43731-658c-4aa5-b775-d1e19c5af31f nodeName:}" failed. No retries permitted until 2024-10-09 07:25:37.111116616 +0000 UTC m=+80.926610834 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8cc43731-658c-4aa5-b775-d1e19c5af31f-calico-apiserver-certs") pod "calico-apiserver-6bc458f7f9-nn5n2" (UID: "8cc43731-658c-4aa5-b775-d1e19c5af31f") : secret "calico-apiserver-certs" not found
Oct 9 07:25:37.113829 kubelet[2566]: E1009 07:25:37.113771 2566 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Oct 9 07:25:37.113966 kubelet[2566]: E1009 07:25:37.113859 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cc43731-658c-4aa5-b775-d1e19c5af31f-calico-apiserver-certs podName:8cc43731-658c-4aa5-b775-d1e19c5af31f nodeName:}" failed. No retries permitted until 2024-10-09 07:25:38.113840456 +0000 UTC m=+81.929334675 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8cc43731-658c-4aa5-b775-d1e19c5af31f-calico-apiserver-certs") pod "calico-apiserver-6bc458f7f9-nn5n2" (UID: "8cc43731-658c-4aa5-b775-d1e19c5af31f") : secret "calico-apiserver-certs" not found
Oct 9 07:25:38.193961 containerd[1462]: time="2024-10-09T07:25:38.193892030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc458f7f9-nn5n2,Uid:8cc43731-658c-4aa5-b775-d1e19c5af31f,Namespace:calico-apiserver,Attempt:0,}"
Oct 9 07:25:38.262585 kubelet[2566]: E1009 07:25:38.262209 2566 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:25:38.304639 systemd-networkd[1397]: cali14d4c2f22aa: Link UP
Oct 9 07:25:38.305473 systemd-networkd[1397]: cali14d4c2f22aa: Gained carrier
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.238 [INFO][5217] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-eth0 calico-apiserver-6bc458f7f9- calico-apiserver 8cc43731-658c-4aa5-b775-d1e19c5af31f 1143 0 2024-10-09 07:25:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bc458f7f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bc458f7f9-nn5n2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali14d4c2f22aa [] []}} ContainerID="9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc458f7f9-nn5n2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-"
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.238 [INFO][5217] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc458f7f9-nn5n2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-eth0"
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.266 [INFO][5230] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" HandleID="k8s-pod-network.9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" Workload="localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-eth0"
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.275 [INFO][5230] ipam_plugin.go 270: Auto assigning IP ContainerID="9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" HandleID="k8s-pod-network.9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" Workload="localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000307250), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bc458f7f9-nn5n2", "timestamp":"2024-10-09 07:25:38.266286502 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.275 [INFO][5230] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.275 [INFO][5230] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.275 [INFO][5230] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.277 [INFO][5230] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" host="localhost"
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.280 [INFO][5230] ipam.go 372: Looking up existing affinities for host host="localhost"
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.285 [INFO][5230] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.286 [INFO][5230] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.288 [INFO][5230] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.288 [INFO][5230] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" host="localhost"
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.290 [INFO][5230] ipam.go 1685: Creating new handle: k8s-pod-network.9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.294 [INFO][5230] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" host="localhost"
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.298 [INFO][5230] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" host="localhost"
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.298 [INFO][5230] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" host="localhost"
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.298 [INFO][5230] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:25:38.317165 containerd[1462]: 2024-10-09 07:25:38.298 [INFO][5230] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" HandleID="k8s-pod-network.9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" Workload="localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-eth0"
Oct 9 07:25:38.318802 containerd[1462]: 2024-10-09 07:25:38.302 [INFO][5217] k8s.go 386: Populated endpoint ContainerID="9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc458f7f9-nn5n2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-eth0", GenerateName:"calico-apiserver-6bc458f7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8cc43731-658c-4aa5-b775-d1e19c5af31f", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 25, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc458f7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bc458f7f9-nn5n2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14d4c2f22aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:25:38.318802 containerd[1462]: 2024-10-09 07:25:38.302 [INFO][5217] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc458f7f9-nn5n2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-eth0"
Oct 9 07:25:38.318802 containerd[1462]: 2024-10-09 07:25:38.302 [INFO][5217] dataplane_linux.go 68: Setting the host side veth name to cali14d4c2f22aa ContainerID="9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc458f7f9-nn5n2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-eth0"
Oct 9 07:25:38.318802 containerd[1462]: 2024-10-09 07:25:38.304 [INFO][5217] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc458f7f9-nn5n2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-eth0"
Oct 9 07:25:38.318802 containerd[1462]: 2024-10-09 07:25:38.305 [INFO][5217] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc458f7f9-nn5n2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-eth0", GenerateName:"calico-apiserver-6bc458f7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8cc43731-658c-4aa5-b775-d1e19c5af31f", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 25, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc458f7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a", Pod:"calico-apiserver-6bc458f7f9-nn5n2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14d4c2f22aa", MAC:"de:9f:69:21:f1:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:25:38.318802 containerd[1462]: 2024-10-09 07:25:38.312 [INFO][5217] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc458f7f9-nn5n2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc458f7f9--nn5n2-eth0"
Oct 9 07:25:38.349299 containerd[1462]: time="2024-10-09T07:25:38.349109904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:25:38.349837 containerd[1462]: time="2024-10-09T07:25:38.349769292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:25:38.349837 containerd[1462]: time="2024-10-09T07:25:38.349795631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:25:38.349837 containerd[1462]: time="2024-10-09T07:25:38.349807745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:25:38.378827 systemd[1]: Started cri-containerd-9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a.scope - libcontainer container 9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a.
Oct 9 07:25:38.391793 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 9 07:25:38.415430 containerd[1462]: time="2024-10-09T07:25:38.415389473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc458f7f9-nn5n2,Uid:8cc43731-658c-4aa5-b775-d1e19c5af31f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a\""
Oct 9 07:25:38.416966 containerd[1462]: time="2024-10-09T07:25:38.416929390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 9 07:25:39.959343 systemd[1]: run-containerd-runc-k8s.io-42f2a7dd46fb5730ccca247398140e8375292943e58bfddabc8b49950767e1fb-runc.RxXlEU.mount: Deactivated successfully.
Oct 9 07:25:40.360844 systemd-networkd[1397]: cali14d4c2f22aa: Gained IPv6LL
Oct 9 07:25:41.303799 containerd[1462]: time="2024-10-09T07:25:41.303758601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:25:41.304726 containerd[1462]: time="2024-10-09T07:25:41.304699083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 9 07:25:41.307696 containerd[1462]: time="2024-10-09T07:25:41.306923069Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:25:41.311160 containerd[1462]: time="2024-10-09T07:25:41.311113752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:25:41.312137 containerd[1462]: time="2024-10-09T07:25:41.312085704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.895114002s"
Oct 9 07:25:41.312137 containerd[1462]: time="2024-10-09T07:25:41.312123997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 9 07:25:41.313971 containerd[1462]: time="2024-10-09T07:25:41.313932351Z" level=info msg="CreateContainer within sandbox \"9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 9 07:25:41.334457 containerd[1462]: time="2024-10-09T07:25:41.334394740Z" level=info msg="CreateContainer within sandbox \"9741f66d25ca11c4c89a0f5617b3f95b5b22649591be0f938a827d2751da8d6a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2e58ab19c54fe0cc336c446dc3b1d949de5a8537607878e5f4f01b82726534cd\""
Oct 9 07:25:41.334986 containerd[1462]: time="2024-10-09T07:25:41.334961159Z" level=info msg="StartContainer for \"2e58ab19c54fe0cc336c446dc3b1d949de5a8537607878e5f4f01b82726534cd\""
Oct 9 07:25:41.370825 systemd[1]: Started cri-containerd-2e58ab19c54fe0cc336c446dc3b1d949de5a8537607878e5f4f01b82726534cd.scope - libcontainer container 2e58ab19c54fe0cc336c446dc3b1d949de5a8537607878e5f4f01b82726534cd.
Oct 9 07:25:41.463282 containerd[1462]: time="2024-10-09T07:25:41.463228097Z" level=info msg="StartContainer for \"2e58ab19c54fe0cc336c446dc3b1d949de5a8537607878e5f4f01b82726534cd\" returns successfully"
Oct 9 07:25:41.476555 kubelet[2566]: I1009 07:25:41.476345 2566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bc458f7f9-nn5n2" podStartSLOduration=2.580304976 podStartE2EDuration="5.476307432s" podCreationTimestamp="2024-10-09 07:25:36 +0000 UTC" firstStartedPulling="2024-10-09 07:25:38.416407365 +0000 UTC m=+82.231901583" lastFinishedPulling="2024-10-09 07:25:41.312409821 +0000 UTC m=+85.127904039" observedRunningTime="2024-10-09 07:25:41.476031026 +0000 UTC m=+85.291525244" watchObservedRunningTime="2024-10-09 07:25:41.476307432 +0000 UTC m=+85.291801650"
Oct 9 07:25:41.506837 systemd[1]: Started sshd@25-10.0.0.107:22-10.0.0.1:33560.service - OpenSSH per-connection server daemon (10.0.0.1:33560).
Oct 9 07:25:41.544883 sshd[5389]: Accepted publickey for core from 10.0.0.1 port 33560 ssh2: RSA SHA256:WUwW0BIGVpmLb0JbqOuUi3u8OpR0rDp6Veh7R76D0+8
Oct 9 07:25:41.546919 sshd[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:25:41.551794 systemd-logind[1441]: New session 26 of user core.
Oct 9 07:25:41.557871 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 9 07:25:41.693520 sshd[5389]: pam_unix(sshd:session): session closed for user core
Oct 9 07:25:41.698125 systemd[1]: sshd@25-10.0.0.107:22-10.0.0.1:33560.service: Deactivated successfully.
Oct 9 07:25:41.701037 systemd[1]: session-26.scope: Deactivated successfully.
Oct 9 07:25:41.704124 systemd-logind[1441]: Session 26 logged out. Waiting for processes to exit.
Oct 9 07:25:41.705298 systemd-logind[1441]: Removed session 26.