Dec 13 01:33:49.026349 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:33:49.026384 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:33:49.026397 kernel: BIOS-provided physical RAM map:
Dec 13 01:33:49.026405 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 01:33:49.026412 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 01:33:49.026419 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 01:33:49.026428 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 01:33:49.026436 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 01:33:49.026444 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Dec 13 01:33:49.026451 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Dec 13 01:33:49.026465 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Dec 13 01:33:49.026473 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Dec 13 01:33:49.026481 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Dec 13 01:33:49.026489 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Dec 13 01:33:49.026502 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Dec 13 01:33:49.026510 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 01:33:49.026521 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Dec 13 01:33:49.026529 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Dec 13 01:33:49.026537 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 01:33:49.026546 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:33:49.026554 kernel: NX (Execute Disable) protection: active
Dec 13 01:33:49.026566 kernel: APIC: Static calls initialized
Dec 13 01:33:49.026580 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:33:49.026589 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Dec 13 01:33:49.026598 kernel: SMBIOS 2.8 present.
Dec 13 01:33:49.026606 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Dec 13 01:33:49.026614 kernel: Hypervisor detected: KVM
Dec 13 01:33:49.026626 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:33:49.026637 kernel: kvm-clock: using sched offset of 5989350957 cycles
Dec 13 01:33:49.026645 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:33:49.026656 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:33:49.026667 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:33:49.026678 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:33:49.026689 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Dec 13 01:33:49.026700 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 13 01:33:49.026711 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:33:49.026724 kernel: Using GB pages for direct mapping
Dec 13 01:33:49.026735 kernel: Secure boot disabled
Dec 13 01:33:49.026745 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:33:49.026756 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 13 01:33:49.026779 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 01:33:49.026790 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:49.026801 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:49.026816 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 13 01:33:49.026827 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:49.026839 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:49.026859 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:49.026869 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:49.026878 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 01:33:49.026887 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 13 01:33:49.026899 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Dec 13 01:33:49.026908 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 13 01:33:49.026917 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 13 01:33:49.026926 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 13 01:33:49.026938 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 13 01:33:49.026947 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 13 01:33:49.026955 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 13 01:33:49.026967 kernel: No NUMA configuration found
Dec 13 01:33:49.026976 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Dec 13 01:33:49.026988 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Dec 13 01:33:49.026999 kernel: Zone ranges:
Dec 13 01:33:49.027009 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:33:49.027018 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Dec 13 01:33:49.027026 kernel: Normal empty
Dec 13 01:33:49.027035 kernel: Movable zone start for each node
Dec 13 01:33:49.027044 kernel: Early memory node ranges
Dec 13 01:33:49.027053 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 01:33:49.027062 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 13 01:33:49.027071 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 13 01:33:49.027083 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Dec 13 01:33:49.027094 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Dec 13 01:33:49.027103 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Dec 13 01:33:49.027115 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Dec 13 01:33:49.027124 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:33:49.027152 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 01:33:49.027163 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 13 01:33:49.027172 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:33:49.027181 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Dec 13 01:33:49.027194 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 13 01:33:49.027204 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Dec 13 01:33:49.027215 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:33:49.027224 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:33:49.027233 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:33:49.027242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:33:49.027251 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:33:49.027260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:33:49.027269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:33:49.027281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:33:49.027290 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:33:49.027301 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:33:49.027310 kernel: TSC deadline timer available
Dec 13 01:33:49.027319 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:33:49.027328 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:33:49.027337 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:33:49.027346 kernel: kvm-guest: setup PV sched yield
Dec 13 01:33:49.027360 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 01:33:49.027373 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:33:49.027382 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:33:49.027392 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:33:49.027400 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 01:33:49.027409 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 01:33:49.027418 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:33:49.027427 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:33:49.027436 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:33:49.027449 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:33:49.027461 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:33:49.027484 kernel: random: crng init done
Dec 13 01:33:49.027495 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:33:49.027504 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:33:49.027513 kernel: Fallback order for Node 0: 0
Dec 13 01:33:49.027522 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Dec 13 01:33:49.027530 kernel: Policy zone: DMA32
Dec 13 01:33:49.027539 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:33:49.027552 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Dec 13 01:33:49.027572 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:33:49.028618 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:33:49.028637 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:33:49.028649 kernel: Dynamic Preempt: voluntary
Dec 13 01:33:49.028674 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:33:49.028690 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:33:49.028702 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:33:49.028714 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:33:49.028726 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:33:49.028738 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:33:49.028749 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:33:49.028768 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:33:49.028780 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:33:49.028795 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:33:49.028807 kernel: Console: colour dummy device 80x25
Dec 13 01:33:49.028819 kernel: printk: console [ttyS0] enabled
Dec 13 01:33:49.028836 kernel: ACPI: Core revision 20230628
Dec 13 01:33:49.028861 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:33:49.028872 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:33:49.028881 kernel: x2apic enabled
Dec 13 01:33:49.028893 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:33:49.028903 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 01:33:49.028913 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 01:33:49.028922 kernel: kvm-guest: setup PV IPIs
Dec 13 01:33:49.028931 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:33:49.028944 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:33:49.028954 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:33:49.028963 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:33:49.028972 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:33:49.028982 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:33:49.028991 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:33:49.029001 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:33:49.029010 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:33:49.029020 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:33:49.029032 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:33:49.029042 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:33:49.029054 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:33:49.029064 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:33:49.029073 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:33:49.029083 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:33:49.029093 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:33:49.029102 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:33:49.029115 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:33:49.029124 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:33:49.029163 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:33:49.029173 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:33:49.029182 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:33:49.029191 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:33:49.029201 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:33:49.029210 kernel: landlock: Up and running.
Dec 13 01:33:49.029219 kernel: SELinux: Initializing.
Dec 13 01:33:49.029233 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:33:49.029243 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:33:49.029252 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:33:49.029262 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:33:49.029271 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:33:49.029281 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:33:49.029290 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:33:49.029300 kernel: ... version: 0
Dec 13 01:33:49.029309 kernel: ... bit width: 48
Dec 13 01:33:49.029322 kernel: ... generic registers: 6
Dec 13 01:33:49.029331 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:33:49.029340 kernel: ... max period: 00007fffffffffff
Dec 13 01:33:49.029350 kernel: ... fixed-purpose events: 0
Dec 13 01:33:49.029361 kernel: ... event mask: 000000000000003f
Dec 13 01:33:49.029371 kernel: signal: max sigframe size: 1776
Dec 13 01:33:49.029380 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:33:49.029390 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:33:49.029399 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:33:49.029419 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:33:49.029431 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 01:33:49.029440 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:33:49.029449 kernel: smpboot: Max logical packages: 1
Dec 13 01:33:49.029459 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:33:49.029469 kernel: devtmpfs: initialized
Dec 13 01:33:49.029478 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:33:49.029488 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 13 01:33:49.029497 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 13 01:33:49.029515 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Dec 13 01:33:49.029529 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 13 01:33:49.029542 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 13 01:33:49.029552 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:33:49.029561 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:33:49.029571 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:33:49.029580 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:33:49.029590 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:33:49.029599 kernel: audit: type=2000 audit(1734053627.733:1): state=initialized audit_enabled=0 res=1
Dec 13 01:33:49.029612 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:33:49.029622 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:33:49.029631 kernel: cpuidle: using governor menu
Dec 13 01:33:49.029641 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:33:49.029650 kernel: dca service started, version 1.12.1
Dec 13 01:33:49.029660 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:33:49.029670 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 01:33:49.029679 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:33:49.029689 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:33:49.029701 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:33:49.029710 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:33:49.029720 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:33:49.029729 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:33:49.029738 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:33:49.029748 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:33:49.029757 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:33:49.029767 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:33:49.029776 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:33:49.029788 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:33:49.029798 kernel: ACPI: Interpreter enabled
Dec 13 01:33:49.029807 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:33:49.029816 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:33:49.029826 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:33:49.029835 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:33:49.029852 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:33:49.029861 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:33:49.030189 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:33:49.030360 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:33:49.030504 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:33:49.030516 kernel: PCI host bridge to bus 0000:00
Dec 13 01:33:49.030708 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:33:49.030840 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:33:49.030980 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:33:49.031114 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:33:49.031259 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:33:49.031394 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Dec 13 01:33:49.031527 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:33:49.031721 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:33:49.031924 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:33:49.032075 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Dec 13 01:33:49.032269 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Dec 13 01:33:49.032439 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Dec 13 01:33:49.032584 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Dec 13 01:33:49.032726 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:33:49.032910 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:33:49.033057 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Dec 13 01:33:49.033227 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Dec 13 01:33:49.033394 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Dec 13 01:33:49.033558 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:33:49.033703 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Dec 13 01:33:49.033859 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Dec 13 01:33:49.034008 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Dec 13 01:33:49.034200 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:33:49.034355 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Dec 13 01:33:49.034501 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Dec 13 01:33:49.034660 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Dec 13 01:33:49.034819 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Dec 13 01:33:49.034990 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:33:49.035151 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:33:49.035310 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:33:49.035475 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Dec 13 01:33:49.035617 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Dec 13 01:33:49.035778 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:33:49.035932 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Dec 13 01:33:49.035945 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:33:49.035954 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:33:49.035964 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:33:49.035987 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:33:49.036003 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:33:49.036015 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:33:49.036024 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:33:49.036034 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:33:49.036044 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:33:49.036053 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:33:49.036063 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:33:49.036073 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:33:49.036086 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:33:49.036095 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:33:49.036104 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:33:49.036114 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:33:49.036124 kernel: iommu: Default domain type: Translated
Dec 13 01:33:49.036169 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:33:49.036179 kernel: efivars: Registered efivars operations
Dec 13 01:33:49.036189 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:33:49.036198 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:33:49.036211 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 13 01:33:49.036221 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Dec 13 01:33:49.036241 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Dec 13 01:33:49.036255 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Dec 13 01:33:49.036417 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:33:49.036561 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:33:49.036726 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:33:49.036740 kernel: vgaarb: loaded
Dec 13 01:33:49.036750 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:33:49.036765 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:33:49.036774 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:33:49.036784 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:33:49.036794 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:33:49.036803 kernel: pnp: PnP ACPI init
Dec 13 01:33:49.036977 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:33:49.036991 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:33:49.037002 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:33:49.037015 kernel: NET: Registered PF_INET protocol family
Dec 13 01:33:49.037025 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:33:49.037035 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:33:49.037045 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:33:49.037054 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:33:49.037064 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:33:49.037073 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:33:49.037083 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:33:49.037093 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:33:49.037105 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:33:49.037115 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:33:49.037347 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Dec 13 01:33:49.037502 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Dec 13 01:33:49.037634 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:33:49.037761 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:33:49.037898 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:33:49.038024 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:33:49.038173 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:33:49.038303 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Dec 13 01:33:49.038315 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:33:49.038325 kernel: Initialise system trusted keyrings
Dec 13 01:33:49.038335 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:33:49.038344 kernel: Key type asymmetric registered
Dec 13 01:33:49.038354 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:33:49.038363 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:33:49.038373 kernel: io scheduler mq-deadline registered
Dec 13 01:33:49.038387 kernel: io scheduler kyber registered
Dec 13 01:33:49.038397 kernel: io scheduler bfq registered
Dec 13 01:33:49.038407 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:33:49.038417 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:33:49.038427 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:33:49.038437 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:33:49.038446 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:33:49.038463 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:33:49.038473 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:33:49.038487 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:33:49.038497 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:33:49.038670 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:33:49.038684 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:33:49.038813 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:33:49.038955 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:33:48 UTC (1734053628)
Dec 13 01:33:49.039087 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:33:49.039104 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:33:49.039114 kernel: efifb: probing for efifb
Dec 13 01:33:49.039220 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Dec 13 01:33:49.039231 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Dec 13 01:33:49.039241 kernel: efifb: scrolling: redraw
Dec 13 01:33:49.039250 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Dec 13 01:33:49.039260 kernel: Console: switching to colour frame buffer device 100x37
Dec 13 01:33:49.039292 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:33:49.039305 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:33:49.039318 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 01:33:49.039328 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:33:49.039337 kernel: Segment Routing with IPv6
Dec 13 01:33:49.039348 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:33:49.039358 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:33:49.039368 kernel: Key type dns_resolver registered
Dec 13 01:33:49.039377 kernel: IPI shorthand broadcast: enabled
Dec 13 01:33:49.039387 kernel: sched_clock: Marking stable (1306007528, 150293836)->(1668687194, -212385830)
Dec 13 01:33:49.039397 kernel: registered taskstats version 1
Dec 13 01:33:49.039407 kernel: Loading compiled-in X.509 certificates
Dec 13 01:33:49.039420 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:33:49.039430 kernel: Key type .fscrypt registered
Dec 13 01:33:49.039439 kernel: Key type fscrypt-provisioning registered
Dec 13 01:33:49.039449 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:33:49.039462 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:33:49.039472 kernel: ima: No architecture policies found
Dec 13 01:33:49.039482 kernel: clk: Disabling unused clocks
Dec 13 01:33:49.039492 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:33:49.039504 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:33:49.039514 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:33:49.039525 kernel: Run /init as init process
Dec 13 01:33:49.039537 kernel: with arguments:
Dec 13 01:33:49.039546 kernel: /init
Dec 13 01:33:49.039556 kernel: with environment:
Dec 13 01:33:49.039566 kernel: HOME=/
Dec 13 01:33:49.039575 kernel: TERM=linux
Dec 13 01:33:49.039585 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:33:49.039600 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:33:49.039613 systemd[1]: Detected virtualization kvm.
Dec 13 01:33:49.039624 systemd[1]: Detected architecture x86-64.
Dec 13 01:33:49.039634 systemd[1]: Running in initrd.
Dec 13 01:33:49.039649 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:33:49.039659 systemd[1]: Hostname set to .
Dec 13 01:33:49.039677 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:33:49.039690 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:33:49.039700 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:33:49.039711 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:33:49.039722 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:33:49.039732 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:33:49.039751 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:33:49.039762 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:33:49.039775 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:33:49.039785 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:33:49.039796 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:33:49.039806 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:33:49.039817 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:33:49.039830 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:33:49.039841 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:33:49.039864 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:33:49.039886 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:33:49.039897 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:33:49.039908 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:33:49.039919 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:33:49.039930 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:33:49.039940 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:33:49.039955 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:33:49.039965 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:33:49.039982 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:33:49.039994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:33:49.040005 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:33:49.040016 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:33:49.040026 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:33:49.040036 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:33:49.040051 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:33:49.040062 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:33:49.040073 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:33:49.040083 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:33:49.040094 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:33:49.040155 systemd-journald[193]: Collecting audit messages is disabled.
Dec 13 01:33:49.040180 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:33:49.040191 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:33:49.040207 systemd-journald[193]: Journal started
Dec 13 01:33:49.040231 systemd-journald[193]: Runtime Journal (/run/log/journal/e7376425899a411284bc7e63483b1251) is 6.0M, max 48.3M, 42.2M free.
Dec 13 01:33:49.045112 systemd-modules-load[194]: Inserted module 'overlay'
Dec 13 01:33:49.056623 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:33:49.059234 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:33:49.062168 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:33:49.068357 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:33:49.090223 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:33:49.093621 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:33:49.096992 kernel: Bridge firewalling registered
Dec 13 01:33:49.093872 systemd-modules-load[194]: Inserted module 'br_netfilter'
Dec 13 01:33:49.094910 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:33:49.099247 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:33:49.112692 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:33:49.114807 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:33:49.119482 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:33:49.131345 dracut-cmdline[222]: dracut-dracut-053
Dec 13 01:33:49.138826 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:33:49.156348 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:33:49.175495 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:33:49.220503 systemd-resolved[258]: Positive Trust Anchors:
Dec 13 01:33:49.220541 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:33:49.220582 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:33:49.225241 systemd-resolved[258]: Defaulting to hostname 'linux'.
Dec 13 01:33:49.227948 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:33:49.233665 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:33:49.265185 kernel: SCSI subsystem initialized
Dec 13 01:33:49.279205 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:33:49.295208 kernel: iscsi: registered transport (tcp)
Dec 13 01:33:49.324355 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:33:49.324460 kernel: QLogic iSCSI HBA Driver
Dec 13 01:33:49.392577 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:33:49.404262 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:33:49.432190 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:33:49.432277 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:33:49.432291 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:33:49.487189 kernel: raid6: avx2x4 gen() 23607 MB/s
Dec 13 01:33:49.504197 kernel: raid6: avx2x2 gen() 26274 MB/s
Dec 13 01:33:49.521383 kernel: raid6: avx2x1 gen() 23446 MB/s
Dec 13 01:33:49.521469 kernel: raid6: using algorithm avx2x2 gen() 26274 MB/s
Dec 13 01:33:49.539345 kernel: raid6: .... xor() 18762 MB/s, rmw enabled
Dec 13 01:33:49.539444 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:33:49.561184 kernel: xor: automatically using best checksumming function avx
Dec 13 01:33:49.847173 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:33:49.863646 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:33:49.875276 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:33:49.887868 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Dec 13 01:33:49.892748 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:33:49.900313 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:33:49.918828 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Dec 13 01:33:49.958173 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:33:49.973505 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:33:50.040646 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:33:50.051304 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:33:50.067931 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:33:50.070684 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:33:50.072535 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:33:50.074538 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:33:50.084160 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 01:33:50.105029 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:33:50.105058 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:33:50.111943 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:33:50.111962 kernel: GPT:9289727 != 19775487
Dec 13 01:33:50.111977 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:33:50.111992 kernel: GPT:9289727 != 19775487
Dec 13 01:33:50.112006 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:33:50.112021 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:33:50.088528 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:33:50.113663 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:33:50.119821 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:33:50.119849 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:33:50.119864 kernel: libata version 3.00 loaded.
Dec 13 01:33:50.129223 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:33:50.147904 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:33:50.147927 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:33:50.148159 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:33:50.150592 kernel: scsi host0: ahci
Dec 13 01:33:50.150824 kernel: scsi host1: ahci
Dec 13 01:33:50.151115 kernel: scsi host2: ahci
Dec 13 01:33:50.151351 kernel: scsi host3: ahci
Dec 13 01:33:50.151591 kernel: scsi host4: ahci
Dec 13 01:33:50.151827 kernel: scsi host5: ahci
Dec 13 01:33:50.152026 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Dec 13 01:33:50.152042 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Dec 13 01:33:50.152056 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Dec 13 01:33:50.152070 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Dec 13 01:33:50.152091 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Dec 13 01:33:50.152105 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Dec 13 01:33:50.134171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:33:50.134409 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:33:50.159276 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (466)
Dec 13 01:33:50.159328 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459)
Dec 13 01:33:50.136453 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:33:50.138700 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:33:50.138974 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:33:50.148222 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:33:50.160447 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:33:50.194956 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:33:50.200911 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:33:50.206568 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:33:50.208189 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:33:50.216667 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:33:50.230438 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:33:50.233297 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:33:50.233409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:33:50.236080 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:33:50.239600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:33:50.243379 disk-uuid[556]: Primary Header is updated.
Dec 13 01:33:50.243379 disk-uuid[556]: Secondary Entries is updated.
Dec 13 01:33:50.243379 disk-uuid[556]: Secondary Header is updated.
Dec 13 01:33:50.246434 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:33:50.250166 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:33:50.268955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:33:50.280354 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:33:50.311364 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:33:50.458029 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:33:50.458125 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:33:50.458160 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 01:33:50.458176 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:33:50.458190 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:33:50.459161 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:33:50.460163 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:33:50.460198 kernel: ata3.00: applying bridge limits
Dec 13 01:33:50.461195 kernel: ata3.00: configured for UDMA/100
Dec 13 01:33:50.462161 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:33:50.512191 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:33:50.525824 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:33:50.525839 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:33:51.256184 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:33:51.256673 disk-uuid[558]: The operation has completed successfully.
Dec 13 01:33:51.287395 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:33:51.287607 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:33:51.329523 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:33:51.333340 sh[597]: Success
Dec 13 01:33:51.347201 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:33:51.387416 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:33:51.401802 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:33:51.404393 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:33:51.417578 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:33:51.417621 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:33:51.417636 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:33:51.418668 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:33:51.419485 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:33:51.425365 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:33:51.426329 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:33:51.450337 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:33:51.452363 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:33:51.465282 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:33:51.465315 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:33:51.465326 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:33:51.469176 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:33:51.481469 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:33:51.483426 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:33:51.572212 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:33:51.579267 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:33:51.593856 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:33:51.605911 systemd-networkd[775]: lo: Link UP
Dec 13 01:33:51.605920 systemd-networkd[775]: lo: Gained carrier
Dec 13 01:33:51.607585 systemd-networkd[775]: Enumeration completed
Dec 13 01:33:51.608014 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:33:51.608018 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:33:51.611794 systemd-networkd[775]: eth0: Link UP
Dec 13 01:33:51.611802 systemd-networkd[775]: eth0: Gained carrier
Dec 13 01:33:51.611809 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:33:51.620094 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:33:51.622605 systemd[1]: Reached target network.target - Network.
Dec 13 01:33:51.639191 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:33:51.639506 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:33:51.698201 ignition[779]: Ignition 2.19.0
Dec 13 01:33:51.698214 ignition[779]: Stage: fetch-offline
Dec 13 01:33:51.698267 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:33:51.698279 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:33:51.698411 ignition[779]: parsed url from cmdline: ""
Dec 13 01:33:51.698416 ignition[779]: no config URL provided
Dec 13 01:33:51.698422 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:33:51.698433 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:33:51.698464 ignition[779]: op(1): [started] loading QEMU firmware config module
Dec 13 01:33:51.698470 ignition[779]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:33:51.709371 ignition[779]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:33:51.748649 ignition[779]: parsing config with SHA512: 1804b662146dcde4a9720165228239221b0c1000cbd1ead7272aa3118966d3ceeea423ff9c30f099f6843568bc1f6a871e9c0124c3607089fc8b5c67c34f9afc
Dec 13 01:33:51.753234 unknown[779]: fetched base config from "system"
Dec 13 01:33:51.753258 unknown[779]: fetched user config from "qemu"
Dec 13 01:33:51.753867 systemd-resolved[258]: Detected conflict on linux IN A 10.0.0.100
Dec 13 01:33:51.753880 systemd-resolved[258]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Dec 13 01:33:51.755689 ignition[779]: fetch-offline: fetch-offline passed
Dec 13 01:33:51.755844 ignition[779]: Ignition finished successfully
Dec 13 01:33:51.761970 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:33:51.764629 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:33:51.772577 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:33:51.792389 ignition[789]: Ignition 2.19.0
Dec 13 01:33:51.792401 ignition[789]: Stage: kargs
Dec 13 01:33:51.792575 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:33:51.792587 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:33:51.796506 ignition[789]: kargs: kargs passed
Dec 13 01:33:51.796564 ignition[789]: Ignition finished successfully
Dec 13 01:33:51.800552 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:33:51.810273 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:33:51.827163 ignition[798]: Ignition 2.19.0
Dec 13 01:33:51.827176 ignition[798]: Stage: disks
Dec 13 01:33:51.827364 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:33:51.827376 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:33:51.831257 ignition[798]: disks: disks passed
Dec 13 01:33:51.831932 ignition[798]: Ignition finished successfully
Dec 13 01:33:51.834801 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:33:51.836068 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:33:51.838124 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:33:51.839438 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:33:51.841533 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:33:51.843820 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:33:51.855614 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:33:51.868153 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:33:51.875467 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:33:51.889448 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:33:51.984154 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:33:51.984739 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:33:51.987641 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:33:51.999371 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:33:52.002495 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:33:52.007649 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816)
Dec 13 01:33:52.003949 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:33:52.004009 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:33:52.004045 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:33:52.018409 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:33:52.018434 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:33:52.018446 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:33:52.018457 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:33:52.013508 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:33:52.025355 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:33:52.027955 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:33:52.064205 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:33:52.071095 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:33:52.077675 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:33:52.084233 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:33:52.183438 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:33:52.195323 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:33:52.197451 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:33:52.207164 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:33:52.228487 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:33:52.243365 ignition[931]: INFO : Ignition 2.19.0
Dec 13 01:33:52.243365 ignition[931]: INFO : Stage: mount
Dec 13 01:33:52.245293 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:33:52.245293 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:33:52.248102 ignition[931]: INFO : mount: mount passed
Dec 13 01:33:52.248962 ignition[931]: INFO : Ignition finished successfully
Dec 13 01:33:52.252309 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:33:52.264210 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:33:52.417587 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:33:52.430408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:33:52.438195 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943)
Dec 13 01:33:52.440815 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:33:52.440845 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:33:52.440856 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:33:52.444187 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:33:52.446849 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:33:52.481878 ignition[960]: INFO : Ignition 2.19.0
Dec 13 01:33:52.481878 ignition[960]: INFO : Stage: files
Dec 13 01:33:52.484212 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:33:52.484212 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:33:52.487218 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:33:52.489562 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:33:52.489562 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:33:52.493933 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:33:52.495634 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:33:52.495634 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:33:52.494871 unknown[960]: wrote ssh authorized keys file for user: core
Dec 13 01:33:52.499598 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:33:52.501629 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:33:52.584177 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:33:52.670308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:33:52.670308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:33:52.674344 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:33:52.676180 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:33:52.678118 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:33:52.679943 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:33:52.681903 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:33:52.683685 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:33:52.685570 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:33:52.687547 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:33:52.689556 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:33:52.691418 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:33:52.694077 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:33:52.696614 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:33:52.698908 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 01:33:53.017507 systemd-networkd[775]: eth0: Gained IPv6LL
Dec 13 01:33:53.035551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:33:53.914885 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:33:53.914885 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 01:33:53.919899 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:33:53.919899 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:33:53.919899 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:33:53.919899 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 13 01:33:53.919899 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:33:53.919899 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:33:53.919899 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 01:33:53.919899 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:33:53.960438 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:33:53.967786 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:33:53.970246 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:33:53.970246 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:33:53.970246 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:33:53.970246 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:33:53.970246 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:33:53.970246 ignition[960]: INFO : files: files passed
Dec 13 01:33:53.970246 ignition[960]: INFO : Ignition finished successfully
Dec 13 01:33:53.974356 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:33:53.988553 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:33:53.990924 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:33:54.002116 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:33:54.002328 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:33:54.004867 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 01:33:54.010417 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:33:54.010417 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:33:54.015530 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:33:54.019950 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:33:54.021786 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:33:54.042475 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:33:54.076114 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:33:54.076355 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:33:54.079406 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:33:54.081648 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:33:54.082931 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:33:54.084165 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:33:54.109885 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:33:54.122615 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:33:54.134618 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:33:54.136359 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:33:54.139234 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:33:54.141859 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:33:54.142038 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:33:54.144925 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:33:54.147282 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:33:54.149945 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:33:54.152735 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:33:54.155371 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:33:54.158232 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:33:54.161079 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:33:54.164072 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:33:54.166613 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:33:54.166956 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:33:54.167174 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:33:54.167325 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:33:54.168150 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:33:54.168588 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:33:54.168779 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:33:54.168911 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:33:54.169435 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:33:54.169566 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:33:54.170126 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:33:54.170254 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:33:54.170664 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:33:54.170974 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:33:54.174179 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:33:54.174607 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:33:54.175003 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:33:54.175460 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:33:54.175574 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:33:54.175924 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:33:54.176013 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:33:54.176373 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:33:54.176490 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:33:54.176999 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:33:54.177103 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:33:54.199554 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:33:54.201806 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:33:54.202930 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:33:54.203072 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:33:54.205762 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:33:54.205917 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:33:54.213372 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:33:54.213508 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:33:54.233083 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:33:54.259404 ignition[1014]: INFO : Ignition 2.19.0
Dec 13 01:33:54.259404 ignition[1014]: INFO : Stage: umount
Dec 13 01:33:54.261871 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:33:54.261871 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:33:54.261871 ignition[1014]: INFO : umount: umount passed
Dec 13 01:33:54.261871 ignition[1014]: INFO : Ignition finished successfully
Dec 13 01:33:54.263207 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:33:54.263360 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:33:54.266180 systemd[1]: Stopped target network.target - Network.
Dec 13 01:33:54.268056 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:33:54.268166 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:33:54.270290 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:33:54.270347 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:33:54.271622 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:33:54.271683 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:33:54.274011 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:33:54.274064 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:33:54.277072 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:33:54.279293 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:33:54.283234 systemd-networkd[775]: eth0: DHCPv6 lease lost
Dec 13 01:33:54.284012 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:33:54.284206 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:33:54.286947 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:33:54.287033 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:33:54.289972 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:33:54.290154 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:33:54.293072 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:33:54.293172 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:33:54.303232 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:33:54.305364 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:33:54.305423 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:33:54.305760 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:33:54.305810 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:33:54.306412 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:33:54.306460 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:33:54.306932 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:33:54.317569 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:33:54.317773 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:33:54.324060 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:33:54.324299 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:33:54.326309 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:33:54.326364 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:33:54.328560 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:33:54.328613 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:33:54.331313 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:33:54.331377 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:33:54.334167 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:33:54.334220 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:33:54.336810 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:33:54.336863 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:33:54.350276 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:33:54.352584 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:33:54.352711 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:33:54.355360 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:33:54.355412 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:33:54.355964 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:33:54.356014 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:33:54.356566 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:33:54.356614 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:33:54.359922 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:33:54.360068 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:33:54.432324 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:33:54.432486 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:33:54.435099 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:33:54.436220 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:33:54.436278 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:33:54.452304 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:33:54.465395 systemd[1]: Switching root.
Dec 13 01:33:54.506646 systemd-journald[193]: Journal stopped
Dec 13 01:33:56.017403 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:33:56.017515 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:33:56.017544 kernel: SELinux: policy capability open_perms=1
Dec 13 01:33:56.017561 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:33:56.017576 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:33:56.017592 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:33:56.017608 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:33:56.017625 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:33:56.017650 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:33:56.017678 kernel: audit: type=1403 audit(1734053634.959:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:33:56.017696 systemd[1]: Successfully loaded SELinux policy in 53.769ms.
Dec 13 01:33:56.017724 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.395ms.
Dec 13 01:33:56.017744 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:33:56.017761 systemd[1]: Detected virtualization kvm.
Dec 13 01:33:56.017778 systemd[1]: Detected architecture x86-64.
Dec 13 01:33:56.017794 systemd[1]: Detected first boot.
Dec 13 01:33:56.017811 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:33:56.017832 zram_generator::config[1058]: No configuration found.
Dec 13 01:33:56.017854 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:33:56.017884 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:33:56.017905 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:33:56.017927 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:33:56.017949 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:33:56.017970 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:33:56.017990 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:33:56.018011 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:33:56.018033 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:33:56.018052 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:33:56.018077 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:33:56.018102 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:33:56.018119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:33:56.018154 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:33:56.018208 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:33:56.018226 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:33:56.018244 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:33:56.018261 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:33:56.018278 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:33:56.018305 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:33:56.018322 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:33:56.018339 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:33:56.018357 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:33:56.018374 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:33:56.018390 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:33:56.018408 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:33:56.018434 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:33:56.018452 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:33:56.018477 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:33:56.018494 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:33:56.018511 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:33:56.018527 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:33:56.018544 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:33:56.018560 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:33:56.018576 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:33:56.018604 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:33:56.018625 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:33:56.018651 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:33:56.018668 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:33:56.019858 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:33:56.019871 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:33:56.019885 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:33:56.019897 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:33:56.019909 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:33:56.019927 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:33:56.019940 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:33:56.019953 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:33:56.019965 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:33:56.019977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:33:56.019989 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:33:56.020001 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:33:56.020013 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:33:56.020028 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:33:56.020041 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:33:56.020053 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:33:56.020065 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:33:56.020088 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:33:56.020101 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:33:56.020113 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:33:56.020126 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:33:56.020163 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:33:56.020184 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:33:56.020197 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:33:56.020209 systemd[1]: Stopped verity-setup.service.
Dec 13 01:33:56.020222 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:33:56.020234 kernel: fuse: init (API version 7.39)
Dec 13 01:33:56.020248 kernel: loop: module loaded
Dec 13 01:33:56.020260 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:33:56.020272 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:33:56.020284 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:33:56.020297 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:33:56.020309 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:33:56.020322 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:33:56.020334 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:33:56.020346 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:33:56.020361 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:33:56.020374 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:33:56.020395 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:33:56.020407 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:33:56.020449 systemd-journald[1121]: Collecting audit messages is disabled.
Dec 13 01:33:56.020471 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:33:56.020484 kernel: ACPI: bus type drm_connector registered
Dec 13 01:33:56.020499 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:33:56.020511 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:33:56.020523 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:33:56.020535 systemd-journald[1121]: Journal started
Dec 13 01:33:56.020560 systemd-journald[1121]: Runtime Journal (/run/log/journal/e7376425899a411284bc7e63483b1251) is 6.0M, max 48.3M, 42.2M free.
Dec 13 01:33:55.639173 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:33:55.663792 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 01:33:55.664344 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:33:56.023839 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:33:56.026049 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:33:56.027016 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:33:56.027242 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:33:56.028743 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:33:56.030264 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:33:56.031918 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:33:56.047803 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:33:56.057278 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:33:56.063317 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:33:56.064746 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:33:56.064799 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:33:56.067548 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:33:56.085515 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:33:56.088840 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:33:56.090333 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:33:56.093243 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:33:56.216626 systemd-journald[1121]: Time spent on flushing to /var/log/journal/e7376425899a411284bc7e63483b1251 is 23.975ms for 990 entries.
Dec 13 01:33:56.216626 systemd-journald[1121]: System Journal (/var/log/journal/e7376425899a411284bc7e63483b1251) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:33:56.337672 systemd-journald[1121]: Received client request to flush runtime journal.
Dec 13 01:33:56.337793 kernel: loop0: detected capacity change from 0 to 140768
Dec 13 01:33:56.337837 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:33:56.101323 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:33:56.120801 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:33:56.123466 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:33:56.126285 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:33:56.129017 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:33:56.136199 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:33:56.140406 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:33:56.145827 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:33:56.224329 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:33:56.239568 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:33:56.242741 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:33:56.244450 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:33:56.260315 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:33:56.286770 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:33:56.288278 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:33:56.289917 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Dec 13 01:33:56.289932 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Dec 13 01:33:56.299569 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:33:56.302827 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:33:56.304766 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:33:56.317050 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:33:56.326319 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:33:56.413360 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:33:56.436194 kernel: loop1: detected capacity change from 0 to 142488
Dec 13 01:33:56.443263 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:33:56.444412 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:33:56.470653 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:33:56.477636 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:33:56.479329 kernel: loop2: detected capacity change from 0 to 205544
Dec 13 01:33:56.575493 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Dec 13 01:33:56.575521 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Dec 13 01:33:56.581168 kernel: loop3: detected capacity change from 0 to 140768
Dec 13 01:33:56.586032 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:33:56.595161 kernel: loop4: detected capacity change from 0 to 142488
Dec 13 01:33:56.610336 kernel: loop5: detected capacity change from 0 to 205544
Dec 13 01:33:56.617057 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 13 01:33:56.617850 (sd-merge)[1200]: Merged extensions into '/usr'.
Dec 13 01:33:56.625116 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:33:56.625160 systemd[1]: Reloading...
Dec 13 01:33:56.730193 zram_generator::config[1229]: No configuration found.
Dec 13 01:33:56.925000 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:33:57.005736 systemd[1]: Reloading finished in 380 ms.
Dec 13 01:33:57.028879 ldconfig[1159]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:33:57.048936 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:33:57.077554 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:33:57.080219 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:33:57.086618 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:33:57.098794 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:33:57.098816 systemd[1]: Reloading...
Dec 13 01:33:57.132769 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:33:57.133447 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:33:57.135391 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:33:57.135872 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Dec 13 01:33:57.135985 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Dec 13 01:33:57.142968 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:33:57.142996 systemd-tmpfiles[1264]: Skipping /boot
Dec 13 01:33:57.179718 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:33:57.179743 systemd-tmpfiles[1264]: Skipping /boot
Dec 13 01:33:57.249174 zram_generator::config[1292]: No configuration found.
Dec 13 01:33:57.450325 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:33:57.535337 systemd[1]: Reloading finished in 435 ms.
Dec 13 01:33:57.563014 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:33:57.576033 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:33:57.588615 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:33:57.592039 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:33:57.595279 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:33:57.602356 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:33:57.607063 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:33:57.610441 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:33:57.614377 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:33:57.614559 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:33:57.624982 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:33:57.631121 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:33:57.634700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:33:57.636053 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:33:57.640389 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:33:57.643848 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:33:57.645489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:33:57.645742 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:33:57.647835 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:33:57.649859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:33:57.650335 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:33:57.657095 augenrules[1356]: No rules
Dec 13 01:33:57.659209 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:33:57.661735 systemd-udevd[1339]: Using default interface naming scheme 'v255'.
Dec 13 01:33:57.664062 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:33:57.664653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:33:57.673646 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:33:57.678513 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:33:57.678806 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:33:57.686526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:33:57.689323 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:33:57.695444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:33:57.696861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:33:57.701232 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:33:57.702498 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:33:57.703394 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:33:57.713666 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:33:57.713879 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:33:57.715750 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:33:57.716125 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:33:57.718322 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:33:57.718525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:33:57.721148 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:33:57.722786 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:33:57.736674 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:33:57.744379 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:33:57.751370 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:33:57.751545 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:33:57.763374 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:33:57.768401 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:33:57.773325 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:33:57.778535 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:33:57.779832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:33:57.785325 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:33:57.815395 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:33:57.816991 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:33:57.817028 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:33:57.818098 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:33:57.818312 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:33:57.820260 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:33:57.820448 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:33:57.834986 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1373)
Dec 13 01:33:57.835095 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1373)
Dec 13 01:33:57.836396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:33:57.836616 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:33:57.838710 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:33:57.838929 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:33:57.845790 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:33:57.847943 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:33:57.848026 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:33:57.849177 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1379)
Dec 13 01:33:57.929481 systemd-resolved[1336]: Positive Trust Anchors:
Dec 13 01:33:57.929520 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:33:57.929565 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:33:57.930174 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 01:33:57.935104 systemd-resolved[1336]: Defaulting to hostname 'linux'.
Dec 13 01:33:57.937854 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:33:57.940320 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:33:57.943326 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:33:57.950802 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Dec 13 01:33:57.953013 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 01:33:57.956280 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 01:33:57.964286 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 01:33:57.969684 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 01:33:58.018195 systemd-networkd[1406]: lo: Link UP
Dec 13 01:33:58.018209 systemd-networkd[1406]: lo: Gained carrier
Dec 13 01:33:58.021259 systemd-networkd[1406]: Enumeration completed
Dec 13 01:33:58.021410 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:33:58.021826 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:33:58.021833 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:33:58.023302 systemd[1]: Reached target network.target - Network.
Dec 13 01:33:58.024151 systemd-networkd[1406]: eth0: Link UP
Dec 13 01:33:58.024157 systemd-networkd[1406]: eth0: Gained carrier
Dec 13 01:33:58.024174 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:33:58.077811 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:33:58.093153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:33:58.104166 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:33:58.106296 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:33:58.106577 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:33:58.110206 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection.
Dec 13 01:33:58.112685 kernel: kvm_amd: TSC scaling supported
Dec 13 01:33:58.112736 kernel: kvm_amd: Nested Virtualization enabled
Dec 13 01:33:58.112754 kernel: kvm_amd: Nested Paging enabled
Dec 13 01:33:58.764332 systemd-timesyncd[1411]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 01:33:58.764390 systemd-timesyncd[1411]: Initial clock synchronization to Fri 2024-12-13 01:33:58.764193 UTC.
Dec 13 01:33:58.764596 systemd-resolved[1336]: Clock change detected. Flushing caches.
Dec 13 01:33:58.765774 kernel: kvm_amd: LBR virtualization supported
Dec 13 01:33:58.765810 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 13 01:33:58.765831 kernel: kvm_amd: Virtual GIF supported
Dec 13 01:33:58.770362 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:33:58.785778 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:33:58.789562 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:33:58.789709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:33:58.804891 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:33:58.819434 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:33:58.828749 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:33:58.839464 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:33:58.859294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:33:58.890857 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:33:58.893844 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:33:58.895488 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:33:58.896999 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:33:58.898721 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:33:58.900599 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:33:58.902015 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:33:58.903533 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:33:58.905026 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:33:58.905066 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:33:58.906195 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:33:58.908536 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:33:58.911798 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:33:58.919987 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:33:58.922899 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:33:58.924980 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:33:58.926362 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:33:58.927360 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:33:58.928433 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:33:58.928468 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:33:58.929776 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:33:58.932140 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:33:58.934604 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:33:58.956730 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:33:58.963117 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:33:58.964192 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:33:58.966666 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:33:58.967651 jq[1444]: false
Dec 13 01:33:58.970275 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:33:58.973750 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:33:58.979815 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:33:58.985408 extend-filesystems[1445]: Found loop3 Dec 13 01:33:58.986766 extend-filesystems[1445]: Found loop4 Dec 13 01:33:58.987650 extend-filesystems[1445]: Found loop5 Dec 13 01:33:58.987650 extend-filesystems[1445]: Found sr0 Dec 13 01:33:58.987650 extend-filesystems[1445]: Found vda Dec 13 01:33:58.987650 extend-filesystems[1445]: Found vda1 Dec 13 01:33:58.987650 extend-filesystems[1445]: Found vda2 Dec 13 01:33:58.987650 extend-filesystems[1445]: Found vda3 Dec 13 01:33:58.992841 extend-filesystems[1445]: Found usr Dec 13 01:33:58.992841 extend-filesystems[1445]: Found vda4 Dec 13 01:33:58.992841 extend-filesystems[1445]: Found vda6 Dec 13 01:33:58.992841 extend-filesystems[1445]: Found vda7 Dec 13 01:33:58.992841 extend-filesystems[1445]: Found vda9 Dec 13 01:33:58.992841 extend-filesystems[1445]: Checking size of /dev/vda9 Dec 13 01:33:58.992340 dbus-daemon[1443]: [system] SELinux support is enabled Dec 13 01:33:58.995856 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:33:59.000299 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:33:59.000966 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:33:59.002491 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:33:59.006312 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:33:59.041125 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:33:59.048696 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:33:59.055107 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:33:59.056106 jq[1461]: true Dec 13 01:33:59.055415 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:33:59.055897 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:33:59.056178 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:33:59.058973 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:33:59.059250 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:33:59.065904 extend-filesystems[1445]: Resized partition /dev/vda9 Dec 13 01:33:59.074413 extend-filesystems[1472]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:33:59.083945 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1382) Dec 13 01:33:59.086168 update_engine[1459]: I20241213 01:33:59.086071 1459 main.cc:92] Flatcar Update Engine starting Dec 13 01:33:59.088220 update_engine[1459]: I20241213 01:33:59.087604 1459 update_check_scheduler.cc:74] Next update check in 5m51s Dec 13 01:33:59.091801 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:33:59.093793 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:33:59.094289 jq[1468]: true Dec 13 01:33:59.099291 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
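Several units above are skipped because of unmet condition checks (for example ConditionPathExists=/dev/tpm0 on tcsd.service). Conditions are declared in a unit's [Unit] section; when one fails, systemd skips the start job without marking the unit failed, which is why these entries read as "skipped" rather than as errors. A minimal illustration, with a hypothetical unit name and marker path:

    # Hypothetical unit demonstrating the condition-check mechanism seen
    # above; the unit name and marker path are illustrative, not from the log.
    cat <<'EOF' > /etc/systemd/system/guarded-example.service
    [Unit]
    Description=Starts only if the marker file exists
    ConditionPathExists=/etc/guarded-example/marker

    [Service]
    ExecStart=/usr/bin/true
    EOF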
Dec 13 01:33:59.099331 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:33:59.101013 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:33:59.101045 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:33:59.109844 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:33:59.127990 tar[1467]: linux-amd64/helm Dec 13 01:33:59.136568 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:33:59.144051 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:33:59.144088 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:33:59.145773 systemd-logind[1455]: New seat seat0. Dec 13 01:33:59.149882 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:33:59.317433 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:33:59.369619 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:33:59.402627 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:33:59.410798 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:33:59.439274 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:33:59.439547 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:33:59.446704 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:33:59.446914 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:33:59.488426 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:33:59.499234 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:33:59.503041 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:33:59.504743 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:33:59.523537 extend-filesystems[1472]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:33:59.523537 extend-filesystems[1472]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:33:59.523537 extend-filesystems[1472]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:33:59.529099 extend-filesystems[1445]: Resized filesystem in /dev/vda9 Dec 13 01:33:59.524874 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:33:59.525271 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:33:59.531205 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:33:59.532585 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:33:59.535607 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:33:59.831274 containerd[1471]: time="2024-12-13T01:33:59.831055068Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:33:59.859794 containerd[1471]: time="2024-12-13T01:33:59.859703544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:33:59.862261 containerd[1471]: time="2024-12-13T01:33:59.862185128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:33:59.862261 containerd[1471]: time="2024-12-13T01:33:59.862228469Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:33:59.862261 containerd[1471]: time="2024-12-13T01:33:59.862255259Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:33:59.862592 containerd[1471]: time="2024-12-13T01:33:59.862478809Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:33:59.862592 containerd[1471]: time="2024-12-13T01:33:59.862510799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:33:59.862667 containerd[1471]: time="2024-12-13T01:33:59.862598994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:33:59.862667 containerd[1471]: time="2024-12-13T01:33:59.862611618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:33:59.862868 containerd[1471]: time="2024-12-13T01:33:59.862828084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:33:59.862868 containerd[1471]: time="2024-12-13T01:33:59.862848813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:33:59.862868 containerd[1471]: time="2024-12-13T01:33:59.862862949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:33:59.862957 containerd[1471]: time="2024-12-13T01:33:59.862872707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:33:59.863016 containerd[1471]: time="2024-12-13T01:33:59.862991701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:33:59.863308 containerd[1471]: time="2024-12-13T01:33:59.863277246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:33:59.863487 containerd[1471]: time="2024-12-13T01:33:59.863448307Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:33:59.863487 containerd[1471]: time="2024-12-13T01:33:59.863474286Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:33:59.863686 containerd[1471]: time="2024-12-13T01:33:59.863611864Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 01:33:59.863686 containerd[1471]: time="2024-12-13T01:33:59.863682216Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:33:59.870338 containerd[1471]: time="2024-12-13T01:33:59.870271803Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:33:59.870338 containerd[1471]: time="2024-12-13T01:33:59.870342255Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:33:59.870338 containerd[1471]: time="2024-12-13T01:33:59.870361301Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:33:59.870551 containerd[1471]: time="2024-12-13T01:33:59.870376750Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:33:59.870551 containerd[1471]: time="2024-12-13T01:33:59.870394213Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:33:59.870627 containerd[1471]: time="2024-12-13T01:33:59.870607052Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:33:59.871013 containerd[1471]: time="2024-12-13T01:33:59.870957239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:33:59.871220 containerd[1471]: time="2024-12-13T01:33:59.871187320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:33:59.871220 containerd[1471]: time="2024-12-13T01:33:59.871211936Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:33:59.871260 containerd[1471]: time="2024-12-13T01:33:59.871227766Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:33:59.871260 containerd[1471]: time="2024-12-13T01:33:59.871243375Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:33:59.871260 containerd[1471]: time="2024-12-13T01:33:59.871259145Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:33:59.871338 containerd[1471]: time="2024-12-13T01:33:59.871276548Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:33:59.871338 containerd[1471]: time="2024-12-13T01:33:59.871292638Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:33:59.871338 containerd[1471]: time="2024-12-13T01:33:59.871307065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:33:59.871338 containerd[1471]: time="2024-12-13T01:33:59.871322293Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:33:59.871338 containerd[1471]: time="2024-12-13T01:33:59.871335739Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:33:59.871459 containerd[1471]: time="2024-12-13T01:33:59.871348673Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Dec 13 01:33:59.871459 containerd[1471]: time="2024-12-13T01:33:59.871374070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871459 containerd[1471]: time="2024-12-13T01:33:59.871387766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871459 containerd[1471]: time="2024-12-13T01:33:59.871402614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871459 containerd[1471]: time="2024-12-13T01:33:59.871414947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871459 containerd[1471]: time="2024-12-13T01:33:59.871430256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871459 containerd[1471]: time="2024-12-13T01:33:59.871443471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871459 containerd[1471]: time="2024-12-13T01:33:59.871455674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871647 containerd[1471]: time="2024-12-13T01:33:59.871468297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871647 containerd[1471]: time="2024-12-13T01:33:59.871481592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871647 containerd[1471]: time="2024-12-13T01:33:59.871496881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871647 containerd[1471]: time="2024-12-13T01:33:59.871527679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871647 containerd[1471]: time="2024-12-13T01:33:59.871545522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871647 containerd[1471]: time="2024-12-13T01:33:59.871560620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871647 containerd[1471]: time="2024-12-13T01:33:59.871580417Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:33:59.871647 containerd[1471]: time="2024-12-13T01:33:59.871615373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871647 containerd[1471]: time="2024-12-13T01:33:59.871632144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871830 containerd[1471]: time="2024-12-13T01:33:59.871673222Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:33:59.871830 containerd[1471]: time="2024-12-13T01:33:59.871741449Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:33:59.871830 containerd[1471]: time="2024-12-13T01:33:59.871767328Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:33:59.871830 containerd[1471]: time="2024-12-13T01:33:59.871784570Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:33:59.871830 containerd[1471]: time="2024-12-13T01:33:59.871801592Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:33:59.871830 containerd[1471]: time="2024-12-13T01:33:59.871815849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.871948 containerd[1471]: time="2024-12-13T01:33:59.871838361Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:33:59.871948 containerd[1471]: time="2024-12-13T01:33:59.871865292Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:33:59.871948 containerd[1471]: time="2024-12-13T01:33:59.871881192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:33:59.873489 containerd[1471]: time="2024-12-13T01:33:59.873317505Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:33:59.873489 containerd[1471]: time="2024-12-13T01:33:59.873439404Z" level=info msg="Connect containerd service" Dec 13 01:33:59.873954 containerd[1471]: time="2024-12-13T01:33:59.873508533Z" level=info msg="using legacy CRI server" Dec 13 01:33:59.873954 containerd[1471]: time="2024-12-13T01:33:59.873544721Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:33:59.873954 containerd[1471]: time="2024-12-13T01:33:59.873694462Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:33:59.907965 containerd[1471]: time="2024-12-13T01:33:59.907878519Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:33:59.908547 containerd[1471]: time="2024-12-13T01:33:59.908403173Z" level=info msg="Start subscribing containerd event" Dec 13 01:33:59.908660 containerd[1471]: time="2024-12-13T01:33:59.908615992Z" level=info msg="Start recovering state" Dec 13 01:33:59.909026 containerd[1471]: time="2024-12-13T01:33:59.908686224Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:33:59.909026 containerd[1471]: time="2024-12-13T01:33:59.908761926Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:33:59.909026 containerd[1471]: time="2024-12-13T01:33:59.908809495Z" level=info msg="Start event monitor" Dec 13 01:33:59.909026 containerd[1471]: time="2024-12-13T01:33:59.908839301Z" level=info msg="Start snapshots syncer" Dec 13 01:33:59.909026 containerd[1471]: time="2024-12-13T01:33:59.908862685Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:33:59.909026 containerd[1471]: time="2024-12-13T01:33:59.908872393Z" level=info msg="Start streaming server" Dec 13 01:33:59.909447 containerd[1471]: time="2024-12-13T01:33:59.909048934Z" level=info msg="containerd successfully booted in 0.080900s" Dec 13 01:33:59.909283 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:33:59.949401 tar[1467]: linux-amd64/LICENSE Dec 13 01:33:59.949578 tar[1467]: linux-amd64/README.md Dec 13 01:33:59.969013 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:34:00.132916 systemd-networkd[1406]: eth0: Gained IPv6LL Dec 13 01:34:00.137809 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:34:00.139994 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:34:00.154062 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:34:00.157489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:34:00.160260 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:34:00.183082 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:34:00.183368 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:34:00.185374 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
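The CRI plugin's "no network config found in /etc/cni/net.d" error above is expected on a node that has not yet joined a cluster: containerd can set up pod networking only once a CNI configuration appears in NetworkPluginConfDir (/etc/cni/net.d in the dumped config) with matching plugin binaries in NetworkPluginBinDir (/opt/cni/bin). A minimal bridge conflist of the kind that would clear the error, where the network name and subnet are illustrative assumptions:

    # Hedged example of a CNI conflist; assumes the standard bridge and
    # host-local plugins are present in /opt/cni/bin. Name and subnet are
    # made up for illustration.
    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16"
          }
        }
      ]
    }
    EOF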
Dec 13 01:34:00.188296 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:34:01.485613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:34:01.492673 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:34:01.493605 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:34:01.495334 systemd[1]: Startup finished in 1.471s (kernel) + 6.196s (initrd) + 5.936s (userspace) = 13.604s. Dec 13 01:34:02.454394 kubelet[1556]: E1213 01:34:02.454246 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:34:02.458695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:34:02.458934 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:34:02.459335 systemd[1]: kubelet.service: Consumed 1.941s CPU time. Dec 13 01:34:02.509331 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:34:02.510757 systemd[1]: Started sshd@0-10.0.0.100:22-10.0.0.1:53926.service - OpenSSH per-connection server daemon (10.0.0.1:53926). Dec 13 01:34:02.559925 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 53926 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:02.562255 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:02.572650 systemd-logind[1455]: New session 1 of user core. Dec 13 01:34:02.573953 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:34:02.582746 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:34:02.598642 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:34:02.600657 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:34:02.609679 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:34:02.738102 systemd[1574]: Queued start job for default target default.target. Dec 13 01:34:02.749883 systemd[1574]: Created slice app.slice - User Application Slice. Dec 13 01:34:02.749909 systemd[1574]: Reached target paths.target - Paths. Dec 13 01:34:02.749922 systemd[1574]: Reached target timers.target - Timers. Dec 13 01:34:02.751765 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:34:02.766029 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:34:02.766199 systemd[1574]: Reached target sockets.target - Sockets. Dec 13 01:34:02.766218 systemd[1574]: Reached target basic.target - Basic System. Dec 13 01:34:02.766260 systemd[1574]: Reached target default.target - Main User Target. Dec 13 01:34:02.766299 systemd[1574]: Startup finished in 149ms. Dec 13 01:34:02.766694 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:34:02.768441 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:34:02.835813 systemd[1]: Started sshd@1-10.0.0.100:22-10.0.0.1:53932.service - OpenSSH per-connection server daemon (10.0.0.1:53932). 
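The kubelet failure above is likewise a pre-bootstrap condition: /var/lib/kubelet/config.yaml is the KubeletConfiguration that kubeadm writes during init/join, so the unit will keep failing until the node is bootstrapped. For reference, the smallest shape such a file can take (values illustrative; cgroupDriver: systemd is consistent with the SystemdCgroup:true runc option in the containerd config dumped earlier):

    # Skeleton of the config file the kubelet is looking for; in practice
    # `kubeadm init`/`kubeadm join` generates it. Values are illustrative.
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF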
Dec 13 01:34:02.883618 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 53932 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:02.885777 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:02.890642 systemd-logind[1455]: New session 2 of user core. Dec 13 01:34:02.901661 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:34:03.016140 sshd[1585]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:03.023265 systemd[1]: sshd@1-10.0.0.100:22-10.0.0.1:53932.service: Deactivated successfully. Dec 13 01:34:03.025070 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:34:03.026663 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:34:03.041771 systemd[1]: Started sshd@2-10.0.0.100:22-10.0.0.1:53938.service - OpenSSH per-connection server daemon (10.0.0.1:53938). Dec 13 01:34:03.042976 systemd-logind[1455]: Removed session 2. Dec 13 01:34:03.073134 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 53938 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:03.074998 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:03.079906 systemd-logind[1455]: New session 3 of user core. Dec 13 01:34:03.094778 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:34:03.147134 sshd[1592]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:03.166483 systemd[1]: sshd@2-10.0.0.100:22-10.0.0.1:53938.service: Deactivated successfully. Dec 13 01:34:03.168386 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:34:03.170340 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:34:03.180115 systemd[1]: Started sshd@3-10.0.0.100:22-10.0.0.1:53950.service - OpenSSH per-connection server daemon (10.0.0.1:53950). Dec 13 01:34:03.181269 systemd-logind[1455]: Removed session 3. Dec 13 01:34:03.209743 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 53950 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:03.211865 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:03.216167 systemd-logind[1455]: New session 4 of user core. Dec 13 01:34:03.225656 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:34:03.283179 sshd[1599]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:03.301470 systemd[1]: sshd@3-10.0.0.100:22-10.0.0.1:53950.service: Deactivated successfully. Dec 13 01:34:03.304342 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:34:03.306763 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:34:03.315342 systemd[1]: Started sshd@4-10.0.0.100:22-10.0.0.1:53962.service - OpenSSH per-connection server daemon (10.0.0.1:53962). Dec 13 01:34:03.316605 systemd-logind[1455]: Removed session 4. Dec 13 01:34:03.344284 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 53962 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:03.345956 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:03.350147 systemd-logind[1455]: New session 5 of user core. Dec 13 01:34:03.359775 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 13 01:34:03.418466 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:34:03.418836 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:34:03.442848 sudo[1609]: pam_unix(sudo:session): session closed for user root Dec 13 01:34:03.444922 sshd[1606]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:03.452455 systemd[1]: sshd@4-10.0.0.100:22-10.0.0.1:53962.service: Deactivated successfully. Dec 13 01:34:03.454403 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:34:03.456195 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:34:03.457567 systemd[1]: Started sshd@5-10.0.0.100:22-10.0.0.1:53978.service - OpenSSH per-connection server daemon (10.0.0.1:53978). Dec 13 01:34:03.458285 systemd-logind[1455]: Removed session 5. Dec 13 01:34:03.492840 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 53978 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:03.494424 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:03.498667 systemd-logind[1455]: New session 6 of user core. Dec 13 01:34:03.515649 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:34:03.569554 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:34:03.569908 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:34:03.573806 sudo[1618]: pam_unix(sudo:session): session closed for user root Dec 13 01:34:03.580858 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:34:03.581213 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:34:03.599747 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:34:03.601503 auditctl[1621]: No rules Dec 13 01:34:03.602986 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:34:03.603258 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:34:03.605133 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:34:03.645686 augenrules[1639]: No rules Dec 13 01:34:03.648439 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:34:03.649978 sudo[1617]: pam_unix(sudo:session): session closed for user root Dec 13 01:34:03.652925 sshd[1614]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:03.667641 systemd[1]: sshd@5-10.0.0.100:22-10.0.0.1:53978.service: Deactivated successfully. Dec 13 01:34:03.669756 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:34:03.671750 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:34:03.682831 systemd[1]: Started sshd@6-10.0.0.100:22-10.0.0.1:53980.service - OpenSSH per-connection server daemon (10.0.0.1:53980). Dec 13 01:34:03.683726 systemd-logind[1455]: Removed session 6. Dec 13 01:34:03.713126 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 53980 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:34:03.715058 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:03.719727 systemd-logind[1455]: New session 7 of user core. Dec 13 01:34:03.735677 systemd[1]: Started session-7.scope - Session 7 of User core. 
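The sudo session above removes the default rule files and restarts audit-rules.service, after which both auditctl and augenrules report "No rules": augenrules simply concatenates the *.rules fragments under /etc/audit/rules.d/, and that directory is now empty. Re-populating it is a matter of dropping a fragment back in; the watched path and key below are illustrative:

    # Hypothetical rules fragment; -D flushes existing rules, -w installs a
    # watch (write/attribute changes) tagged with a search key.
    cat <<'EOF' > /etc/audit/rules.d/10-example.rules
    -D
    -w /etc/passwd -p wa -k passwd-changes
    EOF
    augenrules --load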
Dec 13 01:34:03.791270 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:34:03.791748 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:34:04.391946 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:34:04.392944 (dockerd)[1670]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:34:05.151992 dockerd[1670]: time="2024-12-13T01:34:05.151891506Z" level=info msg="Starting up" Dec 13 01:34:05.702733 dockerd[1670]: time="2024-12-13T01:34:05.702662798Z" level=info msg="Loading containers: start." Dec 13 01:34:05.834579 kernel: Initializing XFRM netlink socket Dec 13 01:34:06.062691 systemd-networkd[1406]: docker0: Link UP Dec 13 01:34:06.090327 dockerd[1670]: time="2024-12-13T01:34:06.090258328Z" level=info msg="Loading containers: done." Dec 13 01:34:06.116125 dockerd[1670]: time="2024-12-13T01:34:06.116050588Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:34:06.116306 dockerd[1670]: time="2024-12-13T01:34:06.116229033Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:34:06.116460 dockerd[1670]: time="2024-12-13T01:34:06.116428607Z" level=info msg="Daemon has completed initialization" Dec 13 01:34:06.281843 dockerd[1670]: time="2024-12-13T01:34:06.281694540Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:34:06.282005 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:34:07.032867 containerd[1471]: time="2024-12-13T01:34:07.032790172Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 01:34:08.348403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4109265901.mount: Deactivated successfully. 
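The "API listen on /run/docker.sock" announcement above means the daemon is speaking plain HTTP over that Unix socket, so it can be probed directly, for example:

    # Query the Docker Engine API over the announced Unix socket; the
    # /version endpoint returns the daemon and API versions as JSON.
    curl --unix-socket /run/docker.sock http://localhost/version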
Dec 13 01:34:10.244978 containerd[1471]: time="2024-12-13T01:34:10.244887210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:10.245640 containerd[1471]: time="2024-12-13T01:34:10.245556566Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483" Dec 13 01:34:10.246986 containerd[1471]: time="2024-12-13T01:34:10.246945901Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:10.250328 containerd[1471]: time="2024-12-13T01:34:10.250299591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:10.251261 containerd[1471]: time="2024-12-13T01:34:10.251219486Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 3.218365665s" Dec 13 01:34:10.251313 containerd[1471]: time="2024-12-13T01:34:10.251266083Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 13 01:34:10.253185 containerd[1471]: time="2024-12-13T01:34:10.253145658Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 01:34:12.525748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:34:12.538683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:34:12.741729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:34:12.746608 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:34:13.516409 kubelet[1881]: E1213 01:34:13.516332 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:34:13.523338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:34:13.523605 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
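Note that kubelet.service is restarted on a timer ("Scheduled restart job, restart counter is at 1") roughly ten seconds after each exit, and the same config-file failure repeats below until the node is bootstrapped. That cadence is what a Restart=always / RestartSec=10 policy produces; as an assumption (the actual drop-in on this host is not shown in the log), the relevant fragment would look like:

    # Assumed restart policy matching the ~10 s retry loop in this log; the
    # real kubelet.service drop-in on this host was not captured here.
    cat <<'EOF' > /etc/systemd/system/kubelet.service.d/10-restart.conf
    [Service]
    Restart=always
    RestartSec=10
    EOF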
Dec 13 01:34:14.885875 containerd[1471]: time="2024-12-13T01:34:14.885816630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:14.894887 containerd[1471]: time="2024-12-13T01:34:14.894840493Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157" Dec 13 01:34:14.911211 containerd[1471]: time="2024-12-13T01:34:14.911171639Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:14.958274 containerd[1471]: time="2024-12-13T01:34:14.958213690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:14.959574 containerd[1471]: time="2024-12-13T01:34:14.959539597Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 4.706312205s" Dec 13 01:34:14.959628 containerd[1471]: time="2024-12-13T01:34:14.959582848Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Dec 13 01:34:14.960141 containerd[1471]: time="2024-12-13T01:34:14.960080501Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 01:34:17.130286 containerd[1471]: time="2024-12-13T01:34:17.130197395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:17.140831 containerd[1471]: time="2024-12-13T01:34:17.140761216Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067" Dec 13 01:34:18.024423 containerd[1471]: time="2024-12-13T01:34:18.024331445Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:18.080422 containerd[1471]: time="2024-12-13T01:34:18.080335864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:18.081796 containerd[1471]: time="2024-12-13T01:34:18.081713487Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 3.121595105s" Dec 13 01:34:18.081796 containerd[1471]: time="2024-12-13T01:34:18.081761167Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Dec 13 01:34:18.082424 
containerd[1471]: time="2024-12-13T01:34:18.082368385Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 01:34:20.961503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2130620033.mount: Deactivated successfully. Dec 13 01:34:21.655388 containerd[1471]: time="2024-12-13T01:34:21.655303654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:21.671481 containerd[1471]: time="2024-12-13T01:34:21.671427252Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243" Dec 13 01:34:21.707043 containerd[1471]: time="2024-12-13T01:34:21.706975117Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:21.724865 containerd[1471]: time="2024-12-13T01:34:21.724811677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:21.725538 containerd[1471]: time="2024-12-13T01:34:21.725480481Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 3.643052363s" Dec 13 01:34:21.725596 containerd[1471]: time="2024-12-13T01:34:21.725542998Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 01:34:21.726188 containerd[1471]: time="2024-12-13T01:34:21.726132584Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:34:23.525734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:34:23.541852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:34:23.701686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:34:23.828684 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:34:24.483643 kubelet[1913]: E1213 01:34:24.483550 1913 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:34:24.488156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:34:24.488399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:34:25.427922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580378319.mount: Deactivated successfully. 
Dec 13 01:34:30.213009 containerd[1471]: time="2024-12-13T01:34:30.212918276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:30.245506 containerd[1471]: time="2024-12-13T01:34:30.245411372Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:34:30.259100 containerd[1471]: time="2024-12-13T01:34:30.259042957Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:30.285608 containerd[1471]: time="2024-12-13T01:34:30.285535469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:30.286726 containerd[1471]: time="2024-12-13T01:34:30.286665879Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 8.560495795s" Dec 13 01:34:30.286726 containerd[1471]: time="2024-12-13T01:34:30.286710583Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:34:30.287257 containerd[1471]: time="2024-12-13T01:34:30.287230568Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 01:34:32.386482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount821377058.mount: Deactivated successfully. 
Dec 13 01:34:32.684136 containerd[1471]: time="2024-12-13T01:34:32.683893144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:32.710095 containerd[1471]: time="2024-12-13T01:34:32.709955066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 13 01:34:32.728288 containerd[1471]: time="2024-12-13T01:34:32.728195868Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:32.753105 containerd[1471]: time="2024-12-13T01:34:32.752989924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:32.753795 containerd[1471]: time="2024-12-13T01:34:32.753730633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.466464538s" Dec 13 01:34:32.753795 containerd[1471]: time="2024-12-13T01:34:32.753781761Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 01:34:32.754350 containerd[1471]: time="2024-12-13T01:34:32.754303849Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 01:34:34.526003 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:34:34.544993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:34:34.724852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:34:34.730740 (kubelet)[1977]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:34:34.932392 kubelet[1977]: E1213 01:34:34.932139 1977 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:34:34.937399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:34:34.937745 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:34:35.836834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421613063.mount: Deactivated successfully. 
Dec 13 01:34:42.186207 containerd[1471]: time="2024-12-13T01:34:42.186118324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:42.186821 containerd[1471]: time="2024-12-13T01:34:42.186766007Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Dec 13 01:34:42.187981 containerd[1471]: time="2024-12-13T01:34:42.187940913Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:42.191082 containerd[1471]: time="2024-12-13T01:34:42.191034272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:42.192377 containerd[1471]: time="2024-12-13T01:34:42.192309750Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 9.437964141s" Dec 13 01:34:42.192453 containerd[1471]: time="2024-12-13T01:34:42.192380014Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 01:34:43.915283 update_engine[1459]: I20241213 01:34:43.915141 1459 update_attempter.cc:509] Updating boot flags... Dec 13 01:34:43.945733 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2070) Dec 13 01:34:43.988618 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2068) Dec 13 01:34:44.243240 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:34:44.251742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:34:44.281026 systemd[1]: Reloading requested from client PID 2084 ('systemctl') (unit session-7.scope)... Dec 13 01:34:44.281040 systemd[1]: Reloading... Dec 13 01:34:44.366556 zram_generator::config[2129]: No configuration found. Dec 13 01:34:44.607306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:34:44.710014 systemd[1]: Reloading finished in 428 ms. Dec 13 01:34:44.781072 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:34:44.784597 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:34:44.784946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:34:44.787442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:34:44.954840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:34:44.962051 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:34:45.026021 kubelet[2173]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:34:45.026021 kubelet[2173]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:34:45.026021 kubelet[2173]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:34:45.026478 kubelet[2173]: I1213 01:34:45.026112 2173 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:34:45.285267 kubelet[2173]: I1213 01:34:45.285176 2173 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:34:45.285267 kubelet[2173]: I1213 01:34:45.285227 2173 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:34:45.288242 kubelet[2173]: I1213 01:34:45.288176 2173 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:34:45.311889 kubelet[2173]: I1213 01:34:45.311822 2173 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:34:45.312421 kubelet[2173]: E1213 01:34:45.312377 2173 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:34:45.321245 kubelet[2173]: E1213 01:34:45.321195 2173 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:34:45.321245 kubelet[2173]: I1213 01:34:45.321237 2173 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:34:45.329362 kubelet[2173]: I1213 01:34:45.329315 2173 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:34:45.333380 kubelet[2173]: I1213 01:34:45.333306 2173 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:34:45.333681 kubelet[2173]: I1213 01:34:45.333623 2173 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:34:45.333942 kubelet[2173]: I1213 01:34:45.333671 2173 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:34:45.334060 kubelet[2173]: I1213 01:34:45.333970 2173 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:34:45.334060 kubelet[2173]: I1213 01:34:45.333986 2173 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:34:45.334225 kubelet[2173]: I1213 01:34:45.334193 2173 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:34:45.336377 kubelet[2173]: I1213 01:34:45.336339 2173 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:34:45.336377 kubelet[2173]: I1213 01:34:45.336374 2173 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:34:45.336444 kubelet[2173]: I1213 01:34:45.336432 2173 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:34:45.336487 kubelet[2173]: I1213 01:34:45.336467 2173 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:34:45.340174 kubelet[2173]: W1213 01:34:45.340095 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 01:34:45.340219 kubelet[2173]: E1213 01:34:45.340193 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:34:45.342360 kubelet[2173]: W1213 01:34:45.340282 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 01:34:45.342360 kubelet[2173]: E1213 01:34:45.340318 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:34:45.343742 kubelet[2173]: I1213 01:34:45.343667 2173 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:34:45.346872 kubelet[2173]: I1213 01:34:45.346839 2173 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:34:45.347630 kubelet[2173]: W1213 01:34:45.347598 2173 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:34:45.348533 kubelet[2173]: I1213 01:34:45.348463 2173 server.go:1269] "Started kubelet" Dec 13 01:34:45.349885 kubelet[2173]: I1213 01:34:45.349644 2173 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:34:45.350078 kubelet[2173]: I1213 01:34:45.350035 2173 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:34:45.350963 kubelet[2173]: I1213 01:34:45.350944 2173 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:34:45.354933 kubelet[2173]: I1213 01:34:45.351672 2173 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:34:45.354933 kubelet[2173]: I1213 01:34:45.351692 2173 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:34:45.354933 kubelet[2173]: I1213 01:34:45.352254 2173 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:34:45.354933 kubelet[2173]: I1213 01:34:45.352384 2173 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:34:45.354933 kubelet[2173]: I1213 01:34:45.351680 2173 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:34:45.354933 kubelet[2173]: I1213 01:34:45.352464 2173 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:34:45.354933 kubelet[2173]: I1213 01:34:45.353238 2173 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:34:45.354933 kubelet[2173]: I1213 01:34:45.353383 2173 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:34:45.354933 kubelet[2173]: W1213 01:34:45.353465 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: 
connect: connection refused Dec 13 01:34:45.354933 kubelet[2173]: E1213 01:34:45.353531 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:34:45.354933 kubelet[2173]: E1213 01:34:45.353718 2173 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:34:45.355230 kubelet[2173]: E1213 01:34:45.353772 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="200ms" Dec 13 01:34:45.355230 kubelet[2173]: E1213 01:34:45.354868 2173 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:34:45.355273 kubelet[2173]: I1213 01:34:45.355245 2173 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:34:45.365970 kubelet[2173]: E1213 01:34:45.363841 2173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181098a3b0a677fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:34:45.34843187 +0000 UTC m=+0.380517502,LastTimestamp:2024-12-13 01:34:45.34843187 +0000 UTC m=+0.380517502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:34:45.374785 kubelet[2173]: I1213 01:34:45.374760 2173 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:34:45.374785 kubelet[2173]: I1213 01:34:45.374778 2173 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:34:45.374891 kubelet[2173]: I1213 01:34:45.374800 2173 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:34:45.453907 kubelet[2173]: E1213 01:34:45.453843 2173 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:34:45.554410 kubelet[2173]: E1213 01:34:45.554314 2173 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:34:45.554677 kubelet[2173]: E1213 01:34:45.554633 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="400ms" Dec 13 01:34:45.655360 kubelet[2173]: E1213 01:34:45.655303 2173 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:34:45.755918 kubelet[2173]: E1213 01:34:45.755847 2173 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:34:45.856273 
kubelet[2173]: E1213 01:34:45.856095 2173 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:34:45.891289 kubelet[2173]: I1213 01:34:45.891240 2173 policy_none.go:49] "None policy: Start" Dec 13 01:34:45.894333 kubelet[2173]: I1213 01:34:45.893790 2173 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:34:45.894333 kubelet[2173]: I1213 01:34:45.893831 2173 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:34:45.897191 kubelet[2173]: I1213 01:34:45.897119 2173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:34:45.899366 kubelet[2173]: I1213 01:34:45.899330 2173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:34:45.899410 kubelet[2173]: I1213 01:34:45.899385 2173 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:34:45.899449 kubelet[2173]: I1213 01:34:45.899415 2173 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:34:45.899542 kubelet[2173]: E1213 01:34:45.899478 2173 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:34:45.901105 kubelet[2173]: W1213 01:34:45.900574 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 01:34:45.901105 kubelet[2173]: E1213 01:34:45.900622 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:34:45.906082 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:34:45.920247 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:34:45.923887 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
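
Every "connection refused" in this stretch (the Node, Service, CSIDriver and RuntimeClass reflectors, the lease controller, the event writer) is the same symptom: nothing is listening on 10.0.0.100:6443 yet. That is expected at this point in the boot, because this kubelet is itself about to start kube-apiserver-localhost as a static pod from /etc/kubernetes/manifests, so the informers necessarily fail until that pod is running. On a live host a quick probe separates "apiserver not up yet" from a genuine network problem; the commands below are standard checks, not taken from the log:

curl -sk https://10.0.0.100:6443/healthz; echo
# once the apiserver answers, replay one of the failing informer LISTs by hand:
kubectl get --raw "/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500"
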
Dec 13 01:34:45.933672 kubelet[2173]: I1213 01:34:45.933626 2173 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:34:45.933919 kubelet[2173]: I1213 01:34:45.933897 2173 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:34:45.933983 kubelet[2173]: I1213 01:34:45.933919 2173 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:34:45.934284 kubelet[2173]: I1213 01:34:45.934183 2173 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:34:45.935736 kubelet[2173]: E1213 01:34:45.935693 2173 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:34:45.955908 kubelet[2173]: E1213 01:34:45.955846 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="800ms" Dec 13 01:34:46.008385 systemd[1]: Created slice kubepods-burstable-podab980fc3cdda73e72d13abc3b7629c94.slice - libcontainer container kubepods-burstable-podab980fc3cdda73e72d13abc3b7629c94.slice. Dec 13 01:34:46.029947 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. Dec 13 01:34:46.035896 kubelet[2173]: I1213 01:34:46.035847 2173 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:34:46.036349 kubelet[2173]: E1213 01:34:46.036189 2173 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Dec 13 01:34:46.045493 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. 
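
Note the interval on the "Failed to ensure lease exists, will retry" lines: 200ms, then 400ms, now 800ms, and 1.6s a little further down. The lease controller doubles its wait after every refused attempt, i.e. the n-th consecutive failure retries after roughly 200ms * 2^n. A minimal shell sketch of that backoff shape, purely illustrative (the real logic lives inside the kubelet):

interval_ms=200
for attempt in 1 2 3 4; do
  echo "attempt ${attempt} failed; retrying in ${interval_ms}ms"
  interval_ms=$((interval_ms * 2))   # 200 -> 400 -> 800 -> 1600, matching the log
done
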
Dec 13 01:34:46.056439 kubelet[2173]: I1213 01:34:46.056411 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab980fc3cdda73e72d13abc3b7629c94-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab980fc3cdda73e72d13abc3b7629c94\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:34:46.056553 kubelet[2173]: I1213 01:34:46.056440 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab980fc3cdda73e72d13abc3b7629c94-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab980fc3cdda73e72d13abc3b7629c94\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:34:46.056553 kubelet[2173]: I1213 01:34:46.056460 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:34:46.056553 kubelet[2173]: I1213 01:34:46.056474 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:34:46.056553 kubelet[2173]: I1213 01:34:46.056489 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab980fc3cdda73e72d13abc3b7629c94-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ab980fc3cdda73e72d13abc3b7629c94\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:34:46.056553 kubelet[2173]: I1213 01:34:46.056507 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:34:46.056666 kubelet[2173]: I1213 01:34:46.056545 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:34:46.056666 kubelet[2173]: I1213 01:34:46.056598 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:34:46.056666 kubelet[2173]: I1213 01:34:46.056638 2173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 01:34:46.217616 kubelet[2173]: W1213 01:34:46.217429 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 01:34:46.217616 kubelet[2173]: E1213 01:34:46.217508 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:34:46.238306 kubelet[2173]: I1213 01:34:46.238261 2173 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:34:46.238686 kubelet[2173]: E1213 01:34:46.238647 2173 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Dec 13 01:34:46.327444 kubelet[2173]: E1213 01:34:46.327405 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:46.328098 containerd[1471]: time="2024-12-13T01:34:46.328036554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ab980fc3cdda73e72d13abc3b7629c94,Namespace:kube-system,Attempt:0,}" Dec 13 01:34:46.343617 kubelet[2173]: E1213 01:34:46.343568 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:46.344032 containerd[1471]: time="2024-12-13T01:34:46.343997666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Dec 13 01:34:46.348460 kubelet[2173]: E1213 01:34:46.348437 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:46.348873 containerd[1471]: time="2024-12-13T01:34:46.348840694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Dec 13 01:34:46.554858 kubelet[2173]: W1213 01:34:46.554795 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 01:34:46.555025 kubelet[2173]: E1213 01:34:46.554863 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:34:46.600766 kubelet[2173]: W1213 01:34:46.600678 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
10.0.0.100:6443: connect: connection refused Dec 13 01:34:46.600931 kubelet[2173]: E1213 01:34:46.600768 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:34:46.640428 kubelet[2173]: I1213 01:34:46.640378 2173 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:34:46.640901 kubelet[2173]: E1213 01:34:46.640849 2173 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Dec 13 01:34:46.757018 kubelet[2173]: E1213 01:34:46.756946 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="1.6s" Dec 13 01:34:46.918801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660260794.mount: Deactivated successfully. Dec 13 01:34:46.927077 containerd[1471]: time="2024-12-13T01:34:46.927026980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:34:46.927884 containerd[1471]: time="2024-12-13T01:34:46.927836567Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:34:46.931543 containerd[1471]: time="2024-12-13T01:34:46.928826193Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:34:46.932094 containerd[1471]: time="2024-12-13T01:34:46.932056152Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:34:46.933101 containerd[1471]: time="2024-12-13T01:34:46.933053985Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:34:46.934039 containerd[1471]: time="2024-12-13T01:34:46.933966536Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:34:46.934815 containerd[1471]: time="2024-12-13T01:34:46.934763638Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:34:46.937524 containerd[1471]: time="2024-12-13T01:34:46.937481996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:34:46.939839 containerd[1471]: time="2024-12-13T01:34:46.939812217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.762011ms" Dec 13 01:34:46.940596 containerd[1471]: time="2024-12-13T01:34:46.940544326Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 591.602599ms" Dec 13 01:34:46.941438 containerd[1471]: time="2024-12-13T01:34:46.941208635Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 613.086028ms" Dec 13 01:34:47.124413 kubelet[2173]: W1213 01:34:47.122249 2173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 01:34:47.124413 kubelet[2173]: E1213 01:34:47.122327 2173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:34:47.224693 containerd[1471]: time="2024-12-13T01:34:47.205418760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:47.224693 containerd[1471]: time="2024-12-13T01:34:47.205628839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:47.224693 containerd[1471]: time="2024-12-13T01:34:47.205732796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:47.224693 containerd[1471]: time="2024-12-13T01:34:47.206071158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:47.275048 containerd[1471]: time="2024-12-13T01:34:47.274787674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:47.275048 containerd[1471]: time="2024-12-13T01:34:47.274870210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:47.275048 containerd[1471]: time="2024-12-13T01:34:47.274880841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:47.275048 containerd[1471]: time="2024-12-13T01:34:47.274987974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:47.283901 containerd[1471]: time="2024-12-13T01:34:47.283743704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:47.284041 containerd[1471]: time="2024-12-13T01:34:47.283902144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:47.284041 containerd[1471]: time="2024-12-13T01:34:47.283961486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:47.284129 containerd[1471]: time="2024-12-13T01:34:47.284080803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:47.301130 systemd[1]: Started cri-containerd-c1a4face0f59815c7b2582da70f674367face3b79fb73e3e77ea0fe1ce7506d2.scope - libcontainer container c1a4face0f59815c7b2582da70f674367face3b79fb73e3e77ea0fe1ce7506d2. Dec 13 01:34:47.343348 systemd[1]: Started cri-containerd-b4f7747b530aadcdbca09bbc9cf14420f8c8512498b1e2d96c5035c71b72916b.scope - libcontainer container b4f7747b530aadcdbca09bbc9cf14420f8c8512498b1e2d96c5035c71b72916b. Dec 13 01:34:47.345529 kubelet[2173]: E1213 01:34:47.345422 2173 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:34:47.349427 systemd[1]: Started cri-containerd-b5f833c0c6b7b2791a6e69e43454995000302ac6e557de230be480cb94a59470.scope - libcontainer container b5f833c0c6b7b2791a6e69e43454995000302ac6e557de230be480cb94a59470. 
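
The three near-identical "loading plugin" blocks above are the containerd runc v2 shim starting once per pod sandbox, and systemd wraps each shim in a transient cri-containerd-<sandbox-id>.scope unit, which is why three Started lines follow. Both sides can be inspected directly on a running host; these are the usual commands, and the containerd socket path is an assumption since the log never prints it:

systemctl list-units 'cri-containerd-*.scope' --no-pager
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
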
Dec 13 01:34:47.418884 containerd[1471]: time="2024-12-13T01:34:47.418791584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ab980fc3cdda73e72d13abc3b7629c94,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4f7747b530aadcdbca09bbc9cf14420f8c8512498b1e2d96c5035c71b72916b\"" Dec 13 01:34:47.419414 containerd[1471]: time="2024-12-13T01:34:47.418792015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5f833c0c6b7b2791a6e69e43454995000302ac6e557de230be480cb94a59470\"" Dec 13 01:34:47.420400 kubelet[2173]: E1213 01:34:47.420349 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:47.420565 kubelet[2173]: E1213 01:34:47.420448 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:47.423245 containerd[1471]: time="2024-12-13T01:34:47.423208247Z" level=info msg="CreateContainer within sandbox \"b4f7747b530aadcdbca09bbc9cf14420f8c8512498b1e2d96c5035c71b72916b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:34:47.423401 containerd[1471]: time="2024-12-13T01:34:47.423376527Z" level=info msg="CreateContainer within sandbox \"b5f833c0c6b7b2791a6e69e43454995000302ac6e557de230be480cb94a59470\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:34:47.424345 containerd[1471]: time="2024-12-13T01:34:47.424265241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1a4face0f59815c7b2582da70f674367face3b79fb73e3e77ea0fe1ce7506d2\"" Dec 13 01:34:47.425041 kubelet[2173]: E1213 01:34:47.425014 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:47.426741 containerd[1471]: time="2024-12-13T01:34:47.426706389Z" level=info msg="CreateContainer within sandbox \"c1a4face0f59815c7b2582da70f674367face3b79fb73e3e77ea0fe1ce7506d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:34:47.443271 kubelet[2173]: I1213 01:34:47.443225 2173 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:34:47.443734 kubelet[2173]: E1213 01:34:47.443688 2173 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Dec 13 01:34:47.450073 containerd[1471]: time="2024-12-13T01:34:47.450015831Z" level=info msg="CreateContainer within sandbox \"b4f7747b530aadcdbca09bbc9cf14420f8c8512498b1e2d96c5035c71b72916b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"717dab524f8c3c1104a9b8eb7137f4d0a822029218e2faea49c18acb07656493\"" Dec 13 01:34:47.450699 containerd[1471]: time="2024-12-13T01:34:47.450645344Z" level=info msg="StartContainer for \"717dab524f8c3c1104a9b8eb7137f4d0a822029218e2faea49c18acb07656493\"" Dec 13 01:34:47.452139 containerd[1471]: time="2024-12-13T01:34:47.452089151Z" level=info msg="CreateContainer within sandbox 
\"b5f833c0c6b7b2791a6e69e43454995000302ac6e557de230be480cb94a59470\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dff7b879b97450613b1e494a5ca6e81eac9a2f75ca1897ccf487fd65d419264d\"" Dec 13 01:34:47.452478 containerd[1471]: time="2024-12-13T01:34:47.452432151Z" level=info msg="StartContainer for \"dff7b879b97450613b1e494a5ca6e81eac9a2f75ca1897ccf487fd65d419264d\"" Dec 13 01:34:47.457169 containerd[1471]: time="2024-12-13T01:34:47.457093038Z" level=info msg="CreateContainer within sandbox \"c1a4face0f59815c7b2582da70f674367face3b79fb73e3e77ea0fe1ce7506d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5a0162cfad62346dc96fb3ee83fbe4dff187e1666e590656eb09b71f62b772a6\"" Dec 13 01:34:47.457461 containerd[1471]: time="2024-12-13T01:34:47.457435597Z" level=info msg="StartContainer for \"5a0162cfad62346dc96fb3ee83fbe4dff187e1666e590656eb09b71f62b772a6\"" Dec 13 01:34:47.490845 systemd[1]: Started cri-containerd-717dab524f8c3c1104a9b8eb7137f4d0a822029218e2faea49c18acb07656493.scope - libcontainer container 717dab524f8c3c1104a9b8eb7137f4d0a822029218e2faea49c18acb07656493. Dec 13 01:34:47.496195 systemd[1]: Started cri-containerd-5a0162cfad62346dc96fb3ee83fbe4dff187e1666e590656eb09b71f62b772a6.scope - libcontainer container 5a0162cfad62346dc96fb3ee83fbe4dff187e1666e590656eb09b71f62b772a6. Dec 13 01:34:47.498144 systemd[1]: Started cri-containerd-dff7b879b97450613b1e494a5ca6e81eac9a2f75ca1897ccf487fd65d419264d.scope - libcontainer container dff7b879b97450613b1e494a5ca6e81eac9a2f75ca1897ccf487fd65d419264d. Dec 13 01:34:47.541884 containerd[1471]: time="2024-12-13T01:34:47.541394170Z" level=info msg="StartContainer for \"717dab524f8c3c1104a9b8eb7137f4d0a822029218e2faea49c18acb07656493\" returns successfully" Dec 13 01:34:47.563935 containerd[1471]: time="2024-12-13T01:34:47.563876314Z" level=info msg="StartContainer for \"dff7b879b97450613b1e494a5ca6e81eac9a2f75ca1897ccf487fd65d419264d\" returns successfully" Dec 13 01:34:47.569965 containerd[1471]: time="2024-12-13T01:34:47.569909270Z" level=info msg="StartContainer for \"5a0162cfad62346dc96fb3ee83fbe4dff187e1666e590656eb09b71f62b772a6\" returns successfully" Dec 13 01:34:47.926059 kubelet[2173]: E1213 01:34:47.926006 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:47.938593 kubelet[2173]: E1213 01:34:47.930962 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:47.949600 kubelet[2173]: E1213 01:34:47.943426 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:48.774863 kubelet[2173]: E1213 01:34:48.774795 2173 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:34:48.946271 kubelet[2173]: E1213 01:34:48.946230 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:49.046263 kubelet[2173]: I1213 01:34:49.046195 2173 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:34:49.085603 kubelet[2173]: I1213 
01:34:49.085530 2173 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Dec 13 01:34:49.344407 kubelet[2173]: I1213 01:34:49.344232 2173 apiserver.go:52] "Watching apiserver" Dec 13 01:34:49.352829 kubelet[2173]: I1213 01:34:49.352763 2173 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:34:50.001218 kubelet[2173]: E1213 01:34:50.001172 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:50.947591 kubelet[2173]: E1213 01:34:50.947554 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:51.409445 systemd[1]: Reloading requested from client PID 2454 ('systemctl') (unit session-7.scope)... Dec 13 01:34:51.409470 systemd[1]: Reloading... Dec 13 01:34:51.484623 zram_generator::config[2496]: No configuration found. Dec 13 01:34:51.889545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:34:51.982097 systemd[1]: Reloading finished in 572 ms. Dec 13 01:34:52.028646 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:34:52.057244 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:34:52.057572 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:34:52.057633 systemd[1]: kubelet.service: Consumed 1.072s CPU time, 119.6M memory peak, 0B memory swap peak. Dec 13 01:34:52.069821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:34:52.227627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:34:52.238166 (kubelet)[2538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:34:52.287686 kubelet[2538]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:34:52.287686 kubelet[2538]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:34:52.287686 kubelet[2538]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
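
The restarted kubelet (PID 2538) emits the same three deprecation warnings as the first instance: the flags still work, but the settings are meant to live in the file passed via --config. Below is a minimal KubeletConfiguration sketch covering only values this log actually shows (systemd cgroup driver, /etc/kubernetes/manifests as the static pod path); the file name and the CRI endpoint path are assumptions, since neither appears in the log:

cat >/etc/kubernetes/kubelet.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
# assumed socket location; use whatever --container-runtime-endpoint was set to
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
EOF

The docker.socket complaint that systemd repeats on every reload has an equally mechanical fix: a drop-in that resets ListenStream to the /run path, for example:

mkdir -p /etc/systemd/system/docker.socket.d
cat >/etc/systemd/system/docker.socket.d/10-runtime-dir.conf <<'EOF'
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
systemctl daemon-reload
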
Dec 13 01:34:52.288359 kubelet[2538]: I1213 01:34:52.287980 2538 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:34:52.296478 kubelet[2538]: I1213 01:34:52.296429 2538 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:34:52.296478 kubelet[2538]: I1213 01:34:52.296457 2538 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:34:52.296740 kubelet[2538]: I1213 01:34:52.296706 2538 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:34:52.298118 kubelet[2538]: I1213 01:34:52.298086 2538 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:34:52.300545 kubelet[2538]: I1213 01:34:52.300459 2538 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:34:52.304008 kubelet[2538]: E1213 01:34:52.303947 2538 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:34:52.304008 kubelet[2538]: I1213 01:34:52.303990 2538 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:34:52.310483 kubelet[2538]: I1213 01:34:52.310433 2538 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:34:52.310658 kubelet[2538]: I1213 01:34:52.310624 2538 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:34:52.310835 kubelet[2538]: I1213 01:34:52.310778 2538 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:34:52.311036 kubelet[2538]: I1213 01:34:52.310824 2538 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:34:52.311164 kubelet[2538]: I1213 01:34:52.311056 2538 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:34:52.311164 kubelet[2538]: I1213 01:34:52.311068 2538 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:34:52.311164 kubelet[2538]: I1213 01:34:52.311120 2538 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:34:52.311261 kubelet[2538]: I1213 01:34:52.311252 2538 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:34:52.311296 kubelet[2538]: I1213 01:34:52.311273 2538 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:34:52.311330 kubelet[2538]: I1213 01:34:52.311314 2538 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:34:52.311368 kubelet[2538]: I1213 01:34:52.311340 2538 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:34:52.312176 kubelet[2538]: I1213 01:34:52.311939 2538 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:34:52.313542 kubelet[2538]: I1213 01:34:52.312383 2538 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:34:52.313542 kubelet[2538]: I1213 01:34:52.312917 2538 server.go:1269] "Started kubelet" Dec 13 01:34:52.314482 kubelet[2538]: I1213 01:34:52.314395 2538 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:34:52.314966 kubelet[2538]: I1213 01:34:52.314937 2538 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:34:52.315027 kubelet[2538]: I1213 01:34:52.315006 2538 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:34:52.315132 kubelet[2538]: I1213 01:34:52.315111 2538 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:34:52.316201 kubelet[2538]: I1213 01:34:52.316178 2538 server.go:460] "Adding 
debug handlers to kubelet server" Dec 13 01:34:52.318087 kubelet[2538]: I1213 01:34:52.318044 2538 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:34:52.318777 kubelet[2538]: I1213 01:34:52.318753 2538 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:34:52.318888 kubelet[2538]: I1213 01:34:52.318870 2538 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:34:52.319089 kubelet[2538]: I1213 01:34:52.319070 2538 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:34:52.323156 kubelet[2538]: I1213 01:34:52.322144 2538 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:34:52.323156 kubelet[2538]: I1213 01:34:52.322246 2538 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:34:52.323156 kubelet[2538]: E1213 01:34:52.322953 2538 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:34:52.327665 kubelet[2538]: I1213 01:34:52.327405 2538 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:34:52.329934 kubelet[2538]: I1213 01:34:52.329902 2538 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:34:52.331927 kubelet[2538]: E1213 01:34:52.331883 2538 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:34:52.335234 kubelet[2538]: I1213 01:34:52.335177 2538 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:34:52.335308 kubelet[2538]: I1213 01:34:52.335243 2538 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:34:52.335308 kubelet[2538]: I1213 01:34:52.335274 2538 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:34:52.335468 kubelet[2538]: E1213 01:34:52.335438 2538 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:34:52.376377 kubelet[2538]: I1213 01:34:52.376341 2538 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:34:52.376377 kubelet[2538]: I1213 01:34:52.376362 2538 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:34:52.376377 kubelet[2538]: I1213 01:34:52.376383 2538 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:34:52.376604 kubelet[2538]: I1213 01:34:52.376567 2538 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:34:52.376604 kubelet[2538]: I1213 01:34:52.376579 2538 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:34:52.376604 kubelet[2538]: I1213 01:34:52.376599 2538 policy_none.go:49] "None policy: Start" Dec 13 01:34:52.377360 kubelet[2538]: I1213 01:34:52.377333 2538 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:34:52.377360 kubelet[2538]: I1213 01:34:52.377355 2538 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:34:52.377506 kubelet[2538]: I1213 01:34:52.377494 2538 state_mem.go:75] "Updated machine memory state" Dec 13 01:34:52.382803 kubelet[2538]: I1213 01:34:52.382775 2538 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:34:52.383003 kubelet[2538]: I1213 01:34:52.382984 2538 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:34:52.383057 kubelet[2538]: I1213 01:34:52.383002 2538 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:34:52.383602 kubelet[2538]: I1213 01:34:52.383562 2538 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:34:52.444372 kubelet[2538]: E1213 01:34:52.444036 2538 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:34:52.489163 kubelet[2538]: I1213 01:34:52.488991 2538 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:34:52.497059 kubelet[2538]: I1213 01:34:52.497022 2538 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Dec 13 01:34:52.497295 kubelet[2538]: I1213 01:34:52.497117 2538 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Dec 13 01:34:52.520506 kubelet[2538]: I1213 01:34:52.520437 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab980fc3cdda73e72d13abc3b7629c94-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab980fc3cdda73e72d13abc3b7629c94\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:34:52.520667 kubelet[2538]: I1213 01:34:52.520575 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab980fc3cdda73e72d13abc3b7629c94-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"ab980fc3cdda73e72d13abc3b7629c94\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:34:52.520667 kubelet[2538]: I1213 01:34:52.520600 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:34:52.520667 kubelet[2538]: I1213 01:34:52.520617 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:34:52.520667 kubelet[2538]: I1213 01:34:52.520634 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:34:52.520667 kubelet[2538]: I1213 01:34:52.520653 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:34:52.520788 kubelet[2538]: I1213 01:34:52.520671 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab980fc3cdda73e72d13abc3b7629c94-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab980fc3cdda73e72d13abc3b7629c94\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:34:52.520788 kubelet[2538]: I1213 01:34:52.520687 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:34:52.520788 kubelet[2538]: I1213 01:34:52.520705 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:34:52.744856 kubelet[2538]: E1213 01:34:52.744691 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:52.745742 kubelet[2538]: E1213 01:34:52.745673 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:52.745742 kubelet[2538]: E1213 01:34:52.745683 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:53.312107 kubelet[2538]: I1213 01:34:53.312037 2538 apiserver.go:52] "Watching apiserver" Dec 13 01:34:53.319702 kubelet[2538]: I1213 01:34:53.319668 2538 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:34:53.352347 kubelet[2538]: E1213 01:34:53.352303 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:53.353014 kubelet[2538]: E1213 01:34:53.352868 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:53.430900 kubelet[2538]: E1213 01:34:53.430337 2538 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 01:34:53.430900 kubelet[2538]: E1213 01:34:53.430567 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:53.449939 kubelet[2538]: I1213 01:34:53.449739 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.44970563 podStartE2EDuration="1.44970563s" podCreationTimestamp="2024-12-13 01:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:34:53.448791853 +0000 UTC m=+1.204185286" watchObservedRunningTime="2024-12-13 01:34:53.44970563 +0000 UTC m=+1.205099043" Dec 13 01:34:53.468292 kubelet[2538]: I1213 01:34:53.468220 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.468193358 podStartE2EDuration="4.468193358s" podCreationTimestamp="2024-12-13 01:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:34:53.45722735 +0000 UTC m=+1.212620783" watchObservedRunningTime="2024-12-13 01:34:53.468193358 +0000 UTC m=+1.223586771" Dec 13 01:34:53.468582 kubelet[2538]: I1213 01:34:53.468393 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.468385201 podStartE2EDuration="1.468385201s" podCreationTimestamp="2024-12-13 01:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:34:53.468169684 +0000 UTC m=+1.223563097" watchObservedRunningTime="2024-12-13 01:34:53.468385201 +0000 UTC m=+1.223778624" Dec 13 01:34:54.355401 kubelet[2538]: E1213 01:34:54.355326 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:54.355839 kubelet[2538]: E1213 01:34:54.355502 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:54.606655 kubelet[2538]: E1213 01:34:54.604069 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:55.357486 kubelet[2538]: E1213 01:34:55.357430 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:55.814574 kubelet[2538]: I1213 01:34:55.814500 2538 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:34:55.815021 containerd[1471]: time="2024-12-13T01:34:55.814972461Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:34:55.815404 kubelet[2538]: I1213 01:34:55.815237 2538 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:34:56.669040 systemd[1]: Created slice kubepods-besteffort-podaf9e4069_78a6_4a45_ab1a_20a9d76e5a83.slice - libcontainer container kubepods-besteffort-podaf9e4069_78a6_4a45_ab1a_20a9d76e5a83.slice. Dec 13 01:34:56.752428 kubelet[2538]: I1213 01:34:56.752244 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af9e4069-78a6-4a45-ab1a-20a9d76e5a83-lib-modules\") pod \"kube-proxy-q48xb\" (UID: \"af9e4069-78a6-4a45-ab1a-20a9d76e5a83\") " pod="kube-system/kube-proxy-q48xb" Dec 13 01:34:56.752428 kubelet[2538]: I1213 01:34:56.752300 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9gv7\" (UniqueName: \"kubernetes.io/projected/af9e4069-78a6-4a45-ab1a-20a9d76e5a83-kube-api-access-l9gv7\") pod \"kube-proxy-q48xb\" (UID: \"af9e4069-78a6-4a45-ab1a-20a9d76e5a83\") " pod="kube-system/kube-proxy-q48xb" Dec 13 01:34:56.752428 kubelet[2538]: I1213 01:34:56.752321 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af9e4069-78a6-4a45-ab1a-20a9d76e5a83-kube-proxy\") pod \"kube-proxy-q48xb\" (UID: \"af9e4069-78a6-4a45-ab1a-20a9d76e5a83\") " pod="kube-system/kube-proxy-q48xb" Dec 13 01:34:56.752428 kubelet[2538]: I1213 01:34:56.752341 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af9e4069-78a6-4a45-ab1a-20a9d76e5a83-xtables-lock\") pod \"kube-proxy-q48xb\" (UID: \"af9e4069-78a6-4a45-ab1a-20a9d76e5a83\") " pod="kube-system/kube-proxy-q48xb" Dec 13 01:34:56.942341 systemd[1]: Created slice kubepods-besteffort-podb3680651_98f8_4e6a_bc38_d86e44fa710f.slice - libcontainer container kubepods-besteffort-podb3680651_98f8_4e6a_bc38_d86e44fa710f.slice. 
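The recurring kubelet dns.go:153 errors above mean the node's resolv.conf lists more nameservers than kubelet will pass through to pods: the cap is three, mirroring glibc's MAXNS, so the applied line keeps "1.1.1.1 1.0.0.1 8.8.8.8" and the rest are dropped. A minimal sketch of that truncation, assuming a hypothetical four-entry resolv.conf; this is an illustration of the behavior, not kubelet's actual code:

```go
// Hedged sketch of the check behind kubelet's "Nameserver limits exceeded"
// warning. The limit of 3 matches glibc's MAXNS; the parsing below is a
// simplification, not kubelet's implementation.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // resolvers (glibc) only consult the first three entries

func applyNameserverLimit(resolvConf string) (applied, omitted []string) {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	// Hypothetical resolv.conf with four entries; the node in this log ended
	// up with "1.1.1.1 1.0.0.1 8.8.8.8" applied and the remainder omitted.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	applied, omitted := applyNameserverLimit(conf)
	if len(omitted) > 0 {
		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```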
Dec 13 01:34:56.955142 kubelet[2538]: I1213 01:34:56.954930 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx9v8\" (UniqueName: \"kubernetes.io/projected/b3680651-98f8-4e6a-bc38-d86e44fa710f-kube-api-access-gx9v8\") pod \"tigera-operator-76c4976dd7-fxbgm\" (UID: \"b3680651-98f8-4e6a-bc38-d86e44fa710f\") " pod="tigera-operator/tigera-operator-76c4976dd7-fxbgm" Dec 13 01:34:56.955142 kubelet[2538]: I1213 01:34:56.955138 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b3680651-98f8-4e6a-bc38-d86e44fa710f-var-lib-calico\") pod \"tigera-operator-76c4976dd7-fxbgm\" (UID: \"b3680651-98f8-4e6a-bc38-d86e44fa710f\") " pod="tigera-operator/tigera-operator-76c4976dd7-fxbgm" Dec 13 01:34:56.981257 kubelet[2538]: E1213 01:34:56.981182 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:56.982294 containerd[1471]: time="2024-12-13T01:34:56.982239701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q48xb,Uid:af9e4069-78a6-4a45-ab1a-20a9d76e5a83,Namespace:kube-system,Attempt:0,}" Dec 13 01:34:57.026906 containerd[1471]: time="2024-12-13T01:34:57.025368290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:57.026906 containerd[1471]: time="2024-12-13T01:34:57.026678502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:57.026906 containerd[1471]: time="2024-12-13T01:34:57.026703048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:57.027348 containerd[1471]: time="2024-12-13T01:34:57.026911320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:57.055908 systemd[1]: Started cri-containerd-97aeef4f22d88b5af8d8439c9f61602a6681472894e8ad333a4d79b57c0ac15d.scope - libcontainer container 97aeef4f22d88b5af8d8439c9f61602a6681472894e8ad333a4d79b57c0ac15d. 
Dec 13 01:34:57.096221 containerd[1471]: time="2024-12-13T01:34:57.096143530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q48xb,Uid:af9e4069-78a6-4a45-ab1a-20a9d76e5a83,Namespace:kube-system,Attempt:0,} returns sandbox id \"97aeef4f22d88b5af8d8439c9f61602a6681472894e8ad333a4d79b57c0ac15d\"" Dec 13 01:34:57.101921 kubelet[2538]: E1213 01:34:57.101874 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:57.104303 containerd[1471]: time="2024-12-13T01:34:57.104262222Z" level=info msg="CreateContainer within sandbox \"97aeef4f22d88b5af8d8439c9f61602a6681472894e8ad333a4d79b57c0ac15d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:34:57.128901 containerd[1471]: time="2024-12-13T01:34:57.128815944Z" level=info msg="CreateContainer within sandbox \"97aeef4f22d88b5af8d8439c9f61602a6681472894e8ad333a4d79b57c0ac15d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bcf7581c6e76b541ec0cb1ac9e4dd6431caebf3a83211c268ce856cf9824cf18\"" Dec 13 01:34:57.130219 containerd[1471]: time="2024-12-13T01:34:57.129654375Z" level=info msg="StartContainer for \"bcf7581c6e76b541ec0cb1ac9e4dd6431caebf3a83211c268ce856cf9824cf18\"" Dec 13 01:34:57.170749 systemd[1]: Started cri-containerd-bcf7581c6e76b541ec0cb1ac9e4dd6431caebf3a83211c268ce856cf9824cf18.scope - libcontainer container bcf7581c6e76b541ec0cb1ac9e4dd6431caebf3a83211c268ce856cf9824cf18. Dec 13 01:34:57.207281 containerd[1471]: time="2024-12-13T01:34:57.206956294Z" level=info msg="StartContainer for \"bcf7581c6e76b541ec0cb1ac9e4dd6431caebf3a83211c268ce856cf9824cf18\" returns successfully" Dec 13 01:34:57.247485 containerd[1471]: time="2024-12-13T01:34:57.247421646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-fxbgm,Uid:b3680651-98f8-4e6a-bc38-d86e44fa710f,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:34:57.285708 containerd[1471]: time="2024-12-13T01:34:57.285356735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:57.285708 containerd[1471]: time="2024-12-13T01:34:57.285419394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:57.285708 containerd[1471]: time="2024-12-13T01:34:57.285430986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:57.285708 containerd[1471]: time="2024-12-13T01:34:57.285538618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:57.304762 systemd[1]: Started cri-containerd-55e174499f6f3aa20628c51e37c0c178fce11181ae9e05c7c2721fee5ecec3f5.scope - libcontainer container 55e174499f6f3aa20628c51e37c0c178fce11181ae9e05c7c2721fee5ecec3f5. 
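The containerd entries above trace the standard CRI sequence kubelet drives for every pod: RunPodSandbox returns a sandbox ID, CreateContainer places a container inside that sandbox, and StartContainer runs it; the earlier "Updating runtime config through cri with podcidr" line is the same gRPC surface carrying the pod CIDR. A minimal sketch of those calls against a containerd socket, reusing the kube-proxy-q48xb metadata from the log (the image tag is hypothetical, and error handling is trimmed):

```go
// Hedged sketch of the CRI call sequence visible in the log, using the
// k8s.io/cri-api generated client. Not kubelet's code; a compact illustration.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// UpdateRuntimeConfig: how the pod CIDR reaches the runtime, matching the
	// "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" line.
	if _, err := rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	}); err != nil {
		log.Fatal(err)
	}

	// RunPodSandbox: matches the PodSandboxMetadata printed by containerd.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-q48xb",
				Uid:       "af9e4069-78a6-4a45-ab1a-20a9d76e5a83",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer inside the returned sandbox, then StartContainer.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"}, // hypothetical tag
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```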
Dec 13 01:34:57.353994 containerd[1471]: time="2024-12-13T01:34:57.353935372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-fxbgm,Uid:b3680651-98f8-4e6a-bc38-d86e44fa710f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"55e174499f6f3aa20628c51e37c0c178fce11181ae9e05c7c2721fee5ecec3f5\"" Dec 13 01:34:57.357284 containerd[1471]: time="2024-12-13T01:34:57.357229475Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:34:57.365251 kubelet[2538]: E1213 01:34:57.365212 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:57.378070 kubelet[2538]: I1213 01:34:57.377809 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q48xb" podStartSLOduration=1.377781078 podStartE2EDuration="1.377781078s" podCreationTimestamp="2024-12-13 01:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:34:57.375250936 +0000 UTC m=+5.130644359" watchObservedRunningTime="2024-12-13 01:34:57.377781078 +0000 UTC m=+5.133174491" Dec 13 01:34:57.811192 sudo[1650]: pam_unix(sudo:session): session closed for user root Dec 13 01:34:57.814362 sshd[1647]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:57.819491 systemd[1]: sshd@6-10.0.0.100:22-10.0.0.1:53980.service: Deactivated successfully. Dec 13 01:34:57.822197 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:34:57.822488 systemd[1]: session-7.scope: Consumed 4.759s CPU time, 159.5M memory peak, 0B memory swap peak. Dec 13 01:34:57.823122 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:34:57.824254 systemd-logind[1455]: Removed session 7. Dec 13 01:34:59.698664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819147777.mount: Deactivated successfully. 
Dec 13 01:35:00.191181 containerd[1471]: time="2024-12-13T01:35:00.191059988Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:00.192035 containerd[1471]: time="2024-12-13T01:35:00.191935798Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764309" Dec 13 01:35:00.194379 containerd[1471]: time="2024-12-13T01:35:00.194320812Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:00.198459 containerd[1471]: time="2024-12-13T01:35:00.198371815Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:00.198976 containerd[1471]: time="2024-12-13T01:35:00.198921040Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.841598118s" Dec 13 01:35:00.198976 containerd[1471]: time="2024-12-13T01:35:00.198972887Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:35:00.201647 containerd[1471]: time="2024-12-13T01:35:00.201579268Z" level=info msg="CreateContainer within sandbox \"55e174499f6f3aa20628c51e37c0c178fce11181ae9e05c7c2721fee5ecec3f5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:35:00.219544 containerd[1471]: time="2024-12-13T01:35:00.219463416Z" level=info msg="CreateContainer within sandbox \"55e174499f6f3aa20628c51e37c0c178fce11181ae9e05c7c2721fee5ecec3f5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3311468fcb3b2531ee45c998a582eb42e0fe53b0db818a8b75d13267dfd4d1a7\"" Dec 13 01:35:00.220200 containerd[1471]: time="2024-12-13T01:35:00.220161802Z" level=info msg="StartContainer for \"3311468fcb3b2531ee45c998a582eb42e0fe53b0db818a8b75d13267dfd4d1a7\"" Dec 13 01:35:00.262804 systemd[1]: Started cri-containerd-3311468fcb3b2531ee45c998a582eb42e0fe53b0db818a8b75d13267dfd4d1a7.scope - libcontainer container 3311468fcb3b2531ee45c998a582eb42e0fe53b0db818a8b75d13267dfd4d1a7. 
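Two logged figures above pin down the effective pull rate for the operator image: 21,764,309 bytes read over the 2.841598118s reported by the "Pulled image ... in" message, roughly 7.3 MiB/s. A back-of-envelope sketch using only those two values from the log:

```go
// Hedged back-of-envelope: computes the transfer rate implied by the
// "bytes read" and pull-duration figures logged above.
package main

import "fmt"

func main() {
	bytes := 21764309.0    // "bytes read" from the stop-pulling message
	seconds := 2.841598118 // duration from the "Pulled image ... in" message
	fmt.Printf("~%.1f MiB/s\n", bytes/seconds/(1024*1024)) // ~7.3 MiB/s
}
```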
Dec 13 01:35:00.294862 containerd[1471]: time="2024-12-13T01:35:00.294791480Z" level=info msg="StartContainer for \"3311468fcb3b2531ee45c998a582eb42e0fe53b0db818a8b75d13267dfd4d1a7\" returns successfully" Dec 13 01:35:00.383030 kubelet[2538]: I1213 01:35:00.382946 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-fxbgm" podStartSLOduration=1.539073166 podStartE2EDuration="4.382924782s" podCreationTimestamp="2024-12-13 01:34:56 +0000 UTC" firstStartedPulling="2024-12-13 01:34:57.356302026 +0000 UTC m=+5.111695429" lastFinishedPulling="2024-12-13 01:35:00.200153631 +0000 UTC m=+7.955547045" observedRunningTime="2024-12-13 01:35:00.382289826 +0000 UTC m=+8.137683259" watchObservedRunningTime="2024-12-13 01:35:00.382924782 +0000 UTC m=+8.138318215" Dec 13 01:35:03.188828 kubelet[2538]: E1213 01:35:03.188786 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:03.677537 systemd[1]: Created slice kubepods-besteffort-poddba9551d_b4e1_4ad9_a48f_8ce6efa67ba9.slice - libcontainer container kubepods-besteffort-poddba9551d_b4e1_4ad9_a48f_8ce6efa67ba9.slice. Dec 13 01:35:03.698330 kubelet[2538]: I1213 01:35:03.698279 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dba9551d-b4e1-4ad9-a48f-8ce6efa67ba9-tigera-ca-bundle\") pod \"calico-typha-5976bc5f65-5l9f4\" (UID: \"dba9551d-b4e1-4ad9-a48f-8ce6efa67ba9\") " pod="calico-system/calico-typha-5976bc5f65-5l9f4" Dec 13 01:35:03.698330 kubelet[2538]: I1213 01:35:03.698317 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dba9551d-b4e1-4ad9-a48f-8ce6efa67ba9-typha-certs\") pod \"calico-typha-5976bc5f65-5l9f4\" (UID: \"dba9551d-b4e1-4ad9-a48f-8ce6efa67ba9\") " pod="calico-system/calico-typha-5976bc5f65-5l9f4" Dec 13 01:35:03.698330 kubelet[2538]: I1213 01:35:03.698337 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rww6j\" (UniqueName: \"kubernetes.io/projected/dba9551d-b4e1-4ad9-a48f-8ce6efa67ba9-kube-api-access-rww6j\") pod \"calico-typha-5976bc5f65-5l9f4\" (UID: \"dba9551d-b4e1-4ad9-a48f-8ce6efa67ba9\") " pod="calico-system/calico-typha-5976bc5f65-5l9f4" Dec 13 01:35:03.710988 systemd[1]: Created slice kubepods-besteffort-pode916d1fa_1aee_439a_b813_6aebef5bfe30.slice - libcontainer container kubepods-besteffort-pode916d1fa_1aee_439a_b813_6aebef5bfe30.slice. 
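The pod_startup_latency_tracker lines in this section differ in an instructive way: for kube-proxy both pull timestamps are the zero time ("0001-01-01"), so podStartSLOduration equals podStartE2EDuration, while for tigera-operator the ~2.84s image pull is excluded, leaving 1.539s of the 4.383s end-to-end time. A small worked sketch of that subtraction using the monotonic ("m=+...") offsets from the tigera-operator entry; an illustration of the reported fields, not kubelet's code:

```go
// Hedged sketch of the arithmetic behind podStartSLOduration: end-to-end
// startup time minus the image-pull window. Offsets are seconds since kubelet
// start, copied from the tigera-operator log entry above; the result matches
// (to float rounding) its logged 1.539073166s.
package main

import (
	"fmt"
	"time"
)

func main() {
	created := 3.754758477             // podCreationTimestamp offset (observedRunning minus the 4.382924782s E2E duration)
	firstStartedPulling := 5.111695429 // firstStartedPulling m=+5.111695429
	lastFinishedPulling := 7.955547045 // lastFinishedPulling m=+7.955547045
	observedRunning := 8.137683259     // observedRunningTime m=+8.137683259

	e2e := time.Duration((observedRunning - created) * float64(time.Second))
	pull := time.Duration((lastFinishedPulling - firstStartedPulling) * float64(time.Second))
	slo := e2e - pull

	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
	// For kube-proxy above, no pull window exists, so SLO == E2E (1.377781078s).
}
```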
Dec 13 01:35:03.800227 kubelet[2538]: I1213 01:35:03.798965 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e916d1fa-1aee-439a-b813-6aebef5bfe30-lib-modules\") pod \"calico-node-4krg2\" (UID: \"e916d1fa-1aee-439a-b813-6aebef5bfe30\") " pod="calico-system/calico-node-4krg2" Dec 13 01:35:03.800227 kubelet[2538]: I1213 01:35:03.799007 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e916d1fa-1aee-439a-b813-6aebef5bfe30-cni-bin-dir\") pod \"calico-node-4krg2\" (UID: \"e916d1fa-1aee-439a-b813-6aebef5bfe30\") " pod="calico-system/calico-node-4krg2" Dec 13 01:35:03.800227 kubelet[2538]: I1213 01:35:03.799024 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e916d1fa-1aee-439a-b813-6aebef5bfe30-var-lib-calico\") pod \"calico-node-4krg2\" (UID: \"e916d1fa-1aee-439a-b813-6aebef5bfe30\") " pod="calico-system/calico-node-4krg2" Dec 13 01:35:03.800227 kubelet[2538]: I1213 01:35:03.799040 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e916d1fa-1aee-439a-b813-6aebef5bfe30-cni-log-dir\") pod \"calico-node-4krg2\" (UID: \"e916d1fa-1aee-439a-b813-6aebef5bfe30\") " pod="calico-system/calico-node-4krg2" Dec 13 01:35:03.800227 kubelet[2538]: I1213 01:35:03.799069 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e916d1fa-1aee-439a-b813-6aebef5bfe30-tigera-ca-bundle\") pod \"calico-node-4krg2\" (UID: \"e916d1fa-1aee-439a-b813-6aebef5bfe30\") " pod="calico-system/calico-node-4krg2" Dec 13 01:35:03.800484 kubelet[2538]: I1213 01:35:03.799086 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e916d1fa-1aee-439a-b813-6aebef5bfe30-node-certs\") pod \"calico-node-4krg2\" (UID: \"e916d1fa-1aee-439a-b813-6aebef5bfe30\") " pod="calico-system/calico-node-4krg2" Dec 13 01:35:03.800484 kubelet[2538]: I1213 01:35:03.799103 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e916d1fa-1aee-439a-b813-6aebef5bfe30-cni-net-dir\") pod \"calico-node-4krg2\" (UID: \"e916d1fa-1aee-439a-b813-6aebef5bfe30\") " pod="calico-system/calico-node-4krg2" Dec 13 01:35:03.800484 kubelet[2538]: I1213 01:35:03.799126 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e916d1fa-1aee-439a-b813-6aebef5bfe30-var-run-calico\") pod \"calico-node-4krg2\" (UID: \"e916d1fa-1aee-439a-b813-6aebef5bfe30\") " pod="calico-system/calico-node-4krg2" Dec 13 01:35:03.800484 kubelet[2538]: I1213 01:35:03.799146 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7psfh\" (UniqueName: \"kubernetes.io/projected/e916d1fa-1aee-439a-b813-6aebef5bfe30-kube-api-access-7psfh\") pod \"calico-node-4krg2\" (UID: \"e916d1fa-1aee-439a-b813-6aebef5bfe30\") " pod="calico-system/calico-node-4krg2" Dec 13 01:35:03.800484 kubelet[2538]: I1213 01:35:03.799166 2538 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e916d1fa-1aee-439a-b813-6aebef5bfe30-xtables-lock\") pod \"calico-node-4krg2\" (UID: \"e916d1fa-1aee-439a-b813-6aebef5bfe30\") " pod="calico-system/calico-node-4krg2" Dec 13 01:35:03.800625 kubelet[2538]: I1213 01:35:03.799181 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e916d1fa-1aee-439a-b813-6aebef5bfe30-policysync\") pod \"calico-node-4krg2\" (UID: \"e916d1fa-1aee-439a-b813-6aebef5bfe30\") " pod="calico-system/calico-node-4krg2" Dec 13 01:35:03.800625 kubelet[2538]: I1213 01:35:03.799197 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e916d1fa-1aee-439a-b813-6aebef5bfe30-flexvol-driver-host\") pod \"calico-node-4krg2\" (UID: \"e916d1fa-1aee-439a-b813-6aebef5bfe30\") " pod="calico-system/calico-node-4krg2" Dec 13 01:35:03.837397 kubelet[2538]: E1213 01:35:03.837166 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b257v" podUID="40d19590-db9c-41bd-9d1d-bd10d8bd864c" Dec 13 01:35:03.900382 kubelet[2538]: I1213 01:35:03.900327 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/40d19590-db9c-41bd-9d1d-bd10d8bd864c-socket-dir\") pod \"csi-node-driver-b257v\" (UID: \"40d19590-db9c-41bd-9d1d-bd10d8bd864c\") " pod="calico-system/csi-node-driver-b257v" Dec 13 01:35:03.900382 kubelet[2538]: I1213 01:35:03.900376 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40d19590-db9c-41bd-9d1d-bd10d8bd864c-registration-dir\") pod \"csi-node-driver-b257v\" (UID: \"40d19590-db9c-41bd-9d1d-bd10d8bd864c\") " pod="calico-system/csi-node-driver-b257v" Dec 13 01:35:03.900626 kubelet[2538]: I1213 01:35:03.900411 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/40d19590-db9c-41bd-9d1d-bd10d8bd864c-varrun\") pod \"csi-node-driver-b257v\" (UID: \"40d19590-db9c-41bd-9d1d-bd10d8bd864c\") " pod="calico-system/csi-node-driver-b257v" Dec 13 01:35:03.900626 kubelet[2538]: I1213 01:35:03.900439 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40d19590-db9c-41bd-9d1d-bd10d8bd864c-kubelet-dir\") pod \"csi-node-driver-b257v\" (UID: \"40d19590-db9c-41bd-9d1d-bd10d8bd864c\") " pod="calico-system/csi-node-driver-b257v" Dec 13 01:35:03.900626 kubelet[2538]: I1213 01:35:03.900468 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cf9b\" (UniqueName: \"kubernetes.io/projected/40d19590-db9c-41bd-9d1d-bd10d8bd864c-kube-api-access-2cf9b\") pod \"csi-node-driver-b257v\" (UID: \"40d19590-db9c-41bd-9d1d-bd10d8bd864c\") " pod="calico-system/csi-node-driver-b257v" Dec 13 01:35:03.902880 kubelet[2538]: E1213 01:35:03.902824 2538 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:03.903057 kubelet[2538]: W1213 01:35:03.902962 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:03.903187 kubelet[2538]: E1213 01:35:03.903167 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:03.911019 kubelet[2538]: E1213 01:35:03.910966 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:03.911019 kubelet[2538]: W1213 01:35:03.911005 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:03.911170 kubelet[2538]: E1213 01:35:03.911039 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:03.913430 kubelet[2538]: E1213 01:35:03.913127 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:03.913430 kubelet[2538]: W1213 01:35:03.913159 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:03.913430 kubelet[2538]: E1213 01:35:03.913194 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:03.980435 kubelet[2538]: E1213 01:35:03.980295 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:03.981423 containerd[1471]: time="2024-12-13T01:35:03.980728128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5976bc5f65-5l9f4,Uid:dba9551d-b4e1-4ad9-a48f-8ce6efa67ba9,Namespace:calico-system,Attempt:0,}" Dec 13 01:35:04.002994 kubelet[2538]: E1213 01:35:04.002752 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.002994 kubelet[2538]: W1213 01:35:04.002780 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.002994 kubelet[2538]: E1213 01:35:04.002812 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:35:04.003290 kubelet[2538]: E1213 01:35:04.003275 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.003367 kubelet[2538]: W1213 01:35:04.003353 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.003453 kubelet[2538]: E1213 01:35:04.003438 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.004007 kubelet[2538]: E1213 01:35:04.003949 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.004007 kubelet[2538]: W1213 01:35:04.003987 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.004279 kubelet[2538]: E1213 01:35:04.004255 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.004607 kubelet[2538]: E1213 01:35:04.004588 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.004607 kubelet[2538]: W1213 01:35:04.004602 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.004607 kubelet[2538]: E1213 01:35:04.004618 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.005921 kubelet[2538]: E1213 01:35:04.004979 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.005921 kubelet[2538]: W1213 01:35:04.004994 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.005921 kubelet[2538]: E1213 01:35:04.005055 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.005921 kubelet[2538]: E1213 01:35:04.005354 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.005921 kubelet[2538]: W1213 01:35:04.005463 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.005921 kubelet[2538]: E1213 01:35:04.005599 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:35:04.005921 kubelet[2538]: E1213 01:35:04.005844 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.005921 kubelet[2538]: W1213 01:35:04.005863 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.006643 kubelet[2538]: E1213 01:35:04.005934 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.006643 kubelet[2538]: E1213 01:35:04.006245 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.006643 kubelet[2538]: W1213 01:35:04.006255 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.006643 kubelet[2538]: E1213 01:35:04.006279 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.006643 kubelet[2538]: E1213 01:35:04.006583 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.006643 kubelet[2538]: W1213 01:35:04.006592 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.006835 kubelet[2538]: E1213 01:35:04.006678 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.007083 kubelet[2538]: E1213 01:35:04.007059 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.007083 kubelet[2538]: W1213 01:35:04.007077 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.007297 kubelet[2538]: E1213 01:35:04.007218 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.007355 kubelet[2538]: E1213 01:35:04.007336 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.007355 kubelet[2538]: W1213 01:35:04.007344 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.007448 kubelet[2538]: E1213 01:35:04.007430 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:35:04.007674 kubelet[2538]: E1213 01:35:04.007647 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.007741 kubelet[2538]: W1213 01:35:04.007682 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.007831 kubelet[2538]: E1213 01:35:04.007742 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.007958 kubelet[2538]: E1213 01:35:04.007937 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.007958 kubelet[2538]: W1213 01:35:04.007949 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.008038 kubelet[2538]: E1213 01:35:04.008000 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.008391 kubelet[2538]: E1213 01:35:04.008370 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.008391 kubelet[2538]: W1213 01:35:04.008384 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.008686 kubelet[2538]: E1213 01:35:04.008465 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.008686 kubelet[2538]: E1213 01:35:04.008656 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.008686 kubelet[2538]: W1213 01:35:04.008664 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.008781 kubelet[2538]: E1213 01:35:04.008736 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.009765 kubelet[2538]: E1213 01:35:04.009004 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.009765 kubelet[2538]: W1213 01:35:04.009022 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.009765 kubelet[2538]: E1213 01:35:04.009232 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:35:04.009765 kubelet[2538]: E1213 01:35:04.009678 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.009765 kubelet[2538]: W1213 01:35:04.009690 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.009765 kubelet[2538]: E1213 01:35:04.009749 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.010194 kubelet[2538]: E1213 01:35:04.010174 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.010194 kubelet[2538]: W1213 01:35:04.010192 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.010620 kubelet[2538]: E1213 01:35:04.010304 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.010620 kubelet[2538]: E1213 01:35:04.010496 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.010620 kubelet[2538]: W1213 01:35:04.010507 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.010732 kubelet[2538]: E1213 01:35:04.010658 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.010830 kubelet[2538]: E1213 01:35:04.010809 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.010830 kubelet[2538]: W1213 01:35:04.010823 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.010938 kubelet[2538]: E1213 01:35:04.010916 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.011472 kubelet[2538]: E1213 01:35:04.011440 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.011472 kubelet[2538]: W1213 01:35:04.011457 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.011627 kubelet[2538]: E1213 01:35:04.011558 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:35:04.011848 kubelet[2538]: E1213 01:35:04.011827 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.011848 kubelet[2538]: W1213 01:35:04.011842 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.012073 kubelet[2538]: E1213 01:35:04.011965 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.012129 kubelet[2538]: E1213 01:35:04.012100 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.012129 kubelet[2538]: W1213 01:35:04.012108 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.012129 kubelet[2538]: E1213 01:35:04.012121 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.012560 kubelet[2538]: E1213 01:35:04.012391 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.012560 kubelet[2538]: W1213 01:35:04.012405 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.012560 kubelet[2538]: E1213 01:35:04.012426 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.014564 kubelet[2538]: E1213 01:35:04.013754 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:04.014618 containerd[1471]: time="2024-12-13T01:35:04.014347576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4krg2,Uid:e916d1fa-1aee-439a-b813-6aebef5bfe30,Namespace:calico-system,Attempt:0,}" Dec 13 01:35:04.015202 kubelet[2538]: E1213 01:35:04.015168 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.015202 kubelet[2538]: W1213 01:35:04.015191 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.015290 kubelet[2538]: E1213 01:35:04.015209 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.017122 containerd[1471]: time="2024-12-13T01:35:04.016889049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:35:04.017202 containerd[1471]: time="2024-12-13T01:35:04.017148227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:35:04.017247 containerd[1471]: time="2024-12-13T01:35:04.017221395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:04.017449 containerd[1471]: time="2024-12-13T01:35:04.017395633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:04.025349 kubelet[2538]: E1213 01:35:04.025297 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.025349 kubelet[2538]: W1213 01:35:04.025333 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.025349 kubelet[2538]: E1213 01:35:04.025360 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.048825 systemd[1]: Started cri-containerd-a5493d1853d25d02ead1d97e46a3da04df6c2f349e5c4e281150477ed5563f57.scope - libcontainer container a5493d1853d25d02ead1d97e46a3da04df6c2f349e5c4e281150477ed5563f57. Dec 13 01:35:04.076812 containerd[1471]: time="2024-12-13T01:35:04.076694355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:35:04.078023 containerd[1471]: time="2024-12-13T01:35:04.077979093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:35:04.078175 containerd[1471]: time="2024-12-13T01:35:04.078142420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:04.078415 containerd[1471]: time="2024-12-13T01:35:04.078380369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:04.102923 systemd[1]: Started cri-containerd-347f9f683b390c0a29a2a9962012c3284c8e82928749b81601ea3fccfd882507.scope - libcontainer container 347f9f683b390c0a29a2a9962012c3284c8e82928749b81601ea3fccfd882507. 
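The wall of driver-call.go/plugins.go errors through this stretch is one failure logged three ways: kubelet periodically probes the FlexVolume plugin directory, finds the nodeagent~uds driver declared but its uds executable not yet on $PATH (the calico-node flexvol-driver-host host-path mount above is what eventually supplies it), gets empty output from the "init" call, and then fails to parse that empty output as JSON. A minimal sketch of the probe under those assumptions; simplified, not kubelet's driver-call implementation:

```go
// Hedged sketch reproducing the repeating error triad: exec the FlexVolume
// driver with "init", expect a JSON status on stdout. A missing executable
// yields empty output, and json.Unmarshal on "" fails with exactly
// "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a simplified stand-in for the FlexVolume driver reply.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeDriver(path string) (*driverStatus, error) {
	out, err := exec.Command(path, "init").CombinedOutput()
	if err != nil {
		// Mirrors the W driver-call.go:149 "driver call failed" line.
		fmt.Printf("FlexVolume: driver call failed: executable: %s, args: [init], error: %v, output: %q\n",
			path, err, out)
	}
	var st driverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		// Mirrors the E driver-call.go:262 "Failed to unmarshal output" line;
		// the plugin-probe caller then logs the plugins.go:691 error and skips.
		return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v", out, uerr)
	}
	return &st, nil
}

func main() {
	if _, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println(err)
	}
}
```

Once calico-node's init container installs the flexvol driver into that host path, the probe starts returning valid JSON and these messages stop, which is why they taper off later in the boot.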
Dec 13 01:35:04.113617 containerd[1471]: time="2024-12-13T01:35:04.113562699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5976bc5f65-5l9f4,Uid:dba9551d-b4e1-4ad9-a48f-8ce6efa67ba9,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5493d1853d25d02ead1d97e46a3da04df6c2f349e5c4e281150477ed5563f57\"" Dec 13 01:35:04.114862 kubelet[2538]: E1213 01:35:04.114818 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:04.116245 containerd[1471]: time="2024-12-13T01:35:04.116202447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:35:04.136246 containerd[1471]: time="2024-12-13T01:35:04.136202231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4krg2,Uid:e916d1fa-1aee-439a-b813-6aebef5bfe30,Namespace:calico-system,Attempt:0,} returns sandbox id \"347f9f683b390c0a29a2a9962012c3284c8e82928749b81601ea3fccfd882507\"" Dec 13 01:35:04.137397 kubelet[2538]: E1213 01:35:04.137358 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:04.608144 kubelet[2538]: E1213 01:35:04.608096 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:04.688165 kubelet[2538]: E1213 01:35:04.687936 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:04.693107 kubelet[2538]: E1213 01:35:04.693070 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.693107 kubelet[2538]: W1213 01:35:04.693096 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.693418 kubelet[2538]: E1213 01:35:04.693121 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.693484 kubelet[2538]: E1213 01:35:04.693424 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.693484 kubelet[2538]: W1213 01:35:04.693449 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.693484 kubelet[2538]: E1213 01:35:04.693461 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:35:04.693794 kubelet[2538]: E1213 01:35:04.693770 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.693794 kubelet[2538]: W1213 01:35:04.693786 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.693794 kubelet[2538]: E1213 01:35:04.693797 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.694672 kubelet[2538]: E1213 01:35:04.694040 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.694672 kubelet[2538]: W1213 01:35:04.694052 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.694672 kubelet[2538]: E1213 01:35:04.694063 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.694672 kubelet[2538]: E1213 01:35:04.694305 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.694672 kubelet[2538]: W1213 01:35:04.694313 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.694672 kubelet[2538]: E1213 01:35:04.694322 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.694672 kubelet[2538]: E1213 01:35:04.694487 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.694672 kubelet[2538]: W1213 01:35:04.694495 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.694672 kubelet[2538]: E1213 01:35:04.694502 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:04.695235 kubelet[2538]: E1213 01:35:04.694781 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:04.695235 kubelet[2538]: W1213 01:35:04.694790 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:04.695235 kubelet[2538]: E1213 01:35:04.694800 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 13 01:35:04.695235 kubelet[2538]: E1213 01:35:04.695211 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:35:04.695235 kubelet[2538]: W1213 01:35:04.695220 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:35:04.695235 kubelet[2538]: E1213 01:35:04.695230 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:35:05.335627 kubelet[2538]: E1213 01:35:05.335565 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b257v" podUID="40d19590-db9c-41bd-9d1d-bd10d8bd864c"
Dec 13 01:35:05.384845 kubelet[2538]: E1213 01:35:05.384803 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:35:05.411987 kubelet[2538]: E1213 01:35:05.411933 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:35:05.411987 kubelet[2538]: W1213 01:35:05.411962 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:35:05.411987 kubelet[2538]: E1213 01:35:05.411990 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:35:05.958019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435603286.mount: Deactivated successfully.
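For context on the probe failure that repeats above: kubelet's FlexVolume prober execs every entry under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ as <vendor~driver>/<driver> with the argument init and expects a JSON status object on stdout. The nodeagent~uds/uds executable (a directory commonly associated with Istio's node agent) is absent here, so the call returns empty output and the unmarshal fails with "unexpected end of JSON input". Below is a minimal sketch of a driver that would satisfy the probe, assuming only the FlexVolume call convention visible in the log; it is not the real nodeagent~uds binary.

// Minimal FlexVolume driver sketch: "init" must print a JSON status
// object to stdout. Hypothetical illustration; the status/capabilities
// shape follows the FlexVolume convention, not the missing uds binary.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Empty output is exactly what produces "unexpected end of
		// JSON input" above; a valid driver emits a parseable object.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}

Installing the missing uds executable (or removing the empty nodeagent~uds directory) would stop this probe loop; until then kubelet retries it on every plugin probe, which is why the triplet recurs each second.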
Dec 13 01:35:06.993941 containerd[1471]: time="2024-12-13T01:35:06.993840412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:07.030657 containerd[1471]: time="2024-12-13T01:35:07.030557990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Dec 13 01:35:07.082007 containerd[1471]: time="2024-12-13T01:35:07.081941182Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:07.178740 containerd[1471]: time="2024-12-13T01:35:07.178669347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:07.179557 containerd[1471]: time="2024-12-13T01:35:07.179493097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.063218474s"
Dec 13 01:35:07.179557 containerd[1471]: time="2024-12-13T01:35:07.179555253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 01:35:07.181391 containerd[1471]: time="2024-12-13T01:35:07.181365017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 01:35:07.189286 containerd[1471]: time="2024-12-13T01:35:07.189230142Z" level=info msg="CreateContainer within sandbox \"a5493d1853d25d02ead1d97e46a3da04df6c2f349e5c4e281150477ed5563f57\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 01:35:07.252443 containerd[1471]: time="2024-12-13T01:35:07.252175431Z" level=info msg="CreateContainer within sandbox \"a5493d1853d25d02ead1d97e46a3da04df6c2f349e5c4e281150477ed5563f57\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3719d403d17d727064d329ef1cebe469535f4449eef6a80e49287115e53021f2\""
Dec 13 01:35:07.253395 containerd[1471]: time="2024-12-13T01:35:07.252885687Z" level=info msg="StartContainer for \"3719d403d17d727064d329ef1cebe469535f4449eef6a80e49287115e53021f2\""
Dec 13 01:35:07.289868 systemd[1]: Started cri-containerd-3719d403d17d727064d329ef1cebe469535f4449eef6a80e49287115e53021f2.scope - libcontainer container 3719d403d17d727064d329ef1cebe469535f4449eef6a80e49287115e53021f2.
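The PullImage record above reports a 31343217-byte image pulled in 3.063218474s. A quick back-of-the-envelope check of the effective transfer rate, using only the figures from that log line (illustration only):

// Sanity-check of the pull rate containerd reported above.
package main

import "fmt"

func main() {
	const size = 31343217.0    // image size from the PullImage message
	const seconds = 3.063218474 // duration from the same message
	mib := size / (1024 * 1024)
	fmt.Printf("%.1f MiB in %.2fs = %.1f MiB/s\n", mib, seconds, mib/seconds)
	// Prints roughly: 29.9 MiB in 3.06s = 9.8 MiB/s
}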
Dec 13 01:35:07.336449 kubelet[2538]: E1213 01:35:07.336370 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b257v" podUID="40d19590-db9c-41bd-9d1d-bd10d8bd864c"
Dec 13 01:35:07.340782 containerd[1471]: time="2024-12-13T01:35:07.340737287Z" level=info msg="StartContainer for \"3719d403d17d727064d329ef1cebe469535f4449eef6a80e49287115e53021f2\" returns successfully"
Dec 13 01:35:07.388857 kubelet[2538]: E1213 01:35:07.388791 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:35:07.431043 kubelet[2538]: E1213 01:35:07.429500 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:35:07.431043 kubelet[2538]: W1213 01:35:07.429564 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:35:07.431043 kubelet[2538]: E1213 01:35:07.429596 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
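The "Nameserver limits exceeded" records come from kubelet's resolv.conf handling: the glibc resolver supports at most three nameservers, so kubelet applies only the first three from the node's resolv.conf and warns about the rest; the applied line above keeps 1.1.1.1, 1.0.0.1, and 8.8.8.8. A sketch of that truncation, assuming the limit of three implied by the log line (the fourth server 9.9.9.9 is hypothetical):

// Mirrors the truncation behind "some nameservers have been omitted".
package main

import "fmt"

func applyNameserverLimit(ns []string) []string {
	const maxNameservers = 3 // limit implied by the kubelet message above
	if len(ns) > maxNameservers {
		return ns[:maxNameservers]
	}
	return ns
}

func main() {
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	fmt.Println(applyNameserverLimit(configured)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}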
Dec 13 01:35:08.390756 kubelet[2538]: I1213 01:35:08.390716 2538 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:35:08.391207 kubelet[2538]: E1213 01:35:08.391144 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:35:08.442114 kubelet[2538]: E1213 01:35:08.442072 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:35:08.442114 kubelet[2538]: W1213 01:35:08.442095 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:35:08.442114 kubelet[2538]: E1213 01:35:08.442121 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Dec 13 01:35:08.536929 kubelet[2538]: E1213 01:35:08.536906 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.536929 kubelet[2538]: W1213 01:35:08.536923 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.536981 kubelet[2538]: E1213 01:35:08.536938 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:08.537168 kubelet[2538]: E1213 01:35:08.537151 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.537168 kubelet[2538]: W1213 01:35:08.537164 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.537223 kubelet[2538]: E1213 01:35:08.537178 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:08.537392 kubelet[2538]: E1213 01:35:08.537372 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.537392 kubelet[2538]: W1213 01:35:08.537386 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.537455 kubelet[2538]: E1213 01:35:08.537400 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:08.537653 kubelet[2538]: E1213 01:35:08.537633 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.537653 kubelet[2538]: W1213 01:35:08.537648 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.537706 kubelet[2538]: E1213 01:35:08.537662 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:08.537897 kubelet[2538]: E1213 01:35:08.537881 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.537897 kubelet[2538]: W1213 01:35:08.537893 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.537957 kubelet[2538]: E1213 01:35:08.537909 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:35:08.538140 kubelet[2538]: E1213 01:35:08.538128 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.538140 kubelet[2538]: W1213 01:35:08.538137 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.538186 kubelet[2538]: E1213 01:35:08.538150 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:08.538408 kubelet[2538]: E1213 01:35:08.538388 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.538408 kubelet[2538]: W1213 01:35:08.538402 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.538457 kubelet[2538]: E1213 01:35:08.538416 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:08.538643 kubelet[2538]: E1213 01:35:08.538624 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.538643 kubelet[2538]: W1213 01:35:08.538636 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.538709 kubelet[2538]: E1213 01:35:08.538668 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:08.538864 kubelet[2538]: E1213 01:35:08.538850 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.538864 kubelet[2538]: W1213 01:35:08.538860 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.538925 kubelet[2538]: E1213 01:35:08.538883 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:08.539116 kubelet[2538]: E1213 01:35:08.539102 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.539116 kubelet[2538]: W1213 01:35:08.539113 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.539166 kubelet[2538]: E1213 01:35:08.539127 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:35:08.539409 kubelet[2538]: E1213 01:35:08.539392 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.539409 kubelet[2538]: W1213 01:35:08.539405 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.539578 kubelet[2538]: E1213 01:35:08.539420 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:08.539661 kubelet[2538]: E1213 01:35:08.539644 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.539661 kubelet[2538]: W1213 01:35:08.539658 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.539708 kubelet[2538]: E1213 01:35:08.539673 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:08.539918 kubelet[2538]: E1213 01:35:08.539900 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.539918 kubelet[2538]: W1213 01:35:08.539913 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.539984 kubelet[2538]: E1213 01:35:08.539926 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:35:08.540119 kubelet[2538]: E1213 01:35:08.540104 2538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:35:08.540119 kubelet[2538]: W1213 01:35:08.540116 2538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:35:08.540167 kubelet[2538]: E1213 01:35:08.540124 2538 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
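The repeating kubelet triplet above (driver-call.go:262 / driver-call.go:149 / plugins.go:691) is one failure seen three ways: kubelet's FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument, the binary does not exist yet (Calico's flexvol-driver container installs it later in this log), so the call produces no stdout, and unmarshalling empty output is exactly what Go's encoding/json reports as "unexpected end of JSON input". A minimal sketch of that last step; the driverStatus type is illustrative, not kubelet's actual struct:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus stands in for the JSON reply a FlexVolume driver is expected
// to print on stdout; the field set here is illustrative only.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// A missing driver binary yields no output at all, so the prober ends
	// up unmarshalling an empty byte slice.
	var st driverStatus
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // unexpected end of JSON input
}
```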
Dec 13 01:35:09.113550 containerd[1471]: time="2024-12-13T01:35:09.113479838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:09.114778 containerd[1471]: time="2024-12-13T01:35:09.114705552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Dec 13 01:35:09.115955 containerd[1471]: time="2024-12-13T01:35:09.115925176Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:09.118472 containerd[1471]: time="2024-12-13T01:35:09.118431759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:09.119062 containerd[1471]: time="2024-12-13T01:35:09.119017450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.937621674s"
Dec 13 01:35:09.119095 containerd[1471]: time="2024-12-13T01:35:09.119060151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Dec 13 01:35:09.120955 containerd[1471]: time="2024-12-13T01:35:09.120926660Z" level=info msg="CreateContainer within sandbox \"347f9f683b390c0a29a2a9962012c3284c8e82928749b81601ea3fccfd882507\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 01:35:09.138537 containerd[1471]: time="2024-12-13T01:35:09.138475187Z" level=info msg="CreateContainer within sandbox \"347f9f683b390c0a29a2a9962012c3284c8e82928749b81601ea3fccfd882507\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6f71d84bf590e774f02786e65f4d1c054a5c4d80678c663b0e1e4b4e257bc73d\""
Dec 13 01:35:09.139091 containerd[1471]: time="2024-12-13T01:35:09.138975418Z" level=info msg="StartContainer for \"6f71d84bf590e774f02786e65f4d1c054a5c4d80678c663b0e1e4b4e257bc73d\""
Dec 13 01:35:09.173770 systemd[1]: Started cri-containerd-6f71d84bf590e774f02786e65f4d1c054a5c4d80678c663b0e1e4b4e257bc73d.scope - libcontainer container 6f71d84bf590e774f02786e65f4d1c054a5c4d80678c663b0e1e4b4e257bc73d.
Dec 13 01:35:09.213166 containerd[1471]: time="2024-12-13T01:35:09.212956375Z" level=info msg="StartContainer for \"6f71d84bf590e774f02786e65f4d1c054a5c4d80678c663b0e1e4b4e257bc73d\" returns successfully"
Dec 13 01:35:09.228081 systemd[1]: cri-containerd-6f71d84bf590e774f02786e65f4d1c054a5c4d80678c663b0e1e4b4e257bc73d.scope: Deactivated successfully.
Dec 13 01:35:09.252271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f71d84bf590e774f02786e65f4d1c054a5c4d80678c663b0e1e4b4e257bc73d-rootfs.mount: Deactivated successfully.
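As a rough sanity check on the pull just logged: 5,362,121 bytes were read for pod2daemon-flexvol:v3.29.1 in 1.937621674s. Dividing the two gives the effective fetch rate; this is only a lower bound on registry throughput, since layers already present locally are not re-read, and the precise accounting behind containerd's "bytes read" counter is assumed rather than verified here:

```go
package main

import "fmt"

func main() {
	// Figures from the "stop pulling image" and "Pulled image" events above.
	const bytesRead = 5362121.0  // bytes fetched during this pull
	const pullSecs = 1.937621674 // reported wall-clock pull duration
	fmt.Printf("%.2f MiB/s\n", bytesRead/pullSecs/(1<<20)) // ~2.64 MiB/s
}
```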
Dec 13 01:35:09.335903 kubelet[2538]: E1213 01:35:09.335833 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b257v" podUID="40d19590-db9c-41bd-9d1d-bd10d8bd864c" Dec 13 01:35:09.513195 kubelet[2538]: E1213 01:35:09.512061 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:09.531991 kubelet[2538]: I1213 01:35:09.531897 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5976bc5f65-5l9f4" podStartSLOduration=3.467125176 podStartE2EDuration="6.531870743s" podCreationTimestamp="2024-12-13 01:35:03 +0000 UTC" firstStartedPulling="2024-12-13 01:35:04.11565715 +0000 UTC m=+11.871050564" lastFinishedPulling="2024-12-13 01:35:07.180402698 +0000 UTC m=+14.935796131" observedRunningTime="2024-12-13 01:35:07.408297907 +0000 UTC m=+15.163691330" watchObservedRunningTime="2024-12-13 01:35:09.531870743 +0000 UTC m=+17.287264156" Dec 13 01:35:10.132454 containerd[1471]: time="2024-12-13T01:35:10.132364411Z" level=info msg="shim disconnected" id=6f71d84bf590e774f02786e65f4d1c054a5c4d80678c663b0e1e4b4e257bc73d namespace=k8s.io Dec 13 01:35:10.132454 containerd[1471]: time="2024-12-13T01:35:10.132440253Z" level=warning msg="cleaning up after shim disconnected" id=6f71d84bf590e774f02786e65f4d1c054a5c4d80678c663b0e1e4b4e257bc73d namespace=k8s.io Dec 13 01:35:10.132454 containerd[1471]: time="2024-12-13T01:35:10.132456593Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:35:10.397081 kubelet[2538]: E1213 01:35:10.396934 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:10.397949 containerd[1471]: time="2024-12-13T01:35:10.397897112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:35:11.336379 kubelet[2538]: E1213 01:35:11.336295 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b257v" podUID="40d19590-db9c-41bd-9d1d-bd10d8bd864c" Dec 13 01:35:13.336785 kubelet[2538]: E1213 01:35:13.336311 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b257v" podUID="40d19590-db9c-41bd-9d1d-bd10d8bd864c" Dec 13 01:35:15.337493 kubelet[2538]: E1213 01:35:15.337403 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b257v" podUID="40d19590-db9c-41bd-9d1d-bd10d8bd864c" Dec 13 01:35:17.259017 containerd[1471]: time="2024-12-13T01:35:17.258928500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 
01:35:17.259750 containerd[1471]: time="2024-12-13T01:35:17.259660887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:35:17.261092 containerd[1471]: time="2024-12-13T01:35:17.261026662Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:17.263373 containerd[1471]: time="2024-12-13T01:35:17.263338614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:17.264578 containerd[1471]: time="2024-12-13T01:35:17.264507089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.866569501s" Dec 13 01:35:17.264640 containerd[1471]: time="2024-12-13T01:35:17.264578463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:35:17.292728 containerd[1471]: time="2024-12-13T01:35:17.292669667Z" level=info msg="CreateContainer within sandbox \"347f9f683b390c0a29a2a9962012c3284c8e82928749b81601ea3fccfd882507\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:35:17.319430 containerd[1471]: time="2024-12-13T01:35:17.319378765Z" level=info msg="CreateContainer within sandbox \"347f9f683b390c0a29a2a9962012c3284c8e82928749b81601ea3fccfd882507\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ca5b9a0d4db7f3cb3b9ad3d6f5bec2ad0bcc620bbc7821929d37809b4210affe\"" Dec 13 01:35:17.322162 containerd[1471]: time="2024-12-13T01:35:17.322123421Z" level=info msg="StartContainer for \"ca5b9a0d4db7f3cb3b9ad3d6f5bec2ad0bcc620bbc7821929d37809b4210affe\"" Dec 13 01:35:17.349677 kubelet[2538]: E1213 01:35:17.349602 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b257v" podUID="40d19590-db9c-41bd-9d1d-bd10d8bd864c" Dec 13 01:35:17.363825 systemd[1]: Started cri-containerd-ca5b9a0d4db7f3cb3b9ad3d6f5bec2ad0bcc620bbc7821929d37809b4210affe.scope - libcontainer container ca5b9a0d4db7f3cb3b9ad3d6f5bec2ad0bcc620bbc7821929d37809b4210affe. 
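The pod_startup_latency_tracker line above for calico-typha-5976bc5f65-5l9f4 reports both podStartE2EDuration=6.531870743s and podStartSLOduration=3.467125176s. The numbers are consistent with the SLO figure being end-to-end startup (watchObservedRunningTime minus podCreationTimestamp) minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the m=+ monotonic offsets). A quick check of that reading, not a claim about kubelet internals:

```go
package main

import (
	"fmt"
	"time"
)

// parse handles the timestamp format in the kubelet log line,
// e.g. "2024-12-13 01:35:03 +0000 UTC".
func parse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2024-12-13 01:35:03 +0000 UTC")                // podCreationTimestamp
	watchRunning := parse("2024-12-13 01:35:09.531870743 +0000 UTC") // watchObservedRunningTime

	// Image-pull window from the m=+ monotonic offsets on the
	// firstStartedPulling / lastFinishedPulling fields.
	pull := time.Duration((14.935796131 - 11.871050564) * float64(time.Second))

	e2e := watchRunning.Sub(created)
	fmt.Println(e2e)        // 6.531870743s, the logged podStartE2EDuration
	fmt.Println(e2e - pull) // 3.467125176s, the logged podStartSLOduration
}
```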
Dec 13 01:35:17.403010 containerd[1471]: time="2024-12-13T01:35:17.402953565Z" level=info msg="StartContainer for \"ca5b9a0d4db7f3cb3b9ad3d6f5bec2ad0bcc620bbc7821929d37809b4210affe\" returns successfully" Dec 13 01:35:17.449056 kubelet[2538]: E1213 01:35:17.449004 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:18.451166 kubelet[2538]: E1213 01:35:18.451119 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:18.765998 systemd[1]: cri-containerd-ca5b9a0d4db7f3cb3b9ad3d6f5bec2ad0bcc620bbc7821929d37809b4210affe.scope: Deactivated successfully. Dec 13 01:35:18.781637 kubelet[2538]: I1213 01:35:18.781594 2538 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 01:35:18.794113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca5b9a0d4db7f3cb3b9ad3d6f5bec2ad0bcc620bbc7821929d37809b4210affe-rootfs.mount: Deactivated successfully. Dec 13 01:35:19.083362 kubelet[2538]: I1213 01:35:19.083280 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11de9ed5-e662-461f-9f1b-7ff39b760401-tigera-ca-bundle\") pod \"calico-kube-controllers-c4cc7c965-9dhjq\" (UID: \"11de9ed5-e662-461f-9f1b-7ff39b760401\") " pod="calico-system/calico-kube-controllers-c4cc7c965-9dhjq" Dec 13 01:35:19.083362 kubelet[2538]: I1213 01:35:19.083331 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz8vr\" (UniqueName: \"kubernetes.io/projected/332e1ee2-384e-4c56-b8b4-25490ccaf929-kube-api-access-wz8vr\") pod \"coredns-6f6b679f8f-xtl2z\" (UID: \"332e1ee2-384e-4c56-b8b4-25490ccaf929\") " pod="kube-system/coredns-6f6b679f8f-xtl2z" Dec 13 01:35:19.083362 kubelet[2538]: I1213 01:35:19.083371 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjbxl\" (UniqueName: \"kubernetes.io/projected/9d170bf4-a7af-412b-aafa-970b3da3b8b8-kube-api-access-jjbxl\") pod \"coredns-6f6b679f8f-dldnc\" (UID: \"9d170bf4-a7af-412b-aafa-970b3da3b8b8\") " pod="kube-system/coredns-6f6b679f8f-dldnc" Dec 13 01:35:19.083850 kubelet[2538]: I1213 01:35:19.083557 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l7th\" (UniqueName: \"kubernetes.io/projected/b40daba8-6884-4f86-8a32-cd23147e2173-kube-api-access-5l7th\") pod \"calico-apiserver-89794f578-dc8fd\" (UID: \"b40daba8-6884-4f86-8a32-cd23147e2173\") " pod="calico-apiserver/calico-apiserver-89794f578-dc8fd" Dec 13 01:35:19.083850 kubelet[2538]: I1213 01:35:19.083602 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d170bf4-a7af-412b-aafa-970b3da3b8b8-config-volume\") pod \"coredns-6f6b679f8f-dldnc\" (UID: \"9d170bf4-a7af-412b-aafa-970b3da3b8b8\") " pod="kube-system/coredns-6f6b679f8f-dldnc" Dec 13 01:35:19.083850 kubelet[2538]: I1213 01:35:19.083640 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/24589cea-6828-4164-a7da-b10ab65d700a-calico-apiserver-certs\") pod 
\"calico-apiserver-89794f578-hrxnn\" (UID: \"24589cea-6828-4164-a7da-b10ab65d700a\") " pod="calico-apiserver/calico-apiserver-89794f578-hrxnn" Dec 13 01:35:19.083850 kubelet[2538]: I1213 01:35:19.083666 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2xn8\" (UniqueName: \"kubernetes.io/projected/11de9ed5-e662-461f-9f1b-7ff39b760401-kube-api-access-x2xn8\") pod \"calico-kube-controllers-c4cc7c965-9dhjq\" (UID: \"11de9ed5-e662-461f-9f1b-7ff39b760401\") " pod="calico-system/calico-kube-controllers-c4cc7c965-9dhjq" Dec 13 01:35:19.083850 kubelet[2538]: I1213 01:35:19.083692 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brgbp\" (UniqueName: \"kubernetes.io/projected/24589cea-6828-4164-a7da-b10ab65d700a-kube-api-access-brgbp\") pod \"calico-apiserver-89794f578-hrxnn\" (UID: \"24589cea-6828-4164-a7da-b10ab65d700a\") " pod="calico-apiserver/calico-apiserver-89794f578-hrxnn" Dec 13 01:35:19.084041 kubelet[2538]: I1213 01:35:19.083717 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b40daba8-6884-4f86-8a32-cd23147e2173-calico-apiserver-certs\") pod \"calico-apiserver-89794f578-dc8fd\" (UID: \"b40daba8-6884-4f86-8a32-cd23147e2173\") " pod="calico-apiserver/calico-apiserver-89794f578-dc8fd" Dec 13 01:35:19.084041 kubelet[2538]: I1213 01:35:19.083741 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/332e1ee2-384e-4c56-b8b4-25490ccaf929-config-volume\") pod \"coredns-6f6b679f8f-xtl2z\" (UID: \"332e1ee2-384e-4c56-b8b4-25490ccaf929\") " pod="kube-system/coredns-6f6b679f8f-xtl2z" Dec 13 01:35:19.122502 systemd[1]: Created slice kubepods-burstable-pod9d170bf4_a7af_412b_aafa_970b3da3b8b8.slice - libcontainer container kubepods-burstable-pod9d170bf4_a7af_412b_aafa_970b3da3b8b8.slice. Dec 13 01:35:19.132486 systemd[1]: Created slice kubepods-besteffort-pod24589cea_6828_4164_a7da_b10ab65d700a.slice - libcontainer container kubepods-besteffort-pod24589cea_6828_4164_a7da_b10ab65d700a.slice. Dec 13 01:35:19.142761 systemd[1]: Created slice kubepods-besteffort-pod11de9ed5_e662_461f_9f1b_7ff39b760401.slice - libcontainer container kubepods-besteffort-pod11de9ed5_e662_461f_9f1b_7ff39b760401.slice. Dec 13 01:35:19.151243 systemd[1]: Created slice kubepods-burstable-pod332e1ee2_384e_4c56_b8b4_25490ccaf929.slice - libcontainer container kubepods-burstable-pod332e1ee2_384e_4c56_b8b4_25490ccaf929.slice. Dec 13 01:35:19.160596 systemd[1]: Created slice kubepods-besteffort-podb40daba8_6884_4f86_8a32_cd23147e2173.slice - libcontainer container kubepods-besteffort-podb40daba8_6884_4f86_8a32_cd23147e2173.slice. Dec 13 01:35:19.343752 systemd[1]: Created slice kubepods-besteffort-pod40d19590_db9c_41bd_9d1d_bd10d8bd864c.slice - libcontainer container kubepods-besteffort-pod40d19590_db9c_41bd_9d1d_bd10d8bd864c.slice. 
Dec 13 01:35:20.030813 kubelet[2538]: E1213 01:35:20.030686 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:20.037780 kubelet[2538]: E1213 01:35:20.037727 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:20.061934 containerd[1471]: time="2024-12-13T01:35:20.061873144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b257v,Uid:40d19590-db9c-41bd-9d1d-bd10d8bd864c,Namespace:calico-system,Attempt:0,}" Dec 13 01:35:20.062810 containerd[1471]: time="2024-12-13T01:35:20.061975456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xtl2z,Uid:332e1ee2-384e-4c56-b8b4-25490ccaf929,Namespace:kube-system,Attempt:0,}" Dec 13 01:35:20.062810 containerd[1471]: time="2024-12-13T01:35:20.062613314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4cc7c965-9dhjq,Uid:11de9ed5-e662-461f-9f1b-7ff39b760401,Namespace:calico-system,Attempt:0,}" Dec 13 01:35:20.062810 containerd[1471]: time="2024-12-13T01:35:20.062615839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dldnc,Uid:9d170bf4-a7af-412b-aafa-970b3da3b8b8,Namespace:kube-system,Attempt:0,}" Dec 13 01:35:20.062810 containerd[1471]: time="2024-12-13T01:35:20.062621910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89794f578-hrxnn,Uid:24589cea-6828-4164-a7da-b10ab65d700a,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:35:20.064740 containerd[1471]: time="2024-12-13T01:35:20.064694302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89794f578-dc8fd,Uid:b40daba8-6884-4f86-8a32-cd23147e2173,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:35:20.237676 containerd[1471]: time="2024-12-13T01:35:20.237572762Z" level=info msg="shim disconnected" id=ca5b9a0d4db7f3cb3b9ad3d6f5bec2ad0bcc620bbc7821929d37809b4210affe namespace=k8s.io Dec 13 01:35:20.237676 containerd[1471]: time="2024-12-13T01:35:20.237633997Z" level=warning msg="cleaning up after shim disconnected" id=ca5b9a0d4db7f3cb3b9ad3d6f5bec2ad0bcc620bbc7821929d37809b4210affe namespace=k8s.io Dec 13 01:35:20.237676 containerd[1471]: time="2024-12-13T01:35:20.237643214Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:35:20.458982 kubelet[2538]: E1213 01:35:20.458004 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:20.459141 containerd[1471]: time="2024-12-13T01:35:20.458845306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:35:20.498396 containerd[1471]: time="2024-12-13T01:35:20.498304812Z" level=error msg="Failed to destroy network for sandbox \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.499610 containerd[1471]: time="2024-12-13T01:35:20.498975572Z" level=error msg="encountered an error cleaning up failed sandbox \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.499610 containerd[1471]: time="2024-12-13T01:35:20.499036036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89794f578-dc8fd,Uid:b40daba8-6884-4f86-8a32-cd23147e2173,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.499610 containerd[1471]: time="2024-12-13T01:35:20.499202850Z" level=error msg="Failed to destroy network for sandbox \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.499610 containerd[1471]: time="2024-12-13T01:35:20.499441868Z" level=error msg="Failed to destroy network for sandbox \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.499776 containerd[1471]: time="2024-12-13T01:35:20.499677571Z" level=error msg="encountered an error cleaning up failed sandbox \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.499776 containerd[1471]: time="2024-12-13T01:35:20.499732955Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xtl2z,Uid:332e1ee2-384e-4c56-b8b4-25490ccaf929,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.500416 containerd[1471]: time="2024-12-13T01:35:20.500360262Z" level=error msg="encountered an error cleaning up failed sandbox \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.500586 containerd[1471]: time="2024-12-13T01:35:20.500561881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89794f578-hrxnn,Uid:24589cea-6828-4164-a7da-b10ab65d700a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.500773 containerd[1471]: 
time="2024-12-13T01:35:20.500739584Z" level=error msg="Failed to destroy network for sandbox \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.501173 containerd[1471]: time="2024-12-13T01:35:20.501133564Z" level=error msg="encountered an error cleaning up failed sandbox \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.501243 containerd[1471]: time="2024-12-13T01:35:20.501197254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4cc7c965-9dhjq,Uid:11de9ed5-e662-461f-9f1b-7ff39b760401,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.505461 containerd[1471]: time="2024-12-13T01:35:20.505383986Z" level=error msg="Failed to destroy network for sandbox \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.506062 containerd[1471]: time="2024-12-13T01:35:20.506001596Z" level=error msg="encountered an error cleaning up failed sandbox \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.506242 containerd[1471]: time="2024-12-13T01:35:20.506078490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dldnc,Uid:9d170bf4-a7af-412b-aafa-970b3da3b8b8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.516840 containerd[1471]: time="2024-12-13T01:35:20.516743567Z" level=error msg="Failed to destroy network for sandbox \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.530117 kubelet[2538]: E1213 01:35:20.530037 2538 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 01:35:20.530356 kubelet[2538]: E1213 01:35:20.530119 2538 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.530356 kubelet[2538]: E1213 01:35:20.530172 2538 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.530356 kubelet[2538]: E1213 01:35:20.530198 2538 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4cc7c965-9dhjq" Dec 13 01:35:20.530356 kubelet[2538]: E1213 01:35:20.530220 2538 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-89794f578-hrxnn" Dec 13 01:35:20.530588 kubelet[2538]: E1213 01:35:20.530277 2538 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-89794f578-hrxnn" Dec 13 01:35:20.530588 kubelet[2538]: E1213 01:35:20.530338 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-89794f578-hrxnn_calico-apiserver(24589cea-6828-4164-a7da-b10ab65d700a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-89794f578-hrxnn_calico-apiserver(24589cea-6828-4164-a7da-b10ab65d700a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-89794f578-hrxnn" podUID="24589cea-6828-4164-a7da-b10ab65d700a" Dec 13 01:35:20.530733 kubelet[2538]: E1213 01:35:20.530145 2538 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-dldnc" Dec 13 01:35:20.530733 kubelet[2538]: E1213 01:35:20.530692 2538 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-dldnc" Dec 13 01:35:20.530825 kubelet[2538]: E1213 01:35:20.530781 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-dldnc_kube-system(9d170bf4-a7af-412b-aafa-970b3da3b8b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-dldnc_kube-system(9d170bf4-a7af-412b-aafa-970b3da3b8b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-dldnc" podUID="9d170bf4-a7af-412b-aafa-970b3da3b8b8" Dec 13 01:35:20.531455 kubelet[2538]: E1213 01:35:20.530051 2538 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.531455 kubelet[2538]: E1213 01:35:20.531048 2538 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-89794f578-dc8fd" Dec 13 01:35:20.531455 kubelet[2538]: E1213 01:35:20.531086 2538 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-89794f578-dc8fd" Dec 13 01:35:20.531609 kubelet[2538]: E1213 01:35:20.531165 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-89794f578-dc8fd_calico-apiserver(b40daba8-6884-4f86-8a32-cd23147e2173)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-89794f578-dc8fd_calico-apiserver(b40daba8-6884-4f86-8a32-cd23147e2173)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-89794f578-dc8fd" podUID="b40daba8-6884-4f86-8a32-cd23147e2173" Dec 13 01:35:20.531609 kubelet[2538]: E1213 01:35:20.530933 2538 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.531609 kubelet[2538]: E1213 01:35:20.531264 2538 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xtl2z" Dec 13 01:35:20.531768 kubelet[2538]: E1213 01:35:20.531288 2538 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xtl2z" Dec 13 01:35:20.531768 kubelet[2538]: E1213 01:35:20.531338 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xtl2z_kube-system(332e1ee2-384e-4c56-b8b4-25490ccaf929)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xtl2z_kube-system(332e1ee2-384e-4c56-b8b4-25490ccaf929)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xtl2z" podUID="332e1ee2-384e-4c56-b8b4-25490ccaf929" Dec 13 01:35:20.534339 kubelet[2538]: E1213 01:35:20.534276 2538 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4cc7c965-9dhjq" Dec 13 01:35:20.534500 kubelet[2538]: E1213 01:35:20.534380 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c4cc7c965-9dhjq_calico-system(11de9ed5-e662-461f-9f1b-7ff39b760401)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c4cc7c965-9dhjq_calico-system(11de9ed5-e662-461f-9f1b-7ff39b760401)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4cc7c965-9dhjq" podUID="11de9ed5-e662-461f-9f1b-7ff39b760401" Dec 13 01:35:20.558595 containerd[1471]: time="2024-12-13T01:35:20.558491761Z" level=error msg="encountered an error cleaning up failed sandbox \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.558747 containerd[1471]: time="2024-12-13T01:35:20.558615985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b257v,Uid:40d19590-db9c-41bd-9d1d-bd10d8bd864c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.558943 kubelet[2538]: E1213 01:35:20.558893 2538 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:20.559032 kubelet[2538]: E1213 01:35:20.558971 2538 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b257v" Dec 13 01:35:20.559032 kubelet[2538]: E1213 01:35:20.558997 2538 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b257v" Dec 13 01:35:20.559107 kubelet[2538]: E1213 01:35:20.559052 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b257v_calico-system(40d19590-db9c-41bd-9d1d-bd10d8bd864c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b257v_calico-system(40d19590-db9c-41bd-9d1d-bd10d8bd864c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b257v" podUID="40d19590-db9c-41bd-9d1d-bd10d8bd864c" Dec 13 01:35:21.040606 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040-shm.mount: Deactivated successfully. Dec 13 01:35:21.040748 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050-shm.mount: Deactivated successfully. Dec 13 01:35:21.460725 kubelet[2538]: I1213 01:35:21.460576 2538 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:21.461960 kubelet[2538]: I1213 01:35:21.461929 2538 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:21.464101 containerd[1471]: time="2024-12-13T01:35:21.464051916Z" level=info msg="StopPodSandbox for \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\"" Dec 13 01:35:21.465026 containerd[1471]: time="2024-12-13T01:35:21.464971903Z" level=info msg="StopPodSandbox for \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\"" Dec 13 01:35:21.465950 kubelet[2538]: I1213 01:35:21.465923 2538 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Dec 13 01:35:21.468849 containerd[1471]: time="2024-12-13T01:35:21.468495048Z" level=info msg="StopPodSandbox for \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\"" Dec 13 01:35:21.470364 kubelet[2538]: I1213 01:35:21.470315 2538 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Dec 13 01:35:21.470986 containerd[1471]: time="2024-12-13T01:35:21.470942163Z" level=info msg="StopPodSandbox for \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\"" Dec 13 01:35:21.472075 containerd[1471]: time="2024-12-13T01:35:21.471782962Z" level=info msg="Ensure that sandbox 08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7 in task-service has been cleanup successfully" Dec 13 01:35:21.472324 containerd[1471]: time="2024-12-13T01:35:21.471782141Z" level=info msg="Ensure that sandbox b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040 in task-service has been cleanup successfully" Dec 13 01:35:21.472572 kubelet[2538]: I1213 01:35:21.472550 2538 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Dec 13 01:35:21.473885 containerd[1471]: time="2024-12-13T01:35:21.473843712Z" level=info msg="StopPodSandbox for \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\"" Dec 13 01:35:21.474051 containerd[1471]: time="2024-12-13T01:35:21.474029881Z" level=info msg="Ensure that sandbox cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b in task-service has been cleanup successfully" Dec 13 01:35:21.474083 containerd[1471]: time="2024-12-13T01:35:21.471787160Z" level=info msg="Ensure that sandbox 92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58 in task-service has been cleanup successfully" Dec 13 01:35:21.474466 containerd[1471]: time="2024-12-13T01:35:21.474407480Z" level=info msg="Ensure that sandbox 533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050 in task-service has been cleanup successfully" Dec 13 01:35:21.475969 kubelet[2538]: I1213 01:35:21.475578 2538 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:21.476837 containerd[1471]: time="2024-12-13T01:35:21.476391896Z" level=info msg="StopPodSandbox for \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\"" Dec 13 01:35:21.477087 containerd[1471]: time="2024-12-13T01:35:21.477036446Z" level=info msg="Ensure that sandbox 9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a in task-service has been cleanup successfully" Dec 13 01:35:21.553174 containerd[1471]: time="2024-12-13T01:35:21.553101339Z" level=error msg="StopPodSandbox for \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\" failed" error="failed to destroy network for sandbox \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:21.553505 containerd[1471]: time="2024-12-13T01:35:21.553124101Z" level=error msg="StopPodSandbox for \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\" failed" error="failed to destroy network for sandbox \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:21.553505 containerd[1471]: time="2024-12-13T01:35:21.553143117Z" level=error msg="StopPodSandbox for \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\" failed" error="failed to destroy network for sandbox \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:21.553505 containerd[1471]: time="2024-12-13T01:35:21.553210554Z" level=error msg="StopPodSandbox for \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\" failed" error="failed to destroy network for sandbox \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:21.553505 containerd[1471]: time="2024-12-13T01:35:21.553343744Z" level=error msg="StopPodSandbox for \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\" failed" error="failed to destroy network for sandbox \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:21.553951 kubelet[2538]: E1213 01:35:21.553905 2538 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Dec 13 01:35:21.554011 
kubelet[2538]: E1213 01:35:21.553950 2538 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Dec 13 01:35:21.554049 kubelet[2538]: E1213 01:35:21.553986 2538 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b"} Dec 13 01:35:21.554049 kubelet[2538]: E1213 01:35:21.554031 2538 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:21.554128 kubelet[2538]: E1213 01:35:21.554052 2538 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040"} Dec 13 01:35:21.554128 kubelet[2538]: E1213 01:35:21.554067 2538 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24589cea-6828-4164-a7da-b10ab65d700a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:35:21.554128 kubelet[2538]: E1213 01:35:21.554094 2538 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40d19590-db9c-41bd-9d1d-bd10d8bd864c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:35:21.554128 kubelet[2538]: E1213 01:35:21.554100 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24589cea-6828-4164-a7da-b10ab65d700a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-89794f578-hrxnn" podUID="24589cea-6828-4164-a7da-b10ab65d700a" Dec 13 01:35:21.554352 kubelet[2538]: E1213 01:35:21.554008 2538 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58"} Dec 13 01:35:21.554352 kubelet[2538]: E1213 01:35:21.554135 
2538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40d19590-db9c-41bd-9d1d-bd10d8bd864c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b257v" podUID="40d19590-db9c-41bd-9d1d-bd10d8bd864c" Dec 13 01:35:21.554352 kubelet[2538]: E1213 01:35:21.554160 2538 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"11de9ed5-e662-461f-9f1b-7ff39b760401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:35:21.554352 kubelet[2538]: E1213 01:35:21.554181 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"11de9ed5-e662-461f-9f1b-7ff39b760401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4cc7c965-9dhjq" podUID="11de9ed5-e662-461f-9f1b-7ff39b760401" Dec 13 01:35:21.554583 kubelet[2538]: E1213 01:35:21.553906 2538 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:21.554583 kubelet[2538]: E1213 01:35:21.554180 2538 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Dec 13 01:35:21.554583 kubelet[2538]: E1213 01:35:21.554211 2538 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a"} Dec 13 01:35:21.554583 kubelet[2538]: E1213 01:35:21.554215 2538 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7"} Dec 13 01:35:21.554583 kubelet[2538]: E1213 01:35:21.554238 2538 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d170bf4-a7af-412b-aafa-970b3da3b8b8\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:35:21.554757 kubelet[2538]: E1213 01:35:21.554263 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d170bf4-a7af-412b-aafa-970b3da3b8b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-dldnc" podUID="9d170bf4-a7af-412b-aafa-970b3da3b8b8" Dec 13 01:35:21.554757 kubelet[2538]: E1213 01:35:21.554243 2538 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b40daba8-6884-4f86-8a32-cd23147e2173\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:35:21.554757 kubelet[2538]: E1213 01:35:21.554295 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b40daba8-6884-4f86-8a32-cd23147e2173\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-89794f578-dc8fd" podUID="b40daba8-6884-4f86-8a32-cd23147e2173" Dec 13 01:35:21.557952 containerd[1471]: time="2024-12-13T01:35:21.557910529Z" level=error msg="StopPodSandbox for \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\" failed" error="failed to destroy network for sandbox \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:35:21.558133 kubelet[2538]: E1213 01:35:21.558094 2538 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:21.558183 kubelet[2538]: E1213 01:35:21.558136 2538 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050"} Dec 13 01:35:21.558183 kubelet[2538]: E1213 01:35:21.558167 2538 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"332e1ee2-384e-4c56-b8b4-25490ccaf929\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:35:21.558296 kubelet[2538]: E1213 01:35:21.558199 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"332e1ee2-384e-4c56-b8b4-25490ccaf929\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xtl2z" podUID="332e1ee2-384e-4c56-b8b4-25490ccaf929" Dec 13 01:35:22.783188 systemd[1]: Started sshd@7-10.0.0.100:22-10.0.0.1:46202.service - OpenSSH per-connection server daemon (10.0.0.1:46202). Dec 13 01:35:22.831797 sshd[3741]: Accepted publickey for core from 10.0.0.1 port 46202 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:35:22.834396 sshd[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:35:22.840040 systemd-logind[1455]: New session 8 of user core. Dec 13 01:35:22.844886 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:35:22.994451 sshd[3741]: pam_unix(sshd:session): session closed for user core Dec 13 01:35:23.000203 systemd[1]: sshd@7-10.0.0.100:22-10.0.0.1:46202.service: Deactivated successfully. Dec 13 01:35:23.002757 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:35:23.003833 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:35:23.005286 systemd-logind[1455]: Removed session 8. Dec 13 01:35:26.195338 kubelet[2538]: I1213 01:35:26.195115 2538 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:35:26.198140 kubelet[2538]: E1213 01:35:26.195716 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:26.487301 kubelet[2538]: E1213 01:35:26.487134 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:28.014846 systemd[1]: Started sshd@8-10.0.0.100:22-10.0.0.1:53316.service - OpenSSH per-connection server daemon (10.0.0.1:53316). Dec 13 01:35:28.386736 sshd[3767]: Accepted publickey for core from 10.0.0.1 port 53316 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:35:28.388778 sshd[3767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:35:28.395210 systemd-logind[1455]: New session 9 of user core. Dec 13 01:35:28.400717 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:35:28.585546 sshd[3767]: pam_unix(sshd:session): session closed for user core Dec 13 01:35:28.592069 systemd[1]: sshd@8-10.0.0.100:22-10.0.0.1:53316.service: Deactivated successfully. Dec 13 01:35:28.595019 systemd[1]: session-9.scope: Deactivated successfully. 
Dec 13 01:35:28.598011 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:35:28.599671 systemd-logind[1455]: Removed session 9. Dec 13 01:35:29.408922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166860348.mount: Deactivated successfully. Dec 13 01:35:31.515363 containerd[1471]: time="2024-12-13T01:35:31.515222622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:31.519040 containerd[1471]: time="2024-12-13T01:35:31.518945517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:35:31.521099 containerd[1471]: time="2024-12-13T01:35:31.521023858Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:31.530688 containerd[1471]: time="2024-12-13T01:35:31.530587063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:31.531659 containerd[1471]: time="2024-12-13T01:35:31.531608832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 11.072714282s" Dec 13 01:35:31.531659 containerd[1471]: time="2024-12-13T01:35:31.531655389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:35:31.546587 containerd[1471]: time="2024-12-13T01:35:31.544559424Z" level=info msg="CreateContainer within sandbox \"347f9f683b390c0a29a2a9962012c3284c8e82928749b81601ea3fccfd882507\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:35:31.580194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316414543.mount: Deactivated successfully. Dec 13 01:35:31.587036 containerd[1471]: time="2024-12-13T01:35:31.586973426Z" level=info msg="CreateContainer within sandbox \"347f9f683b390c0a29a2a9962012c3284c8e82928749b81601ea3fccfd882507\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"976d60d601122dff5560ffce015a9429bd093fb25015e2deb5720caec50335c7\"" Dec 13 01:35:31.587885 containerd[1471]: time="2024-12-13T01:35:31.587828430Z" level=info msg="StartContainer for \"976d60d601122dff5560ffce015a9429bd093fb25015e2deb5720caec50335c7\"" Dec 13 01:35:31.678814 systemd[1]: Started cri-containerd-976d60d601122dff5560ffce015a9429bd093fb25015e2deb5720caec50335c7.scope - libcontainer container 976d60d601122dff5560ffce015a9429bd093fb25015e2deb5720caec50335c7. Dec 13 01:35:31.815803 containerd[1471]: time="2024-12-13T01:35:31.815737805Z" level=info msg="StartContainer for \"976d60d601122dff5560ffce015a9429bd093fb25015e2deb5720caec50335c7\" returns successfully" Dec 13 01:35:31.860478 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:35:31.861654 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Dec 13 01:35:32.509860 kubelet[2538]: E1213 01:35:32.509776 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:33.338313 containerd[1471]: time="2024-12-13T01:35:33.338233448Z" level=info msg="StopPodSandbox for \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\"" Dec 13 01:35:33.457564 kernel: bpftool[4002]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:35:33.483558 kubelet[2538]: I1213 01:35:33.483457 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4krg2" podStartSLOduration=3.088764857 podStartE2EDuration="30.483433563s" podCreationTimestamp="2024-12-13 01:35:03 +0000 UTC" firstStartedPulling="2024-12-13 01:35:04.138138324 +0000 UTC m=+11.893531737" lastFinishedPulling="2024-12-13 01:35:31.53280703 +0000 UTC m=+39.288200443" observedRunningTime="2024-12-13 01:35:32.534279067 +0000 UTC m=+40.289672480" watchObservedRunningTime="2024-12-13 01:35:33.483433563 +0000 UTC m=+41.238826966" Dec 13 01:35:33.512161 kubelet[2538]: E1213 01:35:33.512094 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:33.604900 systemd[1]: Started sshd@9-10.0.0.100:22-10.0.0.1:53324.service - OpenSSH per-connection server daemon (10.0.0.1:53324). Dec 13 01:35:33.667082 sshd[4053]: Accepted publickey for core from 10.0.0.1 port 53324 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:35:33.669437 sshd[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:35:33.679289 systemd-logind[1455]: New session 10 of user core. Dec 13 01:35:33.680088 containerd[1471]: 2024-12-13 01:35:33.483 [INFO][3985] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:33.680088 containerd[1471]: 2024-12-13 01:35:33.485 [INFO][3985] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" iface="eth0" netns="/var/run/netns/cni-1ca196c8-7f73-049b-3e83-7559bfd95666" Dec 13 01:35:33.680088 containerd[1471]: 2024-12-13 01:35:33.485 [INFO][3985] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" iface="eth0" netns="/var/run/netns/cni-1ca196c8-7f73-049b-3e83-7559bfd95666" Dec 13 01:35:33.680088 containerd[1471]: 2024-12-13 01:35:33.486 [INFO][3985] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" iface="eth0" netns="/var/run/netns/cni-1ca196c8-7f73-049b-3e83-7559bfd95666" Dec 13 01:35:33.680088 containerd[1471]: 2024-12-13 01:35:33.486 [INFO][3985] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:33.680088 containerd[1471]: 2024-12-13 01:35:33.486 [INFO][3985] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:33.680088 containerd[1471]: 2024-12-13 01:35:33.657 [INFO][4015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" HandleID="k8s-pod-network.533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Workload="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:33.680088 containerd[1471]: 2024-12-13 01:35:33.658 [INFO][4015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:33.680088 containerd[1471]: 2024-12-13 01:35:33.659 [INFO][4015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:35:33.680088 containerd[1471]: 2024-12-13 01:35:33.669 [WARNING][4015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" HandleID="k8s-pod-network.533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Workload="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:33.680088 containerd[1471]: 2024-12-13 01:35:33.669 [INFO][4015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" HandleID="k8s-pod-network.533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Workload="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:33.680088 containerd[1471]: 2024-12-13 01:35:33.671 [INFO][4015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:33.680088 containerd[1471]: 2024-12-13 01:35:33.675 [INFO][3985] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:33.682725 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:35:33.684423 kubelet[2538]: E1213 01:35:33.683204 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:33.684500 containerd[1471]: time="2024-12-13T01:35:33.682716138Z" level=info msg="TearDown network for sandbox \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\" successfully" Dec 13 01:35:33.684500 containerd[1471]: time="2024-12-13T01:35:33.682752366Z" level=info msg="StopPodSandbox for \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\" returns successfully" Dec 13 01:35:33.685373 containerd[1471]: time="2024-12-13T01:35:33.684949018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xtl2z,Uid:332e1ee2-384e-4c56-b8b4-25490ccaf929,Namespace:kube-system,Attempt:1,}" Dec 13 01:35:33.686346 systemd[1]: run-netns-cni\x2d1ca196c8\x2d7f73\x2d049b\x2d3e83\x2d7559bfd95666.mount: Deactivated successfully. 
Dec 13 01:35:33.809261 systemd-networkd[1406]: vxlan.calico: Link UP Dec 13 01:35:33.809273 systemd-networkd[1406]: vxlan.calico: Gained carrier Dec 13 01:35:33.899155 sshd[4053]: pam_unix(sshd:session): session closed for user core Dec 13 01:35:33.905631 systemd[1]: sshd@9-10.0.0.100:22-10.0.0.1:53324.service: Deactivated successfully. Dec 13 01:35:33.910807 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:35:33.917645 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:35:33.919495 systemd-logind[1455]: Removed session 10. Dec 13 01:35:33.958960 systemd-networkd[1406]: calie9ac21084e7: Link UP Dec 13 01:35:33.959972 systemd-networkd[1406]: calie9ac21084e7: Gained carrier Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.775 [INFO][4059] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0 coredns-6f6b679f8f- kube-system 332e1ee2-384e-4c56-b8b4-25490ccaf929 887 0 2024-12-13 01:34:56 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-xtl2z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie9ac21084e7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" Namespace="kube-system" Pod="coredns-6f6b679f8f-xtl2z" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xtl2z-" Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.776 [INFO][4059] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" Namespace="kube-system" Pod="coredns-6f6b679f8f-xtl2z" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.844 [INFO][4099] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" HandleID="k8s-pod-network.d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" Workload="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.867 [INFO][4099] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" HandleID="k8s-pod-network.d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" Workload="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003096d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-xtl2z", "timestamp":"2024-12-13 01:35:33.844321085 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.867 [INFO][4099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.867 [INFO][4099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.867 [INFO][4099] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.873 [INFO][4099] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" host="localhost" Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.890 [INFO][4099] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.897 [INFO][4099] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.899 [INFO][4099] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.906 [INFO][4099] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.906 [INFO][4099] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" host="localhost" Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.912 [INFO][4099] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747 Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.930 [INFO][4099] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" host="localhost" Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.944 [INFO][4099] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" host="localhost" Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.944 [INFO][4099] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" host="localhost" Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.944 [INFO][4099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:35:34.060924 containerd[1471]: 2024-12-13 01:35:33.944 [INFO][4099] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" HandleID="k8s-pod-network.d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" Workload="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:34.061914 containerd[1471]: 2024-12-13 01:35:33.950 [INFO][4059] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" Namespace="kube-system" Pod="coredns-6f6b679f8f-xtl2z" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"332e1ee2-384e-4c56-b8b4-25490ccaf929", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 34, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-xtl2z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9ac21084e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:34.061914 containerd[1471]: 2024-12-13 01:35:33.950 [INFO][4059] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" Namespace="kube-system" Pod="coredns-6f6b679f8f-xtl2z" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:34.061914 containerd[1471]: 2024-12-13 01:35:33.950 [INFO][4059] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9ac21084e7 ContainerID="d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" Namespace="kube-system" Pod="coredns-6f6b679f8f-xtl2z" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:34.061914 containerd[1471]: 2024-12-13 01:35:33.960 [INFO][4059] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" Namespace="kube-system" Pod="coredns-6f6b679f8f-xtl2z" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:34.061914 containerd[1471]: 2024-12-13 01:35:33.961
[INFO][4059] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" Namespace="kube-system" Pod="coredns-6f6b679f8f-xtl2z" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"332e1ee2-384e-4c56-b8b4-25490ccaf929", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 34, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747", Pod:"coredns-6f6b679f8f-xtl2z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9ac21084e7", MAC:"fa:b6:54:69:08:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:34.061914 containerd[1471]: 2024-12-13 01:35:34.053 [INFO][4059] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747" Namespace="kube-system" Pod="coredns-6f6b679f8f-xtl2z" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:34.188496 containerd[1471]: time="2024-12-13T01:35:34.187904447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:35:34.188496 containerd[1471]: time="2024-12-13T01:35:34.188020635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:35:34.188496 containerd[1471]: time="2024-12-13T01:35:34.188037506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:34.189747 containerd[1471]: time="2024-12-13T01:35:34.188395457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:34.244860 systemd[1]: Started cri-containerd-d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747.scope - libcontainer container d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747.
Dec 13 01:35:34.266688 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:35:34.307370 containerd[1471]: time="2024-12-13T01:35:34.307263856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xtl2z,Uid:332e1ee2-384e-4c56-b8b4-25490ccaf929,Namespace:kube-system,Attempt:1,} returns sandbox id \"d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747\"" Dec 13 01:35:34.308251 kubelet[2538]: E1213 01:35:34.308206 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:34.311201 containerd[1471]: time="2024-12-13T01:35:34.311130480Z" level=info msg="CreateContainer within sandbox \"d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:35:34.337644 containerd[1471]: time="2024-12-13T01:35:34.337583356Z" level=info msg="StopPodSandbox for \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\"" Dec 13 01:35:34.337772 containerd[1471]: time="2024-12-13T01:35:34.337637337Z" level=info msg="StopPodSandbox for \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\"" Dec 13 01:35:34.358397 containerd[1471]: time="2024-12-13T01:35:34.358337036Z" level=info msg="CreateContainer within sandbox \"d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2fd22b3663d86a97f390a73ec857787a853c3bf789fd47ef81ecb94c90b4e67d\"" Dec 13 01:35:34.360053 containerd[1471]: time="2024-12-13T01:35:34.360014404Z" level=info msg="StartContainer for \"2fd22b3663d86a97f390a73ec857787a853c3bf789fd47ef81ecb94c90b4e67d\"" Dec 13 01:35:34.423265 systemd[1]: Started cri-containerd-2fd22b3663d86a97f390a73ec857787a853c3bf789fd47ef81ecb94c90b4e67d.scope - libcontainer container 2fd22b3663d86a97f390a73ec857787a853c3bf789fd47ef81ecb94c90b4e67d. Dec 13 01:35:34.499705 containerd[1471]: time="2024-12-13T01:35:34.497474516Z" level=info msg="StartContainer for \"2fd22b3663d86a97f390a73ec857787a853c3bf789fd47ef81ecb94c90b4e67d\" returns successfully" Dec 13 01:35:34.508991 containerd[1471]: 2024-12-13 01:35:34.431 [INFO][4252] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:34.508991 containerd[1471]: 2024-12-13 01:35:34.432 [INFO][4252] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" iface="eth0" netns="/var/run/netns/cni-ea279788-bf55-4df5-957a-fbcc74e19874" Dec 13 01:35:34.508991 containerd[1471]: 2024-12-13 01:35:34.435 [INFO][4252] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" iface="eth0" netns="/var/run/netns/cni-ea279788-bf55-4df5-957a-fbcc74e19874" Dec 13 01:35:34.508991 containerd[1471]: 2024-12-13 01:35:34.438 [INFO][4252] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" iface="eth0" netns="/var/run/netns/cni-ea279788-bf55-4df5-957a-fbcc74e19874" Dec 13 01:35:34.508991 containerd[1471]: 2024-12-13 01:35:34.438 [INFO][4252] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:34.508991 containerd[1471]: 2024-12-13 01:35:34.438 [INFO][4252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:34.508991 containerd[1471]: 2024-12-13 01:35:34.481 [INFO][4295] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" HandleID="k8s-pod-network.b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Workload="localhost-k8s-csi--node--driver--b257v-eth0" Dec 13 01:35:34.508991 containerd[1471]: 2024-12-13 01:35:34.481 [INFO][4295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:34.508991 containerd[1471]: 2024-12-13 01:35:34.482 [INFO][4295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:35:34.508991 containerd[1471]: 2024-12-13 01:35:34.492 [WARNING][4295] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" HandleID="k8s-pod-network.b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Workload="localhost-k8s-csi--node--driver--b257v-eth0" Dec 13 01:35:34.508991 containerd[1471]: 2024-12-13 01:35:34.493 [INFO][4295] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" HandleID="k8s-pod-network.b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Workload="localhost-k8s-csi--node--driver--b257v-eth0" Dec 13 01:35:34.508991 containerd[1471]: 2024-12-13 01:35:34.495 [INFO][4295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:34.508991 containerd[1471]: 2024-12-13 01:35:34.505 [INFO][4252] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:34.510157 containerd[1471]: time="2024-12-13T01:35:34.510117559Z" level=info msg="TearDown network for sandbox \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\" successfully" Dec 13 01:35:34.510245 containerd[1471]: time="2024-12-13T01:35:34.510225050Z" level=info msg="StopPodSandbox for \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\" returns successfully" Dec 13 01:35:34.511249 containerd[1471]: time="2024-12-13T01:35:34.511220067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b257v,Uid:40d19590-db9c-41bd-9d1d-bd10d8bd864c,Namespace:calico-system,Attempt:1,}" Dec 13 01:35:34.517786 kubelet[2538]: E1213 01:35:34.517669 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:34.535675 containerd[1471]: 2024-12-13 01:35:34.451 [INFO][4259] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:34.535675 containerd[1471]: 2024-12-13 01:35:34.451 [INFO][4259] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" iface="eth0" netns="/var/run/netns/cni-c9d5fb90-390f-4fb0-2a51-0e8b4f67ae63" Dec 13 01:35:34.535675 containerd[1471]: 2024-12-13 01:35:34.452 [INFO][4259] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" iface="eth0" netns="/var/run/netns/cni-c9d5fb90-390f-4fb0-2a51-0e8b4f67ae63" Dec 13 01:35:34.535675 containerd[1471]: 2024-12-13 01:35:34.453 [INFO][4259] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" iface="eth0" netns="/var/run/netns/cni-c9d5fb90-390f-4fb0-2a51-0e8b4f67ae63" Dec 13 01:35:34.535675 containerd[1471]: 2024-12-13 01:35:34.453 [INFO][4259] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:34.535675 containerd[1471]: 2024-12-13 01:35:34.454 [INFO][4259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:34.535675 containerd[1471]: 2024-12-13 01:35:34.512 [INFO][4303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" HandleID="k8s-pod-network.9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Workload="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" Dec 13 01:35:34.535675 containerd[1471]: 2024-12-13 01:35:34.513 [INFO][4303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:34.535675 containerd[1471]: 2024-12-13 01:35:34.513 [INFO][4303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:35:34.535675 containerd[1471]: 2024-12-13 01:35:34.523 [WARNING][4303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" HandleID="k8s-pod-network.9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Workload="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" Dec 13 01:35:34.535675 containerd[1471]: 2024-12-13 01:35:34.523 [INFO][4303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" HandleID="k8s-pod-network.9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Workload="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" Dec 13 01:35:34.535675 containerd[1471]: 2024-12-13 01:35:34.527 [INFO][4303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:34.535675 containerd[1471]: 2024-12-13 01:35:34.531 [INFO][4259] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:34.536171 containerd[1471]: time="2024-12-13T01:35:34.535992059Z" level=info msg="TearDown network for sandbox \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\" successfully" Dec 13 01:35:34.536171 containerd[1471]: time="2024-12-13T01:35:34.536030821Z" level=info msg="StopPodSandbox for \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\" returns successfully" Dec 13 01:35:34.536466 kubelet[2538]: E1213 01:35:34.536419 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:35:34.537169 containerd[1471]: time="2024-12-13T01:35:34.536917094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dldnc,Uid:9d170bf4-a7af-412b-aafa-970b3da3b8b8,Namespace:kube-system,Attempt:1,}" Dec 13 01:35:34.548016 kubelet[2538]: I1213 01:35:34.547157 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xtl2z" podStartSLOduration=38.547131269 podStartE2EDuration="38.547131269s" podCreationTimestamp="2024-12-13 01:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:35:34.546996236 +0000 UTC m=+42.302389669" watchObservedRunningTime="2024-12-13 01:35:34.547131269 +0000 UTC m=+42.302524693" Dec 13 01:35:34.693300 systemd[1]: run-netns-cni\x2dc9d5fb90\x2d390f\x2d4fb0\x2d2a51\x2d0e8b4f67ae63.mount: Deactivated successfully. Dec 13 01:35:34.693427 systemd[1]: run-netns-cni\x2dea279788\x2dbf55\x2d4df5\x2d957a\x2dfbcc74e19874.mount: Deactivated successfully. 
Dec 13 01:35:34.782255 systemd-networkd[1406]: calic3e6020a254: Link UP Dec 13 01:35:34.782674 systemd-networkd[1406]: calic3e6020a254: Gained carrier Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.665 [INFO][4342] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--dldnc-eth0 coredns-6f6b679f8f- kube-system 9d170bf4-a7af-412b-aafa-970b3da3b8b8 903 0 2024-12-13 01:34:56 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-dldnc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic3e6020a254 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" Namespace="kube-system" Pod="coredns-6f6b679f8f-dldnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dldnc-" Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.666 [INFO][4342] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" Namespace="kube-system" Pod="coredns-6f6b679f8f-dldnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.725 [INFO][4354] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" HandleID="k8s-pod-network.07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" Workload="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.736 [INFO][4354] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" HandleID="k8s-pod-network.07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" Workload="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334c40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-dldnc", "timestamp":"2024-12-13 01:35:34.725321916 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.736 [INFO][4354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.736 [INFO][4354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.736 [INFO][4354] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.739 [INFO][4354] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" host="localhost"
Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.743 [INFO][4354] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.749 [INFO][4354] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.751 [INFO][4354] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.753 [INFO][4354] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.753 [INFO][4354] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" host="localhost"
Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.756 [INFO][4354] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151
Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.765 [INFO][4354] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" host="localhost"
Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.774 [INFO][4354] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" host="localhost"
Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.774 [INFO][4354] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" host="localhost"
Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.774 [INFO][4354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:35:34.806346 containerd[1471]: 2024-12-13 01:35:34.774 [INFO][4354] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" HandleID="k8s-pod-network.07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" Workload="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0"
Dec 13 01:35:34.816271 containerd[1471]: 2024-12-13 01:35:34.778 [INFO][4342] cni-plugin/k8s.go 386: Populated endpoint ContainerID="07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" Namespace="kube-system" Pod="coredns-6f6b679f8f-dldnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dldnc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9d170bf4-a7af-412b-aafa-970b3da3b8b8", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 34, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-dldnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3e6020a254", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:35:34.816271 containerd[1471]: 2024-12-13 01:35:34.778 [INFO][4342] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" Namespace="kube-system" Pod="coredns-6f6b679f8f-dldnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0"
Dec 13 01:35:34.816271 containerd[1471]: 2024-12-13 01:35:34.779 [INFO][4342] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3e6020a254 ContainerID="07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" Namespace="kube-system" Pod="coredns-6f6b679f8f-dldnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0"
Dec 13 01:35:34.816271 containerd[1471]: 2024-12-13 01:35:34.782 [INFO][4342] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" Namespace="kube-system" Pod="coredns-6f6b679f8f-dldnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0"
Dec 13 01:35:34.816271 containerd[1471]: 2024-12-13 01:35:34.783 [INFO][4342] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" Namespace="kube-system" Pod="coredns-6f6b679f8f-dldnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dldnc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9d170bf4-a7af-412b-aafa-970b3da3b8b8", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 34, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151", Pod:"coredns-6f6b679f8f-dldnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3e6020a254", MAC:"32:d6:80:dc:e3:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:35:34.816271 containerd[1471]: 2024-12-13 01:35:34.800 [INFO][4342] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151" Namespace="kube-system" Pod="coredns-6f6b679f8f-dldnc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0"
Dec 13 01:35:34.977133 containerd[1471]: time="2024-12-13T01:35:34.976948231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:35:34.977133 containerd[1471]: time="2024-12-13T01:35:34.977034564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:35:34.977133 containerd[1471]: time="2024-12-13T01:35:34.977049783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:35:34.977403 containerd[1471]: time="2024-12-13T01:35:34.977185198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:35:35.003703 systemd[1]: Started cri-containerd-07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151.scope - libcontainer container 07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151.
Dec 13 01:35:35.021705 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:35:35.054902 containerd[1471]: time="2024-12-13T01:35:35.054746015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dldnc,Uid:9d170bf4-a7af-412b-aafa-970b3da3b8b8,Namespace:kube-system,Attempt:1,} returns sandbox id \"07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151\""
Dec 13 01:35:35.055699 kubelet[2538]: E1213 01:35:35.055664 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:35:35.058192 containerd[1471]: time="2024-12-13T01:35:35.058148589Z" level=info msg="CreateContainer within sandbox \"07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:35:35.095621 systemd-networkd[1406]: cali2f31204ca05: Link UP
Dec 13 01:35:35.095873 systemd-networkd[1406]: cali2f31204ca05: Gained carrier
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.666 [INFO][4319] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--b257v-eth0 csi-node-driver- calico-system 40d19590-db9c-41bd-9d1d-bd10d8bd864c 902 0 2024-12-13 01:35:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-b257v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2f31204ca05 [] []}} ContainerID="8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" Namespace="calico-system" Pod="csi-node-driver-b257v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b257v-"
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.666 [INFO][4319] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" Namespace="calico-system" Pod="csi-node-driver-b257v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b257v-eth0"
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.729 [INFO][4358] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" HandleID="k8s-pod-network.8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" Workload="localhost-k8s-csi--node--driver--b257v-eth0"
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.737 [INFO][4358] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" HandleID="k8s-pod-network.8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" Workload="localhost-k8s-csi--node--driver--b257v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309200), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-b257v", "timestamp":"2024-12-13 01:35:34.728984097 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.738 [INFO][4358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.774 [INFO][4358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.774 [INFO][4358] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.840 [INFO][4358] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" host="localhost"
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.846 [INFO][4358] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.852 [INFO][4358] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.854 [INFO][4358] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.857 [INFO][4358] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.857 [INFO][4358] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" host="localhost"
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.859 [INFO][4358] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:34.875 [INFO][4358] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" host="localhost"
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:35.088 [INFO][4358] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" host="localhost"
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:35.088 [INFO][4358] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" host="localhost"
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:35.088 [INFO][4358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:35:35.162740 containerd[1471]: 2024-12-13 01:35:35.088 [INFO][4358] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" HandleID="k8s-pod-network.8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" Workload="localhost-k8s-csi--node--driver--b257v-eth0"
Dec 13 01:35:35.163628 containerd[1471]: 2024-12-13 01:35:35.092 [INFO][4319] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" Namespace="calico-system" Pod="csi-node-driver-b257v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b257v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b257v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40d19590-db9c-41bd-9d1d-bd10d8bd864c", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-b257v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2f31204ca05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:35:35.163628 containerd[1471]: 2024-12-13 01:35:35.093 [INFO][4319] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" Namespace="calico-system" Pod="csi-node-driver-b257v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b257v-eth0"
Dec 13 01:35:35.163628 containerd[1471]: 2024-12-13 01:35:35.093 [INFO][4319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f31204ca05 ContainerID="8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" Namespace="calico-system" Pod="csi-node-driver-b257v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b257v-eth0"
Dec 13 01:35:35.163628 containerd[1471]: 2024-12-13 01:35:35.095 [INFO][4319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" Namespace="calico-system" Pod="csi-node-driver-b257v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b257v-eth0"
Dec 13 01:35:35.163628 containerd[1471]: 2024-12-13 01:35:35.096 [INFO][4319] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" Namespace="calico-system" Pod="csi-node-driver-b257v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b257v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b257v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40d19590-db9c-41bd-9d1d-bd10d8bd864c", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c", Pod:"csi-node-driver-b257v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2f31204ca05", MAC:"06:70:59:fc:a2:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:35:35.163628 containerd[1471]: 2024-12-13 01:35:35.156 [INFO][4319] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c" Namespace="calico-system" Pod="csi-node-driver-b257v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b257v-eth0"
Dec 13 01:35:35.291730 containerd[1471]: time="2024-12-13T01:35:35.291550451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:35:35.291730 containerd[1471]: time="2024-12-13T01:35:35.291648131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:35:35.291730 containerd[1471]: time="2024-12-13T01:35:35.291664303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:35:35.291963 containerd[1471]: time="2024-12-13T01:35:35.291777983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:35:35.315071 containerd[1471]: time="2024-12-13T01:35:35.314728337Z" level=info msg="CreateContainer within sandbox \"07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a3217da186fffd810825af42d975697a5789260cc35cb62dda89a7b3f2fb4da\""
Dec 13 01:35:35.316772 containerd[1471]: time="2024-12-13T01:35:35.316685848Z" level=info msg="StartContainer for \"2a3217da186fffd810825af42d975697a5789260cc35cb62dda89a7b3f2fb4da\""
Dec 13 01:35:35.321820 systemd[1]: Started cri-containerd-8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c.scope - libcontainer container 8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c.
Dec 13 01:35:35.337637 containerd[1471]: time="2024-12-13T01:35:35.336848360Z" level=info msg="StopPodSandbox for \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\""
Dec 13 01:35:35.343561 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:35:35.375837 systemd[1]: Started cri-containerd-2a3217da186fffd810825af42d975697a5789260cc35cb62dda89a7b3f2fb4da.scope - libcontainer container 2a3217da186fffd810825af42d975697a5789260cc35cb62dda89a7b3f2fb4da.
Dec 13 01:35:35.386842 containerd[1471]: time="2024-12-13T01:35:35.386650002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b257v,Uid:40d19590-db9c-41bd-9d1d-bd10d8bd864c,Namespace:calico-system,Attempt:1,} returns sandbox id \"8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c\""
Dec 13 01:35:35.393570 containerd[1471]: time="2024-12-13T01:35:35.391608808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Dec 13 01:35:35.425881 containerd[1471]: time="2024-12-13T01:35:35.425790837Z" level=info msg="StartContainer for \"2a3217da186fffd810825af42d975697a5789260cc35cb62dda89a7b3f2fb4da\" returns successfully"
Dec 13 01:35:35.492702 systemd-networkd[1406]: vxlan.calico: Gained IPv6LL
Dec 13 01:35:35.514613 containerd[1471]: 2024-12-13 01:35:35.436 [INFO][4519] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58"
Dec 13 01:35:35.514613 containerd[1471]: 2024-12-13 01:35:35.436 [INFO][4519] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" iface="eth0" netns="/var/run/netns/cni-b4ae2754-c0a3-5993-0912-be1f6a9c40db"
Dec 13 01:35:35.514613 containerd[1471]: 2024-12-13 01:35:35.436 [INFO][4519] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" iface="eth0" netns="/var/run/netns/cni-b4ae2754-c0a3-5993-0912-be1f6a9c40db"
Dec 13 01:35:35.514613 containerd[1471]: 2024-12-13 01:35:35.437 [INFO][4519] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" iface="eth0" netns="/var/run/netns/cni-b4ae2754-c0a3-5993-0912-be1f6a9c40db"
Dec 13 01:35:35.514613 containerd[1471]: 2024-12-13 01:35:35.437 [INFO][4519] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58"
Dec 13 01:35:35.514613 containerd[1471]: 2024-12-13 01:35:35.437 [INFO][4519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58"
Dec 13 01:35:35.514613 containerd[1471]: 2024-12-13 01:35:35.485 [INFO][4546] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" HandleID="k8s-pod-network.92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Workload="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0"
Dec 13 01:35:35.514613 containerd[1471]: 2024-12-13 01:35:35.485 [INFO][4546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:35:35.514613 containerd[1471]: 2024-12-13 01:35:35.485 [INFO][4546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:35:35.514613 containerd[1471]: 2024-12-13 01:35:35.504 [WARNING][4546] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" HandleID="k8s-pod-network.92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Workload="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0"
Dec 13 01:35:35.514613 containerd[1471]: 2024-12-13 01:35:35.505 [INFO][4546] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" HandleID="k8s-pod-network.92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Workload="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0"
Dec 13 01:35:35.514613 containerd[1471]: 2024-12-13 01:35:35.507 [INFO][4546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:35:35.514613 containerd[1471]: 2024-12-13 01:35:35.510 [INFO][4519] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58"
Dec 13 01:35:35.515379 containerd[1471]: time="2024-12-13T01:35:35.514844601Z" level=info msg="TearDown network for sandbox \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\" successfully"
Dec 13 01:35:35.515379 containerd[1471]: time="2024-12-13T01:35:35.514881472Z" level=info msg="StopPodSandbox for \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\" returns successfully"
Dec 13 01:35:35.515829 containerd[1471]: time="2024-12-13T01:35:35.515799414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4cc7c965-9dhjq,Uid:11de9ed5-e662-461f-9f1b-7ff39b760401,Namespace:calico-system,Attempt:1,}"
Dec 13 01:35:35.520120 kubelet[2538]: E1213 01:35:35.520057 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:35:35.521773 kubelet[2538]: E1213 01:35:35.521730 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:35:35.547379 kubelet[2538]: I1213 01:35:35.545463 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dldnc" podStartSLOduration=39.545408275 podStartE2EDuration="39.545408275s" podCreationTimestamp="2024-12-13 01:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:35:35.532750742 +0000 UTC m=+43.288144166" watchObservedRunningTime="2024-12-13 01:35:35.545408275 +0000 UTC m=+43.300801688"
Dec 13 01:35:35.685621 systemd-networkd[1406]: calie9ac21084e7: Gained IPv6LL
Dec 13 01:35:35.690222 systemd[1]: run-netns-cni\x2db4ae2754\x2dc0a3\x2d5993\x2d0912\x2dbe1f6a9c40db.mount: Deactivated successfully.
Dec 13 01:35:35.695969 systemd-networkd[1406]: calif95cd89016f: Link UP
Dec 13 01:35:35.697158 systemd-networkd[1406]: calif95cd89016f: Gained carrier
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.606 [INFO][4559] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0 calico-kube-controllers-c4cc7c965- calico-system 11de9ed5-e662-461f-9f1b-7ff39b760401 925 0 2024-12-13 01:35:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c4cc7c965 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c4cc7c965-9dhjq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif95cd89016f [] []}} ContainerID="845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" Namespace="calico-system" Pod="calico-kube-controllers-c4cc7c965-9dhjq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-"
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.607 [INFO][4559] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" Namespace="calico-system" Pod="calico-kube-controllers-c4cc7c965-9dhjq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0"
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.648 [INFO][4578] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" HandleID="k8s-pod-network.845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" Workload="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0"
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.656 [INFO][4578] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" HandleID="k8s-pod-network.845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" Workload="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dcc80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c4cc7c965-9dhjq", "timestamp":"2024-12-13 01:35:35.648615394 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.657 [INFO][4578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.657 [INFO][4578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.657 [INFO][4578] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.659 [INFO][4578] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" host="localhost"
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.663 [INFO][4578] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.667 [INFO][4578] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.669 [INFO][4578] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.672 [INFO][4578] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.672 [INFO][4578] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" host="localhost"
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.673 [INFO][4578] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.676 [INFO][4578] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" host="localhost"
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.682 [INFO][4578] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" host="localhost"
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.682 [INFO][4578] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" host="localhost"
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.682 [INFO][4578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:35:35.715080 containerd[1471]: 2024-12-13 01:35:35.682 [INFO][4578] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" HandleID="k8s-pod-network.845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" Workload="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0"
Dec 13 01:35:35.715765 containerd[1471]: 2024-12-13 01:35:35.691 [INFO][4559] cni-plugin/k8s.go 386: Populated endpoint ContainerID="845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" Namespace="calico-system" Pod="calico-kube-controllers-c4cc7c965-9dhjq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0", GenerateName:"calico-kube-controllers-c4cc7c965-", Namespace:"calico-system", SelfLink:"", UID:"11de9ed5-e662-461f-9f1b-7ff39b760401", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4cc7c965", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c4cc7c965-9dhjq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif95cd89016f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:35:35.715765 containerd[1471]: 2024-12-13 01:35:35.691 [INFO][4559] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" Namespace="calico-system" Pod="calico-kube-controllers-c4cc7c965-9dhjq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0"
Dec 13 01:35:35.715765 containerd[1471]: 2024-12-13 01:35:35.691 [INFO][4559] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif95cd89016f ContainerID="845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" Namespace="calico-system" Pod="calico-kube-controllers-c4cc7c965-9dhjq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0"
Dec 13 01:35:35.715765 containerd[1471]: 2024-12-13 01:35:35.696 [INFO][4559] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" Namespace="calico-system" Pod="calico-kube-controllers-c4cc7c965-9dhjq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0"
Dec 13 01:35:35.715765 containerd[1471]: 2024-12-13 01:35:35.697 [INFO][4559] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" Namespace="calico-system" Pod="calico-kube-controllers-c4cc7c965-9dhjq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0", GenerateName:"calico-kube-controllers-c4cc7c965-", Namespace:"calico-system", SelfLink:"", UID:"11de9ed5-e662-461f-9f1b-7ff39b760401", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4cc7c965", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173", Pod:"calico-kube-controllers-c4cc7c965-9dhjq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif95cd89016f", MAC:"a2:26:2c:d3:53:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:35:35.715765 containerd[1471]: 2024-12-13 01:35:35.711 [INFO][4559] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173" Namespace="calico-system" Pod="calico-kube-controllers-c4cc7c965-9dhjq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0"
Dec 13 01:35:35.785032 containerd[1471]: time="2024-12-13T01:35:35.784626377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:35:35.785032 containerd[1471]: time="2024-12-13T01:35:35.784720932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:35:35.785032 containerd[1471]: time="2024-12-13T01:35:35.784739137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:35:35.785032 containerd[1471]: time="2024-12-13T01:35:35.784866844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:35:35.809254 systemd[1]: run-containerd-runc-k8s.io-845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173-runc.pE5YN9.mount: Deactivated successfully.
Dec 13 01:35:35.819722 systemd[1]: Started cri-containerd-845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173.scope - libcontainer container 845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173.
Dec 13 01:35:35.835005 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:35:35.910594 containerd[1471]: time="2024-12-13T01:35:35.910553315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4cc7c965-9dhjq,Uid:11de9ed5-e662-461f-9f1b-7ff39b760401,Namespace:calico-system,Attempt:1,} returns sandbox id \"845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173\""
Dec 13 01:35:36.338288 containerd[1471]: time="2024-12-13T01:35:36.337046023Z" level=info msg="StopPodSandbox for \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\""
Dec 13 01:35:36.454631 systemd-networkd[1406]: cali2f31204ca05: Gained IPv6LL
Dec 13 01:35:36.467350 containerd[1471]: 2024-12-13 01:35:36.398 [INFO][4657] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7"
Dec 13 01:35:36.467350 containerd[1471]: 2024-12-13 01:35:36.399 [INFO][4657] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" iface="eth0" netns="/var/run/netns/cni-561f1812-7daf-d73d-7c40-6b0c1c28dc53"
Dec 13 01:35:36.467350 containerd[1471]: 2024-12-13 01:35:36.399 [INFO][4657] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" iface="eth0" netns="/var/run/netns/cni-561f1812-7daf-d73d-7c40-6b0c1c28dc53"
Dec 13 01:35:36.467350 containerd[1471]: 2024-12-13 01:35:36.399 [INFO][4657] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" iface="eth0" netns="/var/run/netns/cni-561f1812-7daf-d73d-7c40-6b0c1c28dc53"
Dec 13 01:35:36.467350 containerd[1471]: 2024-12-13 01:35:36.399 [INFO][4657] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7"
Dec 13 01:35:36.467350 containerd[1471]: 2024-12-13 01:35:36.399 [INFO][4657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7"
Dec 13 01:35:36.467350 containerd[1471]: 2024-12-13 01:35:36.454 [INFO][4664] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" HandleID="k8s-pod-network.08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Workload="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0"
Dec 13 01:35:36.467350 containerd[1471]: 2024-12-13 01:35:36.455 [INFO][4664] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:35:36.467350 containerd[1471]: 2024-12-13 01:35:36.455 [INFO][4664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:35:36.467350 containerd[1471]: 2024-12-13 01:35:36.460 [WARNING][4664] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" HandleID="k8s-pod-network.08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Workload="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0"
Dec 13 01:35:36.467350 containerd[1471]: 2024-12-13 01:35:36.460 [INFO][4664] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" HandleID="k8s-pod-network.08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Workload="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0"
Dec 13 01:35:36.467350 containerd[1471]: 2024-12-13 01:35:36.462 [INFO][4664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:35:36.467350 containerd[1471]: 2024-12-13 01:35:36.464 [INFO][4657] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7"
Dec 13 01:35:36.469159 containerd[1471]: time="2024-12-13T01:35:36.467596426Z" level=info msg="TearDown network for sandbox \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\" successfully"
Dec 13 01:35:36.469159 containerd[1471]: time="2024-12-13T01:35:36.467626344Z" level=info msg="StopPodSandbox for \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\" returns successfully"
Dec 13 01:35:36.469159 containerd[1471]: time="2024-12-13T01:35:36.468559384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89794f578-dc8fd,Uid:b40daba8-6884-4f86-8a32-cd23147e2173,Namespace:calico-apiserver,Attempt:1,}"
Dec 13 01:35:36.471729 systemd[1]: run-netns-cni\x2d561f1812\x2d7daf\x2dd73d\x2d7c40\x2d6b0c1c28dc53.mount: Deactivated successfully.
Dec 13 01:35:36.525648 kubelet[2538]: E1213 01:35:36.525577 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:35:36.526231 kubelet[2538]: E1213 01:35:36.525926 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:35:36.773788 systemd-networkd[1406]: calic3e6020a254: Gained IPv6LL
Dec 13 01:35:36.829769 systemd-networkd[1406]: calie137d162225: Link UP
Dec 13 01:35:36.830067 systemd-networkd[1406]: calie137d162225: Gained carrier
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.529 [INFO][4672] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0 calico-apiserver-89794f578- calico-apiserver b40daba8-6884-4f86-8a32-cd23147e2173 951 0 2024-12-13 01:35:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:89794f578 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-89794f578-dc8fd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie137d162225 [] []}} ContainerID="93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-dc8fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--dc8fd-"
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.529 [INFO][4672] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-dc8fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0"
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.737 [INFO][4686] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" HandleID="k8s-pod-network.93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" Workload="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0"
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.748 [INFO][4686] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" HandleID="k8s-pod-network.93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" Workload="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000284230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-89794f578-dc8fd", "timestamp":"2024-12-13 01:35:36.737598967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.749 [INFO][4686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.749 [INFO][4686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.749 [INFO][4686] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.750 [INFO][4686] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" host="localhost"
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.754 [INFO][4686] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.758 [INFO][4686] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.760 [INFO][4686] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.762 [INFO][4686] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.762 [INFO][4686] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" host="localhost"
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.763 [INFO][4686] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.796 [INFO][4686] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" host="localhost"
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.822 [INFO][4686] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" host="localhost"
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.822 [INFO][4686] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" host="localhost"
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.822 [INFO][4686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:35:36.844222 containerd[1471]: 2024-12-13 01:35:36.822 [INFO][4686] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" HandleID="k8s-pod-network.93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" Workload="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0"
Dec 13 01:35:36.845490 containerd[1471]: 2024-12-13 01:35:36.825 [INFO][4672] cni-plugin/k8s.go 386: Populated endpoint ContainerID="93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-dc8fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0", GenerateName:"calico-apiserver-89794f578-", Namespace:"calico-apiserver", SelfLink:"", UID:"b40daba8-6884-4f86-8a32-cd23147e2173", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89794f578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-89794f578-dc8fd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie137d162225", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:35:36.845490 containerd[1471]: 2024-12-13 01:35:36.826 [INFO][4672] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-dc8fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0"
Dec 13 01:35:36.845490 containerd[1471]: 2024-12-13 01:35:36.826 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie137d162225 ContainerID="93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-dc8fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0"
Dec 13 01:35:36.845490 containerd[1471]: 2024-12-13 01:35:36.828 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-dc8fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0"
Dec 13 01:35:36.845490 containerd[1471]: 2024-12-13 01:35:36.828 [INFO][4672] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-dc8fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0", GenerateName:"calico-apiserver-89794f578-", Namespace:"calico-apiserver", SelfLink:"", UID:"b40daba8-6884-4f86-8a32-cd23147e2173", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89794f578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d", Pod:"calico-apiserver-89794f578-dc8fd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie137d162225", MAC:"36:af:e8:35:65:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:35:36.845490 containerd[1471]: 2024-12-13 01:35:36.839 [INFO][4672] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-dc8fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0"
Dec 13 01:35:36.964757 systemd-networkd[1406]: calif95cd89016f: Gained IPv6LL
Dec 13 01:35:36.985200 containerd[1471]: time="2024-12-13T01:35:36.985029780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:35:36.985200 containerd[1471]: time="2024-12-13T01:35:36.985100237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:35:36.985200 containerd[1471]: time="2024-12-13T01:35:36.985112210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:35:36.985392 containerd[1471]: time="2024-12-13T01:35:36.985240529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:35:37.012837 systemd[1]: Started cri-containerd-93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d.scope - libcontainer container 93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d.
Dec 13 01:35:37.028838 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:35:37.057709 containerd[1471]: time="2024-12-13T01:35:37.057641790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89794f578-dc8fd,Uid:b40daba8-6884-4f86-8a32-cd23147e2173,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d\""
Dec 13 01:35:37.337392 containerd[1471]: time="2024-12-13T01:35:37.337214906Z" level=info msg="StopPodSandbox for \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\""
Dec 13 01:35:37.534100 kubelet[2538]: E1213 01:35:37.534045 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:35:37.692104 containerd[1471]: 2024-12-13 01:35:37.592 [INFO][4763] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b"
Dec 13 01:35:37.692104 containerd[1471]: 2024-12-13 01:35:37.593 [INFO][4763] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" iface="eth0" netns="/var/run/netns/cni-55a5a09a-de30-ebfb-e0e3-2a71d90a5246"
Dec 13 01:35:37.692104 containerd[1471]: 2024-12-13 01:35:37.593 [INFO][4763] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" iface="eth0" netns="/var/run/netns/cni-55a5a09a-de30-ebfb-e0e3-2a71d90a5246"
Dec 13 01:35:37.692104 containerd[1471]: 2024-12-13 01:35:37.593 [INFO][4763] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" iface="eth0" netns="/var/run/netns/cni-55a5a09a-de30-ebfb-e0e3-2a71d90a5246"
Dec 13 01:35:37.692104 containerd[1471]: 2024-12-13 01:35:37.593 [INFO][4763] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b"
Dec 13 01:35:37.692104 containerd[1471]: 2024-12-13 01:35:37.594 [INFO][4763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b"
Dec 13 01:35:37.692104 containerd[1471]: 2024-12-13 01:35:37.679 [INFO][4770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" HandleID="k8s-pod-network.cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Workload="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0"
Dec 13 01:35:37.692104 containerd[1471]: 2024-12-13 01:35:37.679 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:35:37.692104 containerd[1471]: 2024-12-13 01:35:37.679 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:35:37.692104 containerd[1471]: 2024-12-13 01:35:37.684 [WARNING][4770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" HandleID="k8s-pod-network.cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Workload="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:37.692104 containerd[1471]: 2024-12-13 01:35:37.684 [INFO][4770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" HandleID="k8s-pod-network.cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Workload="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:37.692104 containerd[1471]: 2024-12-13 01:35:37.686 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:37.692104 containerd[1471]: 2024-12-13 01:35:37.689 [INFO][4763] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Dec 13 01:35:37.692923 containerd[1471]: time="2024-12-13T01:35:37.692245289Z" level=info msg="TearDown network for sandbox \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\" successfully" Dec 13 01:35:37.692923 containerd[1471]: time="2024-12-13T01:35:37.692282110Z" level=info msg="StopPodSandbox for \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\" returns successfully" Dec 13 01:35:37.693757 containerd[1471]: time="2024-12-13T01:35:37.693728082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89794f578-hrxnn,Uid:24589cea-6828-4164-a7da-b10ab65d700a,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:35:37.695935 systemd[1]: run-netns-cni\x2d55a5a09a\x2dde30\x2debfb\x2de0e3\x2d2a71d90a5246.mount: Deactivated successfully. 
Dec 13 01:35:38.164635 containerd[1471]: time="2024-12-13T01:35:38.164565646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:38.169476 containerd[1471]: time="2024-12-13T01:35:38.165849512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 01:35:38.169476 containerd[1471]: time="2024-12-13T01:35:38.167580764Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:38.166976 systemd-networkd[1406]: cali42baf1eea7c: Link UP Dec 13 01:35:38.168178 systemd-networkd[1406]: cali42baf1eea7c: Gained carrier Dec 13 01:35:38.171043 containerd[1471]: time="2024-12-13T01:35:38.171004522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:38.171565 containerd[1471]: time="2024-12-13T01:35:38.171511183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.7796831s" Dec 13 01:35:38.171565 containerd[1471]: time="2024-12-13T01:35:38.171560087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:35:38.175296 containerd[1471]: time="2024-12-13T01:35:38.175248399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:35:38.177443 containerd[1471]: time="2024-12-13T01:35:38.177396106Z" level=info msg="CreateContainer within sandbox \"8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.767 [INFO][4782] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0 calico-apiserver-89794f578- calico-apiserver 24589cea-6828-4164-a7da-b10ab65d700a 965 0 2024-12-13 01:35:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:89794f578 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-89794f578-hrxnn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali42baf1eea7c [] []}} ContainerID="220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-hrxnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--hrxnn-" Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.767 [INFO][4782] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-hrxnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 
01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.799 [INFO][4794] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" HandleID="k8s-pod-network.220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" Workload="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.836 [INFO][4794] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" HandleID="k8s-pod-network.220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" Workload="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dcdf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-89794f578-hrxnn", "timestamp":"2024-12-13 01:35:37.799400256 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.839 [INFO][4794] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.839 [INFO][4794] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.840 [INFO][4794] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.841 [INFO][4794] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" host="localhost" Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.845 [INFO][4794] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.850 [INFO][4794] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.852 [INFO][4794] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.854 [INFO][4794] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.854 [INFO][4794] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" host="localhost" Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:37.856 [INFO][4794] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95 Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:38.133 [INFO][4794] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" host="localhost" Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:38.160 [INFO][4794] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" host="localhost" Dec 13 
01:35:38.192917 containerd[1471]: 2024-12-13 01:35:38.160 [INFO][4794] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" host="localhost" Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:38.160 [INFO][4794] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:38.192917 containerd[1471]: 2024-12-13 01:35:38.160 [INFO][4794] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" HandleID="k8s-pod-network.220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" Workload="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:38.193625 containerd[1471]: 2024-12-13 01:35:38.164 [INFO][4782] cni-plugin/k8s.go 386: Populated endpoint ContainerID="220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-hrxnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0", GenerateName:"calico-apiserver-89794f578-", Namespace:"calico-apiserver", SelfLink:"", UID:"24589cea-6828-4164-a7da-b10ab65d700a", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89794f578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-89794f578-hrxnn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42baf1eea7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:38.193625 containerd[1471]: 2024-12-13 01:35:38.164 [INFO][4782] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-hrxnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:38.193625 containerd[1471]: 2024-12-13 01:35:38.164 [INFO][4782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42baf1eea7c ContainerID="220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-hrxnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:38.193625 containerd[1471]: 2024-12-13 01:35:38.167 [INFO][4782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" 
Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-hrxnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:38.193625 containerd[1471]: 2024-12-13 01:35:38.168 [INFO][4782] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-hrxnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0", GenerateName:"calico-apiserver-89794f578-", Namespace:"calico-apiserver", SelfLink:"", UID:"24589cea-6828-4164-a7da-b10ab65d700a", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89794f578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95", Pod:"calico-apiserver-89794f578-hrxnn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42baf1eea7c", MAC:"16:a8:42:71:01:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:38.193625 containerd[1471]: 2024-12-13 01:35:38.184 [INFO][4782] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95" Namespace="calico-apiserver" Pod="calico-apiserver-89794f578-hrxnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:38.215021 containerd[1471]: time="2024-12-13T01:35:38.214963637Z" level=info msg="CreateContainer within sandbox \"8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0f1fe96a7e537fa1cd41a5979b3cd81de7f1e0aab154e7f4b279f8fa342d0020\"" Dec 13 01:35:38.216584 containerd[1471]: time="2024-12-13T01:35:38.216536463Z" level=info msg="StartContainer for \"0f1fe96a7e537fa1cd41a5979b3cd81de7f1e0aab154e7f4b279f8fa342d0020\"" Dec 13 01:35:38.228082 containerd[1471]: time="2024-12-13T01:35:38.227947885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:35:38.228082 containerd[1471]: time="2024-12-13T01:35:38.228032008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:35:38.228082 containerd[1471]: time="2024-12-13T01:35:38.228052878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:38.233017 containerd[1471]: time="2024-12-13T01:35:38.228320707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:38.258491 systemd[1]: Started cri-containerd-220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95.scope - libcontainer container 220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95. Dec 13 01:35:38.266077 systemd[1]: Started cri-containerd-0f1fe96a7e537fa1cd41a5979b3cd81de7f1e0aab154e7f4b279f8fa342d0020.scope - libcontainer container 0f1fe96a7e537fa1cd41a5979b3cd81de7f1e0aab154e7f4b279f8fa342d0020. Dec 13 01:35:38.281432 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:35:38.325439 containerd[1471]: time="2024-12-13T01:35:38.324306566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89794f578-hrxnn,Uid:24589cea-6828-4164-a7da-b10ab65d700a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95\"" Dec 13 01:35:38.332385 containerd[1471]: time="2024-12-13T01:35:38.332327806Z" level=info msg="StartContainer for \"0f1fe96a7e537fa1cd41a5979b3cd81de7f1e0aab154e7f4b279f8fa342d0020\" returns successfully" Dec 13 01:35:38.820784 systemd-networkd[1406]: calie137d162225: Gained IPv6LL Dec 13 01:35:38.908232 systemd[1]: Started sshd@10-10.0.0.100:22-10.0.0.1:55154.service - OpenSSH per-connection server daemon (10.0.0.1:55154). Dec 13 01:35:38.946183 sshd[4892]: Accepted publickey for core from 10.0.0.1 port 55154 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:35:38.948029 sshd[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:35:38.952063 systemd-logind[1455]: New session 11 of user core. Dec 13 01:35:38.960699 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:35:39.078753 sshd[4892]: pam_unix(sshd:session): session closed for user core Dec 13 01:35:39.088609 systemd[1]: sshd@10-10.0.0.100:22-10.0.0.1:55154.service: Deactivated successfully. Dec 13 01:35:39.090641 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:35:39.092296 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:35:39.099811 systemd[1]: Started sshd@11-10.0.0.100:22-10.0.0.1:55158.service - OpenSSH per-connection server daemon (10.0.0.1:55158). Dec 13 01:35:39.100851 systemd-logind[1455]: Removed session 11. Dec 13 01:35:39.129712 sshd[4908]: Accepted publickey for core from 10.0.0.1 port 55158 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:35:39.131602 sshd[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:35:39.135963 systemd-logind[1455]: New session 12 of user core. Dec 13 01:35:39.143665 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:35:39.314445 sshd[4908]: pam_unix(sshd:session): session closed for user core Dec 13 01:35:39.325177 systemd[1]: sshd@11-10.0.0.100:22-10.0.0.1:55158.service: Deactivated successfully. Dec 13 01:35:39.327495 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:35:39.330247 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. 
Dec 13 01:35:39.332666 systemd-networkd[1406]: cali42baf1eea7c: Gained IPv6LL Dec 13 01:35:39.339257 systemd[1]: Started sshd@12-10.0.0.100:22-10.0.0.1:55160.service - OpenSSH per-connection server daemon (10.0.0.1:55160). Dec 13 01:35:39.341160 systemd-logind[1455]: Removed session 12. Dec 13 01:35:39.380006 sshd[4927]: Accepted publickey for core from 10.0.0.1 port 55160 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:35:39.382118 sshd[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:35:39.386875 systemd-logind[1455]: New session 13 of user core. Dec 13 01:35:39.397901 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:35:39.524730 sshd[4927]: pam_unix(sshd:session): session closed for user core Dec 13 01:35:39.529379 systemd[1]: sshd@12-10.0.0.100:22-10.0.0.1:55160.service: Deactivated successfully. Dec 13 01:35:39.532128 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:35:39.532935 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:35:39.533950 systemd-logind[1455]: Removed session 13. Dec 13 01:35:42.415870 containerd[1471]: time="2024-12-13T01:35:42.415777200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:42.447302 containerd[1471]: time="2024-12-13T01:35:42.447176590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:35:42.454077 containerd[1471]: time="2024-12-13T01:35:42.454004886Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:42.460262 containerd[1471]: time="2024-12-13T01:35:42.460172507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:42.461552 containerd[1471]: time="2024-12-13T01:35:42.461438129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.286129264s" Dec 13 01:35:42.461760 containerd[1471]: time="2024-12-13T01:35:42.461544554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:35:42.463678 containerd[1471]: time="2024-12-13T01:35:42.463639116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:35:42.472423 containerd[1471]: time="2024-12-13T01:35:42.472372418Z" level=info msg="CreateContainer within sandbox \"845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:35:42.507659 containerd[1471]: time="2024-12-13T01:35:42.507586068Z" level=info msg="CreateContainer within sandbox \"845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns 
container id \"3355da6ac8b96933fdf4d0927c98835b66e375a88bb21e70bff88e8849a1c106\"" Dec 13 01:35:42.508853 containerd[1471]: time="2024-12-13T01:35:42.508794350Z" level=info msg="StartContainer for \"3355da6ac8b96933fdf4d0927c98835b66e375a88bb21e70bff88e8849a1c106\"" Dec 13 01:35:42.544223 systemd[1]: Started cri-containerd-3355da6ac8b96933fdf4d0927c98835b66e375a88bb21e70bff88e8849a1c106.scope - libcontainer container 3355da6ac8b96933fdf4d0927c98835b66e375a88bb21e70bff88e8849a1c106. Dec 13 01:35:42.600127 containerd[1471]: time="2024-12-13T01:35:42.600070671Z" level=info msg="StartContainer for \"3355da6ac8b96933fdf4d0927c98835b66e375a88bb21e70bff88e8849a1c106\" returns successfully" Dec 13 01:35:43.626587 kubelet[2538]: I1213 01:35:43.626197 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c4cc7c965-9dhjq" podStartSLOduration=34.075066301 podStartE2EDuration="40.62617669s" podCreationTimestamp="2024-12-13 01:35:03 +0000 UTC" firstStartedPulling="2024-12-13 01:35:35.912330404 +0000 UTC m=+43.667723817" lastFinishedPulling="2024-12-13 01:35:42.463440783 +0000 UTC m=+50.218834206" observedRunningTime="2024-12-13 01:35:43.569792151 +0000 UTC m=+51.325185564" watchObservedRunningTime="2024-12-13 01:35:43.62617669 +0000 UTC m=+51.381570103" Dec 13 01:35:44.537123 systemd[1]: Started sshd@13-10.0.0.100:22-10.0.0.1:55176.service - OpenSSH per-connection server daemon (10.0.0.1:55176). Dec 13 01:35:44.579563 sshd[5018]: Accepted publickey for core from 10.0.0.1 port 55176 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:35:44.581586 sshd[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:35:44.586144 systemd-logind[1455]: New session 14 of user core. Dec 13 01:35:44.596741 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:35:44.740710 sshd[5018]: pam_unix(sshd:session): session closed for user core Dec 13 01:35:44.745587 systemd[1]: sshd@13-10.0.0.100:22-10.0.0.1:55176.service: Deactivated successfully. Dec 13 01:35:44.748206 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:35:44.749015 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:35:44.749990 systemd-logind[1455]: Removed session 14. 
Dec 13 01:35:46.231548 containerd[1471]: time="2024-12-13T01:35:46.231462446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:46.234087 containerd[1471]: time="2024-12-13T01:35:46.234014085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:35:46.237341 containerd[1471]: time="2024-12-13T01:35:46.237296481Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:46.240502 containerd[1471]: time="2024-12-13T01:35:46.240442184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:46.241225 containerd[1471]: time="2024-12-13T01:35:46.241180474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.777493535s" Dec 13 01:35:46.241225 containerd[1471]: time="2024-12-13T01:35:46.241221583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:35:46.242692 containerd[1471]: time="2024-12-13T01:35:46.242255883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:35:46.243688 containerd[1471]: time="2024-12-13T01:35:46.243648883Z" level=info msg="CreateContainer within sandbox \"93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:35:46.267661 containerd[1471]: time="2024-12-13T01:35:46.267597963Z" level=info msg="CreateContainer within sandbox \"93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eb1685e5d5506af90ada8ab8c2a5cbcccb325647d98bb73dfd6f3be9cc3788f9\"" Dec 13 01:35:46.268253 containerd[1471]: time="2024-12-13T01:35:46.268224217Z" level=info msg="StartContainer for \"eb1685e5d5506af90ada8ab8c2a5cbcccb325647d98bb73dfd6f3be9cc3788f9\"" Dec 13 01:35:46.301822 systemd[1]: Started cri-containerd-eb1685e5d5506af90ada8ab8c2a5cbcccb325647d98bb73dfd6f3be9cc3788f9.scope - libcontainer container eb1685e5d5506af90ada8ab8c2a5cbcccb325647d98bb73dfd6f3be9cc3788f9. 
Dec 13 01:35:46.350846 containerd[1471]: time="2024-12-13T01:35:46.350792342Z" level=info msg="StartContainer for \"eb1685e5d5506af90ada8ab8c2a5cbcccb325647d98bb73dfd6f3be9cc3788f9\" returns successfully" Dec 13 01:35:47.123168 containerd[1471]: time="2024-12-13T01:35:47.123098100Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:47.124526 containerd[1471]: time="2024-12-13T01:35:47.124380576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:35:47.127823 containerd[1471]: time="2024-12-13T01:35:47.127780713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 885.488662ms" Dec 13 01:35:47.127899 containerd[1471]: time="2024-12-13T01:35:47.127834367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:35:47.130203 containerd[1471]: time="2024-12-13T01:35:47.129541780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:35:47.132835 containerd[1471]: time="2024-12-13T01:35:47.132794043Z" level=info msg="CreateContainer within sandbox \"220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:35:47.565205 kubelet[2538]: I1213 01:35:47.565159 2538 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:35:47.743010 containerd[1471]: time="2024-12-13T01:35:47.742828961Z" level=info msg="CreateContainer within sandbox \"220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d335bb8017c01b850faa47096f7cb1f6937a463f23903d3752e2b4fd19b7b025\"" Dec 13 01:35:47.744774 containerd[1471]: time="2024-12-13T01:35:47.743698974Z" level=info msg="StartContainer for \"d335bb8017c01b850faa47096f7cb1f6937a463f23903d3752e2b4fd19b7b025\"" Dec 13 01:35:47.786766 systemd[1]: Started cri-containerd-d335bb8017c01b850faa47096f7cb1f6937a463f23903d3752e2b4fd19b7b025.scope - libcontainer container d335bb8017c01b850faa47096f7cb1f6937a463f23903d3752e2b4fd19b7b025. 
Dec 13 01:35:47.850609 containerd[1471]: time="2024-12-13T01:35:47.849043186Z" level=info msg="StartContainer for \"d335bb8017c01b850faa47096f7cb1f6937a463f23903d3752e2b4fd19b7b025\" returns successfully" Dec 13 01:35:48.834593 kubelet[2538]: I1213 01:35:48.834474 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-89794f578-dc8fd" podStartSLOduration=36.651516601 podStartE2EDuration="45.834433178s" podCreationTimestamp="2024-12-13 01:35:03 +0000 UTC" firstStartedPulling="2024-12-13 01:35:37.059140003 +0000 UTC m=+44.814533416" lastFinishedPulling="2024-12-13 01:35:46.24205649 +0000 UTC m=+53.997449993" observedRunningTime="2024-12-13 01:35:46.581568377 +0000 UTC m=+54.336961810" watchObservedRunningTime="2024-12-13 01:35:48.834433178 +0000 UTC m=+56.589826601" Dec 13 01:35:49.573980 kubelet[2538]: I1213 01:35:49.573921 2538 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:35:49.764054 systemd[1]: Started sshd@14-10.0.0.100:22-10.0.0.1:55646.service - OpenSSH per-connection server daemon (10.0.0.1:55646). Dec 13 01:35:49.821131 sshd[5133]: Accepted publickey for core from 10.0.0.1 port 55646 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:35:49.823640 sshd[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:35:49.829306 containerd[1471]: time="2024-12-13T01:35:49.828615981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:49.830640 containerd[1471]: time="2024-12-13T01:35:49.830569753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 01:35:49.833488 containerd[1471]: time="2024-12-13T01:35:49.833064654Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:49.833194 systemd-logind[1455]: New session 15 of user core. Dec 13 01:35:49.841848 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 13 01:35:49.844573 containerd[1471]: time="2024-12-13T01:35:49.843408637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:49.844573 containerd[1471]: time="2024-12-13T01:35:49.844462812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.714876837s" Dec 13 01:35:49.844573 containerd[1471]: time="2024-12-13T01:35:49.844556502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:35:49.849812 containerd[1471]: time="2024-12-13T01:35:49.849656485Z" level=info msg="CreateContainer within sandbox \"8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:35:49.889670 containerd[1471]: time="2024-12-13T01:35:49.889578648Z" level=info msg="CreateContainer within sandbox \"8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a5c34c9873368ae213c36b3d7bed3ef357c32e68a1a17041ba07c2735e6f5ef8\"" Dec 13 01:35:49.890608 containerd[1471]: time="2024-12-13T01:35:49.890538561Z" level=info msg="StartContainer for \"a5c34c9873368ae213c36b3d7bed3ef357c32e68a1a17041ba07c2735e6f5ef8\"" Dec 13 01:35:49.964844 systemd[1]: Started cri-containerd-a5c34c9873368ae213c36b3d7bed3ef357c32e68a1a17041ba07c2735e6f5ef8.scope - libcontainer container a5c34c9873368ae213c36b3d7bed3ef357c32e68a1a17041ba07c2735e6f5ef8. Dec 13 01:35:50.026809 containerd[1471]: time="2024-12-13T01:35:50.026734137Z" level=info msg="StartContainer for \"a5c34c9873368ae213c36b3d7bed3ef357c32e68a1a17041ba07c2735e6f5ef8\" returns successfully" Dec 13 01:35:50.056252 sshd[5133]: pam_unix(sshd:session): session closed for user core Dec 13 01:35:50.061856 systemd[1]: sshd@14-10.0.0.100:22-10.0.0.1:55646.service: Deactivated successfully. Dec 13 01:35:50.065337 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:35:50.066767 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:35:50.067968 systemd-logind[1455]: Removed session 15. 
Dec 13 01:35:50.423261 kubelet[2538]: I1213 01:35:50.423174 2538 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:35:50.423946 kubelet[2538]: I1213 01:35:50.423293 2538 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:35:50.595648 kubelet[2538]: I1213 01:35:50.595470 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b257v" podStartSLOduration=33.138861337 podStartE2EDuration="47.595417226s" podCreationTimestamp="2024-12-13 01:35:03 +0000 UTC" firstStartedPulling="2024-12-13 01:35:35.389385843 +0000 UTC m=+43.144779256" lastFinishedPulling="2024-12-13 01:35:49.845941732 +0000 UTC m=+57.601335145" observedRunningTime="2024-12-13 01:35:50.594806845 +0000 UTC m=+58.350200278" watchObservedRunningTime="2024-12-13 01:35:50.595417226 +0000 UTC m=+58.350810659" Dec 13 01:35:50.596207 kubelet[2538]: I1213 01:35:50.595790 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-89794f578-hrxnn" podStartSLOduration=38.793394568 podStartE2EDuration="47.595780944s" podCreationTimestamp="2024-12-13 01:35:03 +0000 UTC" firstStartedPulling="2024-12-13 01:35:38.32660192 +0000 UTC m=+46.081995333" lastFinishedPulling="2024-12-13 01:35:47.128988296 +0000 UTC m=+54.884381709" observedRunningTime="2024-12-13 01:35:48.835060824 +0000 UTC m=+56.590454237" watchObservedRunningTime="2024-12-13 01:35:50.595780944 +0000 UTC m=+58.351174377" Dec 13 01:35:52.334727 containerd[1471]: time="2024-12-13T01:35:52.334595036Z" level=info msg="StopPodSandbox for \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\"" Dec 13 01:35:52.468676 containerd[1471]: 2024-12-13 01:35:52.405 [WARNING][5200] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0", GenerateName:"calico-kube-controllers-c4cc7c965-", Namespace:"calico-system", SelfLink:"", UID:"11de9ed5-e662-461f-9f1b-7ff39b760401", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4cc7c965", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173", Pod:"calico-kube-controllers-c4cc7c965-9dhjq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif95cd89016f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:52.468676 containerd[1471]: 2024-12-13 01:35:52.405 [INFO][5200] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Dec 13 01:35:52.468676 containerd[1471]: 2024-12-13 01:35:52.405 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" iface="eth0" netns="" Dec 13 01:35:52.468676 containerd[1471]: 2024-12-13 01:35:52.405 [INFO][5200] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Dec 13 01:35:52.468676 containerd[1471]: 2024-12-13 01:35:52.405 [INFO][5200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Dec 13 01:35:52.468676 containerd[1471]: 2024-12-13 01:35:52.444 [INFO][5208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" HandleID="k8s-pod-network.92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Workload="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0" Dec 13 01:35:52.468676 containerd[1471]: 2024-12-13 01:35:52.445 [INFO][5208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:52.468676 containerd[1471]: 2024-12-13 01:35:52.445 [INFO][5208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:35:52.468676 containerd[1471]: 2024-12-13 01:35:52.458 [WARNING][5208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" HandleID="k8s-pod-network.92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Workload="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0" Dec 13 01:35:52.468676 containerd[1471]: 2024-12-13 01:35:52.458 [INFO][5208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" HandleID="k8s-pod-network.92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Workload="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0" Dec 13 01:35:52.468676 containerd[1471]: 2024-12-13 01:35:52.460 [INFO][5208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:52.468676 containerd[1471]: 2024-12-13 01:35:52.464 [INFO][5200] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Dec 13 01:35:52.469260 containerd[1471]: time="2024-12-13T01:35:52.468719196Z" level=info msg="TearDown network for sandbox \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\" successfully" Dec 13 01:35:52.469260 containerd[1471]: time="2024-12-13T01:35:52.468757309Z" level=info msg="StopPodSandbox for \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\" returns successfully" Dec 13 01:35:52.477426 containerd[1471]: time="2024-12-13T01:35:52.477343247Z" level=info msg="RemovePodSandbox for \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\"" Dec 13 01:35:52.480023 containerd[1471]: time="2024-12-13T01:35:52.479761512Z" level=info msg="Forcibly stopping sandbox \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\"" Dec 13 01:35:52.574060 containerd[1471]: 2024-12-13 01:35:52.528 [WARNING][5231] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0", GenerateName:"calico-kube-controllers-c4cc7c965-", Namespace:"calico-system", SelfLink:"", UID:"11de9ed5-e662-461f-9f1b-7ff39b760401", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4cc7c965", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"845ec5a6a0591a0ebbed87a3e0029fcb949e12f29e6ca5dc7edc490d45531173", Pod:"calico-kube-controllers-c4cc7c965-9dhjq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif95cd89016f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:52.574060 containerd[1471]: 2024-12-13 01:35:52.528 [INFO][5231] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Dec 13 01:35:52.574060 containerd[1471]: 2024-12-13 01:35:52.528 [INFO][5231] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" iface="eth0" netns="" Dec 13 01:35:52.574060 containerd[1471]: 2024-12-13 01:35:52.528 [INFO][5231] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Dec 13 01:35:52.574060 containerd[1471]: 2024-12-13 01:35:52.528 [INFO][5231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Dec 13 01:35:52.574060 containerd[1471]: 2024-12-13 01:35:52.558 [INFO][5238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" HandleID="k8s-pod-network.92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Workload="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0" Dec 13 01:35:52.574060 containerd[1471]: 2024-12-13 01:35:52.558 [INFO][5238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:52.574060 containerd[1471]: 2024-12-13 01:35:52.558 [INFO][5238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:35:52.574060 containerd[1471]: 2024-12-13 01:35:52.564 [WARNING][5238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" HandleID="k8s-pod-network.92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Workload="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0" Dec 13 01:35:52.574060 containerd[1471]: 2024-12-13 01:35:52.564 [INFO][5238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" HandleID="k8s-pod-network.92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Workload="localhost-k8s-calico--kube--controllers--c4cc7c965--9dhjq-eth0" Dec 13 01:35:52.574060 containerd[1471]: 2024-12-13 01:35:52.567 [INFO][5238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:52.574060 containerd[1471]: 2024-12-13 01:35:52.570 [INFO][5231] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58" Dec 13 01:35:52.574788 containerd[1471]: time="2024-12-13T01:35:52.574120046Z" level=info msg="TearDown network for sandbox \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\" successfully" Dec 13 01:35:52.584095 containerd[1471]: time="2024-12-13T01:35:52.584033820Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:35:52.584272 containerd[1471]: time="2024-12-13T01:35:52.584107471Z" level=info msg="RemovePodSandbox \"92171ed449e99d9802a68ca0ab8217f6a3286a51902045142dd3b8074b61bd58\" returns successfully" Dec 13 01:35:52.584599 containerd[1471]: time="2024-12-13T01:35:52.584573624Z" level=info msg="StopPodSandbox for \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\"" Dec 13 01:35:52.690340 containerd[1471]: 2024-12-13 01:35:52.636 [WARNING][5261] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b257v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40d19590-db9c-41bd-9d1d-bd10d8bd864c", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c", Pod:"csi-node-driver-b257v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2f31204ca05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:52.690340 containerd[1471]: 2024-12-13 01:35:52.636 [INFO][5261] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:52.690340 containerd[1471]: 2024-12-13 01:35:52.636 [INFO][5261] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" iface="eth0" netns="" Dec 13 01:35:52.690340 containerd[1471]: 2024-12-13 01:35:52.636 [INFO][5261] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:52.690340 containerd[1471]: 2024-12-13 01:35:52.636 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:52.690340 containerd[1471]: 2024-12-13 01:35:52.673 [INFO][5268] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" HandleID="k8s-pod-network.b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Workload="localhost-k8s-csi--node--driver--b257v-eth0" Dec 13 01:35:52.690340 containerd[1471]: 2024-12-13 01:35:52.674 [INFO][5268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:52.690340 containerd[1471]: 2024-12-13 01:35:52.674 [INFO][5268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:35:52.690340 containerd[1471]: 2024-12-13 01:35:52.681 [WARNING][5268] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" HandleID="k8s-pod-network.b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Workload="localhost-k8s-csi--node--driver--b257v-eth0" Dec 13 01:35:52.690340 containerd[1471]: 2024-12-13 01:35:52.682 [INFO][5268] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" HandleID="k8s-pod-network.b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Workload="localhost-k8s-csi--node--driver--b257v-eth0" Dec 13 01:35:52.690340 containerd[1471]: 2024-12-13 01:35:52.684 [INFO][5268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:52.690340 containerd[1471]: 2024-12-13 01:35:52.687 [INFO][5261] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:52.691712 containerd[1471]: time="2024-12-13T01:35:52.690354823Z" level=info msg="TearDown network for sandbox \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\" successfully" Dec 13 01:35:52.691712 containerd[1471]: time="2024-12-13T01:35:52.690393358Z" level=info msg="StopPodSandbox for \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\" returns successfully" Dec 13 01:35:52.691712 containerd[1471]: time="2024-12-13T01:35:52.691016943Z" level=info msg="RemovePodSandbox for \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\"" Dec 13 01:35:52.691712 containerd[1471]: time="2024-12-13T01:35:52.691055186Z" level=info msg="Forcibly stopping sandbox \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\"" Dec 13 01:35:52.825073 containerd[1471]: 2024-12-13 01:35:52.782 [WARNING][5293] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b257v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40d19590-db9c-41bd-9d1d-bd10d8bd864c", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8503a41a166bc040d469f0eaeacebc33020a653f143ce0ad5d55356ed182422c", Pod:"csi-node-driver-b257v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2f31204ca05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:52.825073 containerd[1471]: 2024-12-13 01:35:52.783 [INFO][5293] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:52.825073 containerd[1471]: 2024-12-13 01:35:52.783 [INFO][5293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" iface="eth0" netns="" Dec 13 01:35:52.825073 containerd[1471]: 2024-12-13 01:35:52.783 [INFO][5293] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:52.825073 containerd[1471]: 2024-12-13 01:35:52.783 [INFO][5293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:52.825073 containerd[1471]: 2024-12-13 01:35:52.810 [INFO][5301] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" HandleID="k8s-pod-network.b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Workload="localhost-k8s-csi--node--driver--b257v-eth0" Dec 13 01:35:52.825073 containerd[1471]: 2024-12-13 01:35:52.810 [INFO][5301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:52.825073 containerd[1471]: 2024-12-13 01:35:52.810 [INFO][5301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:35:52.825073 containerd[1471]: 2024-12-13 01:35:52.816 [WARNING][5301] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" HandleID="k8s-pod-network.b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Workload="localhost-k8s-csi--node--driver--b257v-eth0" Dec 13 01:35:52.825073 containerd[1471]: 2024-12-13 01:35:52.816 [INFO][5301] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" HandleID="k8s-pod-network.b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Workload="localhost-k8s-csi--node--driver--b257v-eth0" Dec 13 01:35:52.825073 containerd[1471]: 2024-12-13 01:35:52.818 [INFO][5301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:52.825073 containerd[1471]: 2024-12-13 01:35:52.821 [INFO][5293] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040" Dec 13 01:35:52.825877 containerd[1471]: time="2024-12-13T01:35:52.825140842Z" level=info msg="TearDown network for sandbox \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\" successfully" Dec 13 01:35:52.928605 containerd[1471]: time="2024-12-13T01:35:52.928426116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:35:52.928605 containerd[1471]: time="2024-12-13T01:35:52.928616501Z" level=info msg="RemovePodSandbox \"b8237c666404fa8562b0cfd5bc1917e8c01eb376275364f218c2f82a79e94040\" returns successfully" Dec 13 01:35:52.929588 containerd[1471]: time="2024-12-13T01:35:52.929380125Z" level=info msg="StopPodSandbox for \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\"" Dec 13 01:35:53.029407 kubelet[2538]: I1213 01:35:53.029350 2538 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:35:53.058830 containerd[1471]: 2024-12-13 01:35:52.991 [WARNING][5323] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0", GenerateName:"calico-apiserver-89794f578-", Namespace:"calico-apiserver", SelfLink:"", UID:"b40daba8-6884-4f86-8a32-cd23147e2173", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89794f578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d", Pod:"calico-apiserver-89794f578-dc8fd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie137d162225", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:53.058830 containerd[1471]: 2024-12-13 01:35:52.991 [INFO][5323] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Dec 13 01:35:53.058830 containerd[1471]: 2024-12-13 01:35:52.991 [INFO][5323] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" iface="eth0" netns="" Dec 13 01:35:53.058830 containerd[1471]: 2024-12-13 01:35:52.991 [INFO][5323] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Dec 13 01:35:53.058830 containerd[1471]: 2024-12-13 01:35:52.991 [INFO][5323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Dec 13 01:35:53.058830 containerd[1471]: 2024-12-13 01:35:53.033 [INFO][5331] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" HandleID="k8s-pod-network.08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Workload="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0" Dec 13 01:35:53.058830 containerd[1471]: 2024-12-13 01:35:53.033 [INFO][5331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:53.058830 containerd[1471]: 2024-12-13 01:35:53.033 [INFO][5331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:35:53.058830 containerd[1471]: 2024-12-13 01:35:53.047 [WARNING][5331] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" HandleID="k8s-pod-network.08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Workload="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0" Dec 13 01:35:53.058830 containerd[1471]: 2024-12-13 01:35:53.047 [INFO][5331] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" HandleID="k8s-pod-network.08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Workload="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0" Dec 13 01:35:53.058830 containerd[1471]: 2024-12-13 01:35:53.050 [INFO][5331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:53.058830 containerd[1471]: 2024-12-13 01:35:53.054 [INFO][5323] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Dec 13 01:35:53.058830 containerd[1471]: time="2024-12-13T01:35:53.058737525Z" level=info msg="TearDown network for sandbox \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\" successfully" Dec 13 01:35:53.058830 containerd[1471]: time="2024-12-13T01:35:53.058777752Z" level=info msg="StopPodSandbox for \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\" returns successfully" Dec 13 01:35:53.060950 containerd[1471]: time="2024-12-13T01:35:53.059784611Z" level=info msg="RemovePodSandbox for \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\"" Dec 13 01:35:53.060950 containerd[1471]: time="2024-12-13T01:35:53.059830519Z" level=info msg="Forcibly stopping sandbox \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\"" Dec 13 01:35:53.455917 containerd[1471]: 2024-12-13 01:35:53.197 [WARNING][5352] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0", GenerateName:"calico-apiserver-89794f578-", Namespace:"calico-apiserver", SelfLink:"", UID:"b40daba8-6884-4f86-8a32-cd23147e2173", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89794f578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93ab7fedc21d7b364240c417a6ea6e2fa7fce8ae2c874ae8ac088d69bb400f2d", Pod:"calico-apiserver-89794f578-dc8fd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie137d162225", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:53.455917 containerd[1471]: 2024-12-13 01:35:53.197 [INFO][5352] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Dec 13 01:35:53.455917 containerd[1471]: 2024-12-13 01:35:53.197 [INFO][5352] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" iface="eth0" netns="" Dec 13 01:35:53.455917 containerd[1471]: 2024-12-13 01:35:53.197 [INFO][5352] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Dec 13 01:35:53.455917 containerd[1471]: 2024-12-13 01:35:53.197 [INFO][5352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Dec 13 01:35:53.455917 containerd[1471]: 2024-12-13 01:35:53.228 [INFO][5359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" HandleID="k8s-pod-network.08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Workload="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0" Dec 13 01:35:53.455917 containerd[1471]: 2024-12-13 01:35:53.228 [INFO][5359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:53.455917 containerd[1471]: 2024-12-13 01:35:53.228 [INFO][5359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:35:53.455917 containerd[1471]: 2024-12-13 01:35:53.362 [WARNING][5359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" HandleID="k8s-pod-network.08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Workload="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0" Dec 13 01:35:53.455917 containerd[1471]: 2024-12-13 01:35:53.363 [INFO][5359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" HandleID="k8s-pod-network.08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Workload="localhost-k8s-calico--apiserver--89794f578--dc8fd-eth0" Dec 13 01:35:53.455917 containerd[1471]: 2024-12-13 01:35:53.449 [INFO][5359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:53.455917 containerd[1471]: 2024-12-13 01:35:53.452 [INFO][5352] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7" Dec 13 01:35:53.455917 containerd[1471]: time="2024-12-13T01:35:53.455819647Z" level=info msg="TearDown network for sandbox \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\" successfully" Dec 13 01:35:53.747397 kubelet[2538]: I1213 01:35:53.746960 2538 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:35:53.795398 containerd[1471]: time="2024-12-13T01:35:53.790843439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:35:53.795398 containerd[1471]: time="2024-12-13T01:35:53.790954532Z" level=info msg="RemovePodSandbox \"08fb52641e66273b17fd53b04aabbbfea359af2ad53b93558ad36a652cda48b7\" returns successfully" Dec 13 01:35:53.795398 containerd[1471]: time="2024-12-13T01:35:53.791636067Z" level=info msg="StopPodSandbox for \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\"" Dec 13 01:35:53.902193 containerd[1471]: 2024-12-13 01:35:53.857 [WARNING][5385] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dldnc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9d170bf4-a7af-412b-aafa-970b3da3b8b8", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 34, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151", Pod:"coredns-6f6b679f8f-dldnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3e6020a254", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:53.902193 containerd[1471]: 2024-12-13 01:35:53.857 [INFO][5385] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:53.902193 containerd[1471]: 2024-12-13 01:35:53.857 [INFO][5385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" iface="eth0" netns="" Dec 13 01:35:53.902193 containerd[1471]: 2024-12-13 01:35:53.857 [INFO][5385] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:53.902193 containerd[1471]: 2024-12-13 01:35:53.857 [INFO][5385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:53.902193 containerd[1471]: 2024-12-13 01:35:53.889 [INFO][5393] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" HandleID="k8s-pod-network.9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Workload="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" Dec 13 01:35:53.902193 containerd[1471]: 2024-12-13 01:35:53.889 [INFO][5393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:53.902193 containerd[1471]: 2024-12-13 01:35:53.889 [INFO][5393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:35:53.902193 containerd[1471]: 2024-12-13 01:35:53.894 [WARNING][5393] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" HandleID="k8s-pod-network.9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Workload="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" Dec 13 01:35:53.902193 containerd[1471]: 2024-12-13 01:35:53.894 [INFO][5393] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" HandleID="k8s-pod-network.9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Workload="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" Dec 13 01:35:53.902193 containerd[1471]: 2024-12-13 01:35:53.896 [INFO][5393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:53.902193 containerd[1471]: 2024-12-13 01:35:53.899 [INFO][5385] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:53.902753 containerd[1471]: time="2024-12-13T01:35:53.902254160Z" level=info msg="TearDown network for sandbox \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\" successfully" Dec 13 01:35:53.902753 containerd[1471]: time="2024-12-13T01:35:53.902291593Z" level=info msg="StopPodSandbox for \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\" returns successfully" Dec 13 01:35:53.902942 containerd[1471]: time="2024-12-13T01:35:53.902904066Z" level=info msg="RemovePodSandbox for \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\"" Dec 13 01:35:53.902975 containerd[1471]: time="2024-12-13T01:35:53.902942048Z" level=info msg="Forcibly stopping sandbox \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\"" Dec 13 01:35:53.981586 containerd[1471]: 2024-12-13 01:35:53.947 [WARNING][5415] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dldnc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9d170bf4-a7af-412b-aafa-970b3da3b8b8", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 34, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07438f528541cfaace29e15399d84b3a7cd811c7d1ec093231998f17e0daf151", Pod:"coredns-6f6b679f8f-dldnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3e6020a254", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:53.981586 containerd[1471]: 2024-12-13 01:35:53.947 [INFO][5415] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:53.981586 containerd[1471]: 2024-12-13 01:35:53.947 [INFO][5415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" iface="eth0" netns="" Dec 13 01:35:53.981586 containerd[1471]: 2024-12-13 01:35:53.947 [INFO][5415] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:53.981586 containerd[1471]: 2024-12-13 01:35:53.947 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:53.981586 containerd[1471]: 2024-12-13 01:35:53.967 [INFO][5422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" HandleID="k8s-pod-network.9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Workload="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" Dec 13 01:35:53.981586 containerd[1471]: 2024-12-13 01:35:53.967 [INFO][5422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:53.981586 containerd[1471]: 2024-12-13 01:35:53.967 [INFO][5422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:35:53.981586 containerd[1471]: 2024-12-13 01:35:53.973 [WARNING][5422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" HandleID="k8s-pod-network.9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Workload="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" Dec 13 01:35:53.981586 containerd[1471]: 2024-12-13 01:35:53.973 [INFO][5422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" HandleID="k8s-pod-network.9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Workload="localhost-k8s-coredns--6f6b679f8f--dldnc-eth0" Dec 13 01:35:53.981586 containerd[1471]: 2024-12-13 01:35:53.976 [INFO][5422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:53.981586 containerd[1471]: 2024-12-13 01:35:53.978 [INFO][5415] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a" Dec 13 01:35:53.982137 containerd[1471]: time="2024-12-13T01:35:53.981630470Z" level=info msg="TearDown network for sandbox \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\" successfully" Dec 13 01:35:53.986335 containerd[1471]: time="2024-12-13T01:35:53.986201758Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:35:53.986335 containerd[1471]: time="2024-12-13T01:35:53.986267033Z" level=info msg="RemovePodSandbox \"9a1b2601453b1de63c973e3939959800f28c2f60bf1803f38b5df354bc295a1a\" returns successfully" Dec 13 01:35:53.986765 containerd[1471]: time="2024-12-13T01:35:53.986739739Z" level=info msg="StopPodSandbox for \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\"" Dec 13 01:35:54.075496 containerd[1471]: 2024-12-13 01:35:54.028 [WARNING][5445] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"332e1ee2-384e-4c56-b8b4-25490ccaf929", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 34, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747", Pod:"coredns-6f6b679f8f-xtl2z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9ac21084e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:54.075496 containerd[1471]: 2024-12-13 01:35:54.029 [INFO][5445] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:54.075496 containerd[1471]: 2024-12-13 01:35:54.029 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" iface="eth0" netns="" Dec 13 01:35:54.075496 containerd[1471]: 2024-12-13 01:35:54.029 [INFO][5445] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:54.075496 containerd[1471]: 2024-12-13 01:35:54.029 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:54.075496 containerd[1471]: 2024-12-13 01:35:54.058 [INFO][5453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" HandleID="k8s-pod-network.533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Workload="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:54.075496 containerd[1471]: 2024-12-13 01:35:54.058 [INFO][5453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:54.075496 containerd[1471]: 2024-12-13 01:35:54.058 [INFO][5453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:35:54.075496 containerd[1471]: 2024-12-13 01:35:54.066 [WARNING][5453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" HandleID="k8s-pod-network.533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Workload="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:54.075496 containerd[1471]: 2024-12-13 01:35:54.066 [INFO][5453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" HandleID="k8s-pod-network.533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Workload="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:54.075496 containerd[1471]: 2024-12-13 01:35:54.068 [INFO][5453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:54.075496 containerd[1471]: 2024-12-13 01:35:54.072 [INFO][5445] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:54.076194 containerd[1471]: time="2024-12-13T01:35:54.075576165Z" level=info msg="TearDown network for sandbox \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\" successfully" Dec 13 01:35:54.076194 containerd[1471]: time="2024-12-13T01:35:54.075612444Z" level=info msg="StopPodSandbox for \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\" returns successfully" Dec 13 01:35:54.076341 containerd[1471]: time="2024-12-13T01:35:54.076298628Z" level=info msg="RemovePodSandbox for \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\"" Dec 13 01:35:54.076418 containerd[1471]: time="2024-12-13T01:35:54.076348203Z" level=info msg="Forcibly stopping sandbox \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\"" Dec 13 01:35:54.154435 containerd[1471]: 2024-12-13 01:35:54.117 [WARNING][5475] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"332e1ee2-384e-4c56-b8b4-25490ccaf929", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 34, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d385f7852d7e5b8e85bcae474bb97f33b40b7a750bd0ef9971060c4e809ff747", Pod:"coredns-6f6b679f8f-xtl2z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9ac21084e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:54.154435 containerd[1471]: 2024-12-13 01:35:54.118 [INFO][5475] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:54.154435 containerd[1471]: 2024-12-13 01:35:54.118 [INFO][5475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" iface="eth0" netns="" Dec 13 01:35:54.154435 containerd[1471]: 2024-12-13 01:35:54.118 [INFO][5475] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:54.154435 containerd[1471]: 2024-12-13 01:35:54.118 [INFO][5475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:54.154435 containerd[1471]: 2024-12-13 01:35:54.139 [INFO][5482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" HandleID="k8s-pod-network.533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Workload="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:54.154435 containerd[1471]: 2024-12-13 01:35:54.139 [INFO][5482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:54.154435 containerd[1471]: 2024-12-13 01:35:54.139 [INFO][5482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:35:54.154435 containerd[1471]: 2024-12-13 01:35:54.146 [WARNING][5482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" HandleID="k8s-pod-network.533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Workload="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:54.154435 containerd[1471]: 2024-12-13 01:35:54.147 [INFO][5482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" HandleID="k8s-pod-network.533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Workload="localhost-k8s-coredns--6f6b679f8f--xtl2z-eth0" Dec 13 01:35:54.154435 containerd[1471]: 2024-12-13 01:35:54.149 [INFO][5482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:54.154435 containerd[1471]: 2024-12-13 01:35:54.151 [INFO][5475] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050" Dec 13 01:35:54.154988 containerd[1471]: time="2024-12-13T01:35:54.154494946Z" level=info msg="TearDown network for sandbox \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\" successfully" Dec 13 01:35:54.159402 containerd[1471]: time="2024-12-13T01:35:54.159357747Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:35:54.159473 containerd[1471]: time="2024-12-13T01:35:54.159439303Z" level=info msg="RemovePodSandbox \"533b407cb906135193b243e31d5e05dea6a667c0ab58c742c58d1e086999e050\" returns successfully" Dec 13 01:35:54.160176 containerd[1471]: time="2024-12-13T01:35:54.160134375Z" level=info msg="StopPodSandbox for \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\"" Dec 13 01:35:54.234601 containerd[1471]: 2024-12-13 01:35:54.198 [WARNING][5506] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0", GenerateName:"calico-apiserver-89794f578-", Namespace:"calico-apiserver", SelfLink:"", UID:"24589cea-6828-4164-a7da-b10ab65d700a", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89794f578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95", Pod:"calico-apiserver-89794f578-hrxnn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42baf1eea7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:54.234601 containerd[1471]: 2024-12-13 01:35:54.198 [INFO][5506] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Dec 13 01:35:54.234601 containerd[1471]: 2024-12-13 01:35:54.198 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" iface="eth0" netns="" Dec 13 01:35:54.234601 containerd[1471]: 2024-12-13 01:35:54.198 [INFO][5506] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Dec 13 01:35:54.234601 containerd[1471]: 2024-12-13 01:35:54.198 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Dec 13 01:35:54.234601 containerd[1471]: 2024-12-13 01:35:54.221 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" HandleID="k8s-pod-network.cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Workload="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:54.234601 containerd[1471]: 2024-12-13 01:35:54.221 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:54.234601 containerd[1471]: 2024-12-13 01:35:54.221 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:35:54.234601 containerd[1471]: 2024-12-13 01:35:54.227 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" HandleID="k8s-pod-network.cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Workload="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:54.234601 containerd[1471]: 2024-12-13 01:35:54.227 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" HandleID="k8s-pod-network.cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Workload="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:54.234601 containerd[1471]: 2024-12-13 01:35:54.229 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:54.234601 containerd[1471]: 2024-12-13 01:35:54.231 [INFO][5506] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Dec 13 01:35:54.235181 containerd[1471]: time="2024-12-13T01:35:54.234661377Z" level=info msg="TearDown network for sandbox \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\" successfully" Dec 13 01:35:54.235181 containerd[1471]: time="2024-12-13T01:35:54.234690492Z" level=info msg="StopPodSandbox for \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\" returns successfully" Dec 13 01:35:54.235238 containerd[1471]: time="2024-12-13T01:35:54.235222010Z" level=info msg="RemovePodSandbox for \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\"" Dec 13 01:35:54.235279 containerd[1471]: time="2024-12-13T01:35:54.235247970Z" level=info msg="Forcibly stopping sandbox \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\"" Dec 13 01:35:54.320220 containerd[1471]: 2024-12-13 01:35:54.277 [WARNING][5537] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0", GenerateName:"calico-apiserver-89794f578-", Namespace:"calico-apiserver", SelfLink:"", UID:"24589cea-6828-4164-a7da-b10ab65d700a", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 35, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89794f578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"220956721c33670d6bcac32e2e4017dba01f4c84f7fa831860a7d122f1059e95", Pod:"calico-apiserver-89794f578-hrxnn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42baf1eea7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:35:54.320220 containerd[1471]: 2024-12-13 01:35:54.278 [INFO][5537] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Dec 13 01:35:54.320220 containerd[1471]: 2024-12-13 01:35:54.278 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" iface="eth0" netns="" Dec 13 01:35:54.320220 containerd[1471]: 2024-12-13 01:35:54.278 [INFO][5537] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Dec 13 01:35:54.320220 containerd[1471]: 2024-12-13 01:35:54.278 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Dec 13 01:35:54.320220 containerd[1471]: 2024-12-13 01:35:54.306 [INFO][5550] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" HandleID="k8s-pod-network.cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Workload="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:54.320220 containerd[1471]: 2024-12-13 01:35:54.306 [INFO][5550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:35:54.320220 containerd[1471]: 2024-12-13 01:35:54.306 [INFO][5550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:35:54.320220 containerd[1471]: 2024-12-13 01:35:54.313 [WARNING][5550] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" HandleID="k8s-pod-network.cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Workload="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:54.320220 containerd[1471]: 2024-12-13 01:35:54.313 [INFO][5550] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" HandleID="k8s-pod-network.cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Workload="localhost-k8s-calico--apiserver--89794f578--hrxnn-eth0" Dec 13 01:35:54.320220 containerd[1471]: 2024-12-13 01:35:54.314 [INFO][5550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:35:54.320220 containerd[1471]: 2024-12-13 01:35:54.317 [INFO][5537] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b" Dec 13 01:35:54.320841 containerd[1471]: time="2024-12-13T01:35:54.320290296Z" level=info msg="TearDown network for sandbox \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\" successfully" Dec 13 01:35:54.393136 containerd[1471]: time="2024-12-13T01:35:54.392949633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:35:54.393136 containerd[1471]: time="2024-12-13T01:35:54.393058762Z" level=info msg="RemovePodSandbox \"cd7cf43003b15c7c1dce2f836bc9cd86dbe1867ffa3522c37d4243210ad2ca3b\" returns successfully" Dec 13 01:35:55.073913 systemd[1]: Started sshd@15-10.0.0.100:22-10.0.0.1:55650.service - OpenSSH per-connection server daemon (10.0.0.1:55650). Dec 13 01:35:55.131327 sshd[5559]: Accepted publickey for core from 10.0.0.1 port 55650 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:35:55.133575 sshd[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:35:55.139344 systemd-logind[1455]: New session 16 of user core. Dec 13 01:35:55.147733 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:35:55.272142 sshd[5559]: pam_unix(sshd:session): session closed for user core Dec 13 01:35:55.277125 systemd[1]: sshd@15-10.0.0.100:22-10.0.0.1:55650.service: Deactivated successfully. Dec 13 01:35:55.279872 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:35:55.280614 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:35:55.281739 systemd-logind[1455]: Removed session 16. Dec 13 01:35:59.819439 kubelet[2538]: E1213 01:35:59.819377 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:00.295454 systemd[1]: Started sshd@16-10.0.0.100:22-10.0.0.1:51318.service - OpenSSH per-connection server daemon (10.0.0.1:51318). Dec 13 01:36:00.354225 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 51318 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:36:00.356483 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:00.362069 systemd-logind[1455]: New session 17 of user core. Dec 13 01:36:00.377882 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 13 01:36:00.515326 sshd[5598]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:00.519802 systemd[1]: sshd@16-10.0.0.100:22-10.0.0.1:51318.service: Deactivated successfully.
Dec 13 01:36:00.522387 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:36:00.523192 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:36:00.524206 systemd-logind[1455]: Removed session 17.
Dec 13 01:36:05.529312 systemd[1]: Started sshd@17-10.0.0.100:22-10.0.0.1:51330.service - OpenSSH per-connection server daemon (10.0.0.1:51330).
Dec 13 01:36:05.593445 sshd[5612]: Accepted publickey for core from 10.0.0.1 port 51330 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:36:05.595881 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:05.601664 systemd-logind[1455]: New session 18 of user core.
Dec 13 01:36:05.608733 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:36:05.745982 sshd[5612]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:05.756896 systemd[1]: sshd@17-10.0.0.100:22-10.0.0.1:51330.service: Deactivated successfully.
Dec 13 01:36:05.766199 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:36:05.768819 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:36:05.783554 systemd[1]: Started sshd@18-10.0.0.100:22-10.0.0.1:51346.service - OpenSSH per-connection server daemon (10.0.0.1:51346).
Dec 13 01:36:05.786871 systemd-logind[1455]: Removed session 18.
Dec 13 01:36:05.821308 sshd[5643]: Accepted publickey for core from 10.0.0.1 port 51346 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:36:05.823733 sshd[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:05.828381 systemd-logind[1455]: New session 19 of user core.
Dec 13 01:36:05.835718 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:36:06.709722 sshd[5643]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:06.717754 systemd[1]: sshd@18-10.0.0.100:22-10.0.0.1:51346.service: Deactivated successfully.
Dec 13 01:36:06.719742 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:36:06.720622 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:36:06.727003 systemd[1]: Started sshd@19-10.0.0.100:22-10.0.0.1:50250.service - OpenSSH per-connection server daemon (10.0.0.1:50250).
Dec 13 01:36:06.729298 systemd-logind[1455]: Removed session 19.
Dec 13 01:36:06.775794 sshd[5659]: Accepted publickey for core from 10.0.0.1 port 50250 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:36:06.777817 sshd[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:06.782906 systemd-logind[1455]: New session 20 of user core.
Dec 13 01:36:06.790680 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:36:07.673502 systemd[1]: run-containerd-runc-k8s.io-3355da6ac8b96933fdf4d0927c98835b66e375a88bb21e70bff88e8849a1c106-runc.Topipg.mount: Deactivated successfully.
Dec 13 01:36:09.149023 sshd[5659]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:09.162361 systemd[1]: sshd@19-10.0.0.100:22-10.0.0.1:50250.service: Deactivated successfully.
Dec 13 01:36:09.168054 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:36:09.171562 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:36:09.183172 systemd[1]: Started sshd@20-10.0.0.100:22-10.0.0.1:50264.service - OpenSSH per-connection server daemon (10.0.0.1:50264).
Dec 13 01:36:09.190200 systemd-logind[1455]: Removed session 20.
Dec 13 01:36:09.235500 sshd[5712]: Accepted publickey for core from 10.0.0.1 port 50264 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:36:09.238209 sshd[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:09.243869 systemd-logind[1455]: New session 21 of user core.
Dec 13 01:36:09.252735 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:36:09.733914 sshd[5712]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:09.748680 systemd[1]: sshd@20-10.0.0.100:22-10.0.0.1:50264.service: Deactivated successfully.
Dec 13 01:36:09.751099 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:36:09.754204 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:36:09.759907 systemd[1]: Started sshd@21-10.0.0.100:22-10.0.0.1:50268.service - OpenSSH per-connection server daemon (10.0.0.1:50268).
Dec 13 01:36:09.761233 systemd-logind[1455]: Removed session 21.
Dec 13 01:36:09.791773 sshd[5724]: Accepted publickey for core from 10.0.0.1 port 50268 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:36:09.793712 sshd[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:09.798496 systemd-logind[1455]: New session 22 of user core.
Dec 13 01:36:09.805685 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:36:09.960404 sshd[5724]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:09.964997 systemd[1]: sshd@21-10.0.0.100:22-10.0.0.1:50268.service: Deactivated successfully.
Dec 13 01:36:09.967144 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:36:09.968086 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:36:09.969318 systemd-logind[1455]: Removed session 22.
Dec 13 01:36:14.981008 systemd[1]: Started sshd@22-10.0.0.100:22-10.0.0.1:50278.service - OpenSSH per-connection server daemon (10.0.0.1:50278).
Dec 13 01:36:15.040412 sshd[5744]: Accepted publickey for core from 10.0.0.1 port 50278 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:36:15.043047 sshd[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:15.051093 systemd-logind[1455]: New session 23 of user core.
Dec 13 01:36:15.059278 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:36:15.242049 sshd[5744]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:15.246709 systemd[1]: sshd@22-10.0.0.100:22-10.0.0.1:50278.service: Deactivated successfully.
Dec 13 01:36:15.249341 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:36:15.252021 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:36:15.253411 systemd-logind[1455]: Removed session 23.
Dec 13 01:36:16.336586 kubelet[2538]: E1213 01:36:16.336500 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:36:20.254366 systemd[1]: Started sshd@23-10.0.0.100:22-10.0.0.1:54944.service - OpenSSH per-connection server daemon (10.0.0.1:54944).
Dec 13 01:36:20.289766 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 54944 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:36:20.291408 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:20.295799 systemd-logind[1455]: New session 24 of user core.
Dec 13 01:36:20.302670 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:36:20.414610 sshd[5765]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:20.418682 systemd[1]: sshd@23-10.0.0.100:22-10.0.0.1:54944.service: Deactivated successfully.
Dec 13 01:36:20.421022 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:36:20.421740 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:36:20.422741 systemd-logind[1455]: Removed session 24.
Dec 13 01:36:23.336094 kubelet[2538]: E1213 01:36:23.336041 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:36:24.336760 kubelet[2538]: E1213 01:36:24.336724 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:36:25.427021 systemd[1]: Started sshd@24-10.0.0.100:22-10.0.0.1:54946.service - OpenSSH per-connection server daemon (10.0.0.1:54946).
Dec 13 01:36:25.469042 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 54946 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:36:25.470821 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:25.475556 systemd-logind[1455]: New session 25 of user core.
Dec 13 01:36:25.485675 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:36:25.596590 sshd[5779]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:25.601174 systemd[1]: sshd@24-10.0.0.100:22-10.0.0.1:54946.service: Deactivated successfully.
Dec 13 01:36:25.603457 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:36:25.604166 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:36:25.605130 systemd-logind[1455]: Removed session 25.
Dec 13 01:36:30.626378 systemd[1]: Started sshd@25-10.0.0.100:22-10.0.0.1:49138.service - OpenSSH per-connection server daemon (10.0.0.1:49138).
Dec 13 01:36:30.669747 sshd[5818]: Accepted publickey for core from 10.0.0.1 port 49138 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:36:30.673295 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:30.679913 systemd-logind[1455]: New session 26 of user core.
Dec 13 01:36:30.687878 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:36:30.844859 sshd[5818]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:30.850160 systemd[1]: sshd@25-10.0.0.100:22-10.0.0.1:49138.service: Deactivated successfully.
Dec 13 01:36:30.853040 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:36:30.855774 systemd-logind[1455]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:36:30.859771 systemd-logind[1455]: Removed session 26.
Dec 13 01:36:31.336736 kubelet[2538]: E1213 01:36:31.336641 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:36:32.337646 kubelet[2538]: E1213 01:36:32.337583 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"