Nov 12 17:37:09.898308 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 12 17:37:09.898329 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Nov 12 16:24:35 -00 2024
Nov 12 17:37:09.898339 kernel: KASLR enabled
Nov 12 17:37:09.898345 kernel: efi: EFI v2.7 by EDK II
Nov 12 17:37:09.898351 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Nov 12 17:37:09.898356 kernel: random: crng init done
Nov 12 17:37:09.898364 kernel: ACPI: Early table checksum verification disabled
Nov 12 17:37:09.898369 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Nov 12 17:37:09.898376 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 12 17:37:09.898383 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:37:09.898389 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:37:09.898395 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:37:09.898401 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:37:09.898407 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:37:09.898415 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:37:09.898422 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:37:09.898429 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:37:09.898435 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:37:09.898442 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 12 17:37:09.898448 kernel: NUMA: Failed to initialise from firmware
Nov 12 17:37:09.898454 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 17:37:09.898461 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Nov 12 17:37:09.898467 kernel: Zone ranges:
Nov 12 17:37:09.898473 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 17:37:09.898479 kernel: DMA32 empty
Nov 12 17:37:09.898487 kernel: Normal empty
Nov 12 17:37:09.898493 kernel: Movable zone start for each node
Nov 12 17:37:09.898499 kernel: Early memory node ranges
Nov 12 17:37:09.898506 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Nov 12 17:37:09.898512 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Nov 12 17:37:09.898518 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Nov 12 17:37:09.898525 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Nov 12 17:37:09.898531 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Nov 12 17:37:09.898537 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Nov 12 17:37:09.898543 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 12 17:37:09.898550 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 17:37:09.898556 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 12 17:37:09.898564 kernel: psci: probing for conduit method from ACPI.
Nov 12 17:37:09.898580 kernel: psci: PSCIv1.1 detected in firmware.
Nov 12 17:37:09.898587 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 12 17:37:09.898598 kernel: psci: Trusted OS migration not required
Nov 12 17:37:09.898605 kernel: psci: SMC Calling Convention v1.1
Nov 12 17:37:09.898613 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 12 17:37:09.898621 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Nov 12 17:37:09.898628 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Nov 12 17:37:09.898635 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 12 17:37:09.898642 kernel: Detected PIPT I-cache on CPU0
Nov 12 17:37:09.898649 kernel: CPU features: detected: GIC system register CPU interface
Nov 12 17:37:09.898656 kernel: CPU features: detected: Hardware dirty bit management
Nov 12 17:37:09.898663 kernel: CPU features: detected: Spectre-v4
Nov 12 17:37:09.898669 kernel: CPU features: detected: Spectre-BHB
Nov 12 17:37:09.898676 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 12 17:37:09.898683 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 12 17:37:09.898691 kernel: CPU features: detected: ARM erratum 1418040
Nov 12 17:37:09.898699 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 12 17:37:09.898705 kernel: alternatives: applying boot alternatives
Nov 12 17:37:09.898714 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e
Nov 12 17:37:09.898721 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 17:37:09.898728 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 17:37:09.898735 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 17:37:09.898742 kernel: Fallback order for Node 0: 0
Nov 12 17:37:09.898748 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Nov 12 17:37:09.898755 kernel: Policy zone: DMA
Nov 12 17:37:09.898762 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 17:37:09.898770 kernel: software IO TLB: area num 4.
Nov 12 17:37:09.898776 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Nov 12 17:37:09.898783 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Nov 12 17:37:09.898790 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 17:37:09.898797 kernel: trace event string verifier disabled
Nov 12 17:37:09.898804 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 17:37:09.898811 kernel: rcu: RCU event tracing is enabled.
Nov 12 17:37:09.898818 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 17:37:09.898825 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 17:37:09.898832 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 17:37:09.898839 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 17:37:09.898846 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 17:37:09.898855 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 12 17:37:09.898861 kernel: GICv3: 256 SPIs implemented
Nov 12 17:37:09.898868 kernel: GICv3: 0 Extended SPIs implemented
Nov 12 17:37:09.898875 kernel: Root IRQ handler: gic_handle_irq
Nov 12 17:37:09.898881 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 12 17:37:09.898889 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 12 17:37:09.898895 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 12 17:37:09.898902 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Nov 12 17:37:09.898909 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Nov 12 17:37:09.898916 kernel: GICv3: using LPI property table @0x00000000400f0000
Nov 12 17:37:09.898923 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Nov 12 17:37:09.898931 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 17:37:09.898938 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 17:37:09.898945 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 12 17:37:09.898952 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 12 17:37:09.898959 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 12 17:37:09.898966 kernel: arm-pv: using stolen time PV
Nov 12 17:37:09.898973 kernel: Console: colour dummy device 80x25
Nov 12 17:37:09.898980 kernel: ACPI: Core revision 20230628
Nov 12 17:37:09.898988 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 12 17:37:09.898999 kernel: pid_max: default: 32768 minimum: 301
Nov 12 17:37:09.899008 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 17:37:09.899016 kernel: landlock: Up and running.
Nov 12 17:37:09.899023 kernel: SELinux: Initializing.
Nov 12 17:37:09.899030 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 17:37:09.899037 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 17:37:09.899044 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 17:37:09.899051 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 17:37:09.899058 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 17:37:09.899065 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 17:37:09.899073 kernel: Platform MSI: ITS@0x8080000 domain created
Nov 12 17:37:09.899080 kernel: PCI/MSI: ITS@0x8080000 domain created
Nov 12 17:37:09.899087 kernel: Remapping and enabling EFI services.
Nov 12 17:37:09.899094 kernel: smp: Bringing up secondary CPUs ...
Nov 12 17:37:09.899101 kernel: Detected PIPT I-cache on CPU1
Nov 12 17:37:09.899108 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 12 17:37:09.899115 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Nov 12 17:37:09.899122 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 17:37:09.899128 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 12 17:37:09.899135 kernel: Detected PIPT I-cache on CPU2
Nov 12 17:37:09.899149 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 12 17:37:09.899156 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Nov 12 17:37:09.899169 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 17:37:09.899177 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 12 17:37:09.899184 kernel: Detected PIPT I-cache on CPU3
Nov 12 17:37:09.899192 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 12 17:37:09.899199 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Nov 12 17:37:09.899207 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 17:37:09.899214 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 12 17:37:09.899223 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 17:37:09.899230 kernel: SMP: Total of 4 processors activated.
Nov 12 17:37:09.899237 kernel: CPU features: detected: 32-bit EL0 Support
Nov 12 17:37:09.899244 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 12 17:37:09.899252 kernel: CPU features: detected: Common not Private translations
Nov 12 17:37:09.899259 kernel: CPU features: detected: CRC32 instructions
Nov 12 17:37:09.899266 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 12 17:37:09.899274 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 12 17:37:09.899282 kernel: CPU features: detected: LSE atomic instructions
Nov 12 17:37:09.899289 kernel: CPU features: detected: Privileged Access Never
Nov 12 17:37:09.899297 kernel: CPU features: detected: RAS Extension Support
Nov 12 17:37:09.899304 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 12 17:37:09.899311 kernel: CPU: All CPU(s) started at EL1
Nov 12 17:37:09.899319 kernel: alternatives: applying system-wide alternatives
Nov 12 17:37:09.899326 kernel: devtmpfs: initialized
Nov 12 17:37:09.899333 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 17:37:09.899341 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 17:37:09.899349 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 17:37:09.899357 kernel: SMBIOS 3.0.0 present.
Nov 12 17:37:09.899364 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Nov 12 17:37:09.899371 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 17:37:09.899379 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 12 17:37:09.899386 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 12 17:37:09.899394 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 12 17:37:09.899401 kernel: audit: initializing netlink subsys (disabled)
Nov 12 17:37:09.899409 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Nov 12 17:37:09.899418 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 17:37:09.899425 kernel: cpuidle: using governor menu
Nov 12 17:37:09.899432 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 12 17:37:09.899439 kernel: ASID allocator initialised with 32768 entries
Nov 12 17:37:09.899447 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 17:37:09.899454 kernel: Serial: AMBA PL011 UART driver
Nov 12 17:37:09.899461 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 12 17:37:09.899469 kernel: Modules: 0 pages in range for non-PLT usage
Nov 12 17:37:09.899476 kernel: Modules: 509040 pages in range for PLT usage
Nov 12 17:37:09.899485 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 17:37:09.899492 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 17:37:09.899499 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 12 17:37:09.899507 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 12 17:37:09.899514 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 17:37:09.899521 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 17:37:09.899529 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 12 17:37:09.899536 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 12 17:37:09.899543 kernel: ACPI: Added _OSI(Module Device)
Nov 12 17:37:09.899552 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 17:37:09.899559 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 17:37:09.899566 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 17:37:09.899579 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 17:37:09.899586 kernel: ACPI: Interpreter enabled
Nov 12 17:37:09.899594 kernel: ACPI: Using GIC for interrupt routing
Nov 12 17:37:09.899602 kernel: ACPI: MCFG table detected, 1 entries
Nov 12 17:37:09.899609 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 12 17:37:09.899617 kernel: printk: console [ttyAMA0] enabled
Nov 12 17:37:09.899626 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 17:37:09.899756 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 17:37:09.899833 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 12 17:37:09.899901 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 12 17:37:09.899965 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 12 17:37:09.900030 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 12 17:37:09.900046 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 12 17:37:09.900056 kernel: PCI host bridge to bus 0000:00
Nov 12 17:37:09.900134 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 12 17:37:09.900204 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 12 17:37:09.900268 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 12 17:37:09.900326 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 17:37:09.900479 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Nov 12 17:37:09.900605 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 17:37:09.900686 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Nov 12 17:37:09.900753 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Nov 12 17:37:09.900819 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 12 17:37:09.900885 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 12 17:37:09.900953 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Nov 12 17:37:09.901019 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Nov 12 17:37:09.901096 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 12 17:37:09.901172 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 12 17:37:09.901242 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 12 17:37:09.901252 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 12 17:37:09.901260 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 12 17:37:09.901268 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 12 17:37:09.901275 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 12 17:37:09.901282 kernel: iommu: Default domain type: Translated
Nov 12 17:37:09.901290 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 12 17:37:09.901300 kernel: efivars: Registered efivars operations
Nov 12 17:37:09.901307 kernel: vgaarb: loaded
Nov 12 17:37:09.901315 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 12 17:37:09.901322 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 17:37:09.901330 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 17:37:09.901342 kernel: pnp: PnP ACPI init
Nov 12 17:37:09.901439 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 12 17:37:09.901451 kernel: pnp: PnP ACPI: found 1 devices
Nov 12 17:37:09.901460 kernel: NET: Registered PF_INET protocol family
Nov 12 17:37:09.901468 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 17:37:09.901475 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 17:37:09.901483 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 17:37:09.901490 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 17:37:09.901497 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 17:37:09.901505 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 17:37:09.901512 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 17:37:09.901520 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 17:37:09.901528 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 17:37:09.901536 kernel: PCI: CLS 0 bytes, default 64
Nov 12 17:37:09.901543 kernel: kvm [1]: HYP mode not available
Nov 12 17:37:09.901550 kernel: Initialise system trusted keyrings
Nov 12 17:37:09.901558 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 17:37:09.901565 kernel: Key type asymmetric registered
Nov 12 17:37:09.901643 kernel: Asymmetric key parser 'x509' registered
Nov 12 17:37:09.901651 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 12 17:37:09.901658 kernel: io scheduler mq-deadline registered
Nov 12 17:37:09.901668 kernel: io scheduler kyber registered
Nov 12 17:37:09.901675 kernel: io scheduler bfq registered
Nov 12 17:37:09.901683 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 12 17:37:09.901690 kernel: ACPI: button: Power Button [PWRB]
Nov 12 17:37:09.901698 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 12 17:37:09.901781 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 12 17:37:09.901792 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 17:37:09.901799 kernel: thunder_xcv, ver 1.0
Nov 12 17:37:09.901806 kernel: thunder_bgx, ver 1.0
Nov 12 17:37:09.901815 kernel: nicpf, ver 1.0
Nov 12 17:37:09.901823 kernel: nicvf, ver 1.0
Nov 12 17:37:09.901899 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 12 17:37:09.901965 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-11-12T17:37:09 UTC (1731433029)
Nov 12 17:37:09.901975 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 12 17:37:09.901983 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Nov 12 17:37:09.901990 kernel: watchdog: Delayed init of the lockup detector failed: -19
Nov 12 17:37:09.901998 kernel: watchdog: Hard watchdog permanently disabled
Nov 12 17:37:09.902007 kernel: NET: Registered PF_INET6 protocol family
Nov 12 17:37:09.902014 kernel: Segment Routing with IPv6
Nov 12 17:37:09.902022 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 17:37:09.902029 kernel: NET: Registered PF_PACKET protocol family
Nov 12 17:37:09.902036 kernel: Key type dns_resolver registered
Nov 12 17:37:09.902043 kernel: registered taskstats version 1
Nov 12 17:37:09.902051 kernel: Loading compiled-in X.509 certificates
Nov 12 17:37:09.902058 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 277bea35d8d47c9841f307ab609d4271c3622dcb'
Nov 12 17:37:09.902066 kernel: Key type .fscrypt registered
Nov 12 17:37:09.902074 kernel: Key type fscrypt-provisioning registered
Nov 12 17:37:09.902082 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 17:37:09.902089 kernel: ima: Allocated hash algorithm: sha1
Nov 12 17:37:09.902097 kernel: ima: No architecture policies found
Nov 12 17:37:09.902104 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 12 17:37:09.902113 kernel: clk: Disabling unused clocks
Nov 12 17:37:09.902120 kernel: Freeing unused kernel memory: 39360K
Nov 12 17:37:09.902128 kernel: Run /init as init process
Nov 12 17:37:09.902155 kernel: with arguments:
Nov 12 17:37:09.902166 kernel: /init
Nov 12 17:37:09.902173 kernel: with environment:
Nov 12 17:37:09.902181 kernel: HOME=/
Nov 12 17:37:09.902189 kernel: TERM=linux
Nov 12 17:37:09.902196 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 17:37:09.902206 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 17:37:09.902216 systemd[1]: Detected virtualization kvm.
Nov 12 17:37:09.902225 systemd[1]: Detected architecture arm64.
Nov 12 17:37:09.902235 systemd[1]: Running in initrd.
Nov 12 17:37:09.902242 systemd[1]: No hostname configured, using default hostname.
Nov 12 17:37:09.902250 systemd[1]: Hostname set to <localhost>.
Nov 12 17:37:09.902258 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 17:37:09.902266 systemd[1]: Queued start job for default target initrd.target.
Nov 12 17:37:09.902275 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 17:37:09.902283 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 17:37:09.902291 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 17:37:09.902301 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 17:37:09.902309 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 17:37:09.902317 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 17:37:09.902326 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 17:37:09.902335 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 17:37:09.902343 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 17:37:09.902352 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:37:09.902360 systemd[1]: Reached target paths.target - Path Units.
Nov 12 17:37:09.902368 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 17:37:09.902376 systemd[1]: Reached target swap.target - Swaps.
Nov 12 17:37:09.902384 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 17:37:09.902392 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 17:37:09.902400 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 17:37:09.902407 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 17:37:09.902415 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 17:37:09.902425 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 17:37:09.902433 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 17:37:09.902441 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 17:37:09.902449 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 17:37:09.902457 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 17:37:09.902465 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 17:37:09.902473 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 17:37:09.902481 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 17:37:09.902489 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 17:37:09.902499 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 17:37:09.902507 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:37:09.902514 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 17:37:09.902522 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:37:09.902530 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 17:37:09.902558 systemd-journald[237]: Collecting audit messages is disabled.
Nov 12 17:37:09.902584 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 17:37:09.902593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:37:09.902604 systemd-journald[237]: Journal started
Nov 12 17:37:09.902623 systemd-journald[237]: Runtime Journal (/run/log/journal/90af9118f7944675a206855b32db8bd3) is 5.9M, max 47.3M, 41.4M free.
Nov 12 17:37:09.895041 systemd-modules-load[238]: Inserted module 'overlay'
Nov 12 17:37:09.905586 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:37:09.906636 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 17:37:09.909603 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 17:37:09.909667 kernel: Bridge firewalling registered
Nov 12 17:37:09.909635 systemd-modules-load[238]: Inserted module 'br_netfilter'
Nov 12 17:37:09.910742 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 17:37:09.911905 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 17:37:09.925770 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 17:37:09.927695 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 17:37:09.931112 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 17:37:09.932244 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:37:09.937626 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 17:37:09.940737 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 17:37:09.941628 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 17:37:09.943228 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 17:37:09.946634 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 17:37:09.954306 dracut-cmdline[274]: dracut-dracut-053
Nov 12 17:37:09.956791 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e
Nov 12 17:37:09.972323 systemd-resolved[277]: Positive Trust Anchors:
Nov 12 17:37:09.972341 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 17:37:09.972373 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 17:37:09.977216 systemd-resolved[277]: Defaulting to hostname 'linux'.
Nov 12 17:37:09.978138 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 17:37:09.980159 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:37:10.022591 kernel: SCSI subsystem initialized
Nov 12 17:37:10.026610 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 17:37:10.035613 kernel: iscsi: registered transport (tcp)
Nov 12 17:37:10.046794 kernel: iscsi: registered transport (qla4xxx)
Nov 12 17:37:10.046834 kernel: QLogic iSCSI HBA Driver
Nov 12 17:37:10.087353 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 17:37:10.100736 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 17:37:10.118858 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 17:37:10.118905 kernel: device-mapper: uevent: version 1.0.3
Nov 12 17:37:10.119644 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 17:37:10.166629 kernel: raid6: neonx8 gen() 15758 MB/s
Nov 12 17:37:10.183594 kernel: raid6: neonx4 gen() 15659 MB/s
Nov 12 17:37:10.200593 kernel: raid6: neonx2 gen() 13221 MB/s
Nov 12 17:37:10.217589 kernel: raid6: neonx1 gen() 10511 MB/s
Nov 12 17:37:10.234587 kernel: raid6: int64x8 gen() 6952 MB/s
Nov 12 17:37:10.251586 kernel: raid6: int64x4 gen() 7344 MB/s
Nov 12 17:37:10.268590 kernel: raid6: int64x2 gen() 6130 MB/s
Nov 12 17:37:10.285592 kernel: raid6: int64x1 gen() 5056 MB/s
Nov 12 17:37:10.285612 kernel: raid6: using algorithm neonx8 gen() 15758 MB/s
Nov 12 17:37:10.302596 kernel: raid6: .... xor() 12011 MB/s, rmw enabled
Nov 12 17:37:10.302614 kernel: raid6: using neon recovery algorithm
Nov 12 17:37:10.307588 kernel: xor: measuring software checksum speed
Nov 12 17:37:10.307608 kernel: 8regs : 19793 MB/sec
Nov 12 17:37:10.308980 kernel: 32regs : 18523 MB/sec
Nov 12 17:37:10.308994 kernel: arm64_neon : 26945 MB/sec
Nov 12 17:37:10.309003 kernel: xor: using function: arm64_neon (26945 MB/sec)
Nov 12 17:37:10.357596 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 17:37:10.368248 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 17:37:10.378751 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 17:37:10.391212 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Nov 12 17:37:10.394371 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 17:37:10.396784 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 17:37:10.411051 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Nov 12 17:37:10.437368 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 17:37:10.447744 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 17:37:10.486062 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 17:37:10.495983 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 17:37:10.507238 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 17:37:10.508472 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 17:37:10.510514 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:37:10.512075 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 17:37:10.519769 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 17:37:10.529224 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 17:37:10.532587 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 12 17:37:10.541683 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 17:37:10.541802 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 17:37:10.541813 kernel: GPT:9289727 != 19775487
Nov 12 17:37:10.541823 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 17:37:10.541839 kernel: GPT:9289727 != 19775487
Nov 12 17:37:10.541849 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 17:37:10.541861 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 17:37:10.553681 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 17:37:10.553828 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:37:10.558072 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (514)
Nov 12 17:37:10.558095 kernel: BTRFS: device fsid 93a9d474-e751-47b7-a65f-e39ca9abd47a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (524)
Nov 12 17:37:10.558104 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:37:10.558915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 17:37:10.559035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:37:10.560630 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:37:10.573847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:37:10.578209 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 17:37:10.582622 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 17:37:10.584503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:37:10.597344 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 17:37:10.599235 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 17:37:10.604438 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 17:37:10.616737 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 17:37:10.618231 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:37:10.623520 disk-uuid[550]: Primary Header is updated.
Nov 12 17:37:10.623520 disk-uuid[550]: Secondary Entries is updated.
Nov 12 17:37:10.623520 disk-uuid[550]: Secondary Header is updated.
Nov 12 17:37:10.628598 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 17:37:10.637621 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:37:11.641605 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 17:37:11.641821 disk-uuid[551]: The operation has completed successfully.
Nov 12 17:37:11.667090 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 17:37:11.668294 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 17:37:11.683754 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 17:37:11.686476 sh[573]: Success
Nov 12 17:37:11.700697 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 12 17:37:11.741032 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 17:37:11.742529 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 17:37:11.743292 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 17:37:11.756714 kernel: BTRFS info (device dm-0): first mount of filesystem 93a9d474-e751-47b7-a65f-e39ca9abd47a
Nov 12 17:37:11.756752 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:37:11.756763 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 17:37:11.759029 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 17:37:11.759046 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 17:37:11.762380 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 17:37:11.763477 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 17:37:11.772699 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 17:37:11.773935 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 17:37:11.784837 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:37:11.784880 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:37:11.784891 kernel: BTRFS info (device vda6): using free space tree
Nov 12 17:37:11.787588 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 17:37:11.795087 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 17:37:11.796660 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:37:11.801603 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 17:37:11.806724 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 17:37:11.877218 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 17:37:11.887746 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 17:37:11.893032 ignition[666]: Ignition 2.19.0
Nov 12 17:37:11.893042 ignition[666]: Stage: fetch-offline
Nov 12 17:37:11.893072 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:37:11.893080 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:37:11.893243 ignition[666]: parsed url from cmdline: ""
Nov 12 17:37:11.893246 ignition[666]: no config URL provided
Nov 12 17:37:11.893251 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 17:37:11.893258 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Nov 12 17:37:11.893280 ignition[666]: op(1): [started] loading QEMU firmware config module
Nov 12 17:37:11.893284 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 17:37:11.901993 ignition[666]: op(1): [finished] loading QEMU firmware config module
Nov 12 17:37:11.918267 systemd-networkd[762]: lo: Link UP
Nov 12 17:37:11.918276 systemd-networkd[762]: lo: Gained carrier
Nov 12 17:37:11.918941 systemd-networkd[762]: Enumeration completed
Nov 12 17:37:11.919352 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:37:11.919355 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 17:37:11.920182 systemd-networkd[762]: eth0: Link UP
Nov 12 17:37:11.920185 systemd-networkd[762]: eth0: Gained carrier
Nov 12 17:37:11.920192 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:37:11.921076 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 17:37:11.922374 systemd[1]: Reached target network.target - Network.
Nov 12 17:37:11.939609 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 17:37:11.953358 ignition[666]: parsing config with SHA512: 50c215fc3f99b77b7a60f3937d273b890c51e290b7aaf4c91752d08c15259cad128ac99a63431e483b3f0741a88bd792767d6c680c01747a1b39805d66453fb4
Nov 12 17:37:11.958214 unknown[666]: fetched base config from "system"
Nov 12 17:37:11.958224 unknown[666]: fetched user config from "qemu"
Nov 12 17:37:11.959870 ignition[666]: fetch-offline: fetch-offline passed
Nov 12 17:37:11.959966 ignition[666]: Ignition finished successfully
Nov 12 17:37:11.961663 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 17:37:11.963605 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 17:37:11.970379 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 17:37:11.980314 ignition[771]: Ignition 2.19.0
Nov 12 17:37:11.980325 ignition[771]: Stage: kargs
Nov 12 17:37:11.980491 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:37:11.980502 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:37:11.983288 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 17:37:11.981359 ignition[771]: kargs: kargs passed
Nov 12 17:37:11.981403 ignition[771]: Ignition finished successfully
Nov 12 17:37:11.987341 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 17:37:12.000248 ignition[780]: Ignition 2.19.0
Nov 12 17:37:12.000257 ignition[780]: Stage: disks
Nov 12 17:37:12.000417 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:37:12.000427 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:37:12.002682 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 17:37:12.001275 ignition[780]: disks: disks passed
Nov 12 17:37:12.004234 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 17:37:12.001317 ignition[780]: Ignition finished successfully
Nov 12 17:37:12.006212 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 17:37:12.007559 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 17:37:12.009224 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 17:37:12.011528 systemd[1]: Reached target basic.target - Basic System.
Nov 12 17:37:12.019708 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 17:37:12.029975 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 17:37:12.035940 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 17:37:12.045652 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 17:37:12.089449 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 17:37:12.090623 kernel: EXT4-fs (vda9): mounted filesystem b3af0fd7-3c7c-4cdc-9b88-dae3d10ea922 r/w with ordered data mode. Quota mode: none.
Nov 12 17:37:12.090500 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 17:37:12.098642 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 17:37:12.100054 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 17:37:12.100914 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 17:37:12.100955 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 17:37:12.100975 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 17:37:12.106866 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 17:37:12.111920 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799)
Nov 12 17:37:12.111941 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:37:12.111951 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:37:12.111961 kernel: BTRFS info (device vda6): using free space tree
Nov 12 17:37:12.111396 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 17:37:12.115596 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 17:37:12.116259 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 17:37:12.154313 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 17:37:12.158104 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Nov 12 17:37:12.163073 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 17:37:12.166433 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 17:37:12.237100 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 17:37:12.251673 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 17:37:12.252996 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 17:37:12.261592 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:37:12.279281 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 17:37:12.281332 ignition[912]: INFO : Ignition 2.19.0
Nov 12 17:37:12.281332 ignition[912]: INFO : Stage: mount
Nov 12 17:37:12.281332 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:37:12.281332 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:37:12.281332 ignition[912]: INFO : mount: mount passed
Nov 12 17:37:12.281332 ignition[912]: INFO : Ignition finished successfully
Nov 12 17:37:12.282221 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 17:37:12.292688 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 17:37:12.754885 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 17:37:12.769762 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 17:37:12.779962 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
Nov 12 17:37:12.781766 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:37:12.781796 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:37:12.781807 kernel: BTRFS info (device vda6): using free space tree
Nov 12 17:37:12.786613 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 17:37:12.787728 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 17:37:12.812202 ignition[943]: INFO : Ignition 2.19.0
Nov 12 17:37:12.812202 ignition[943]: INFO : Stage: files
Nov 12 17:37:12.813446 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:37:12.813446 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:37:12.813446 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 17:37:12.816115 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 17:37:12.816115 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 17:37:12.816115 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 17:37:12.816115 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 17:37:12.816115 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 17:37:12.815820 unknown[943]: wrote ssh authorized keys file for user: core
Nov 12 17:37:12.821705 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 17:37:12.821705 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Nov 12 17:37:12.891607 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 17:37:13.360244 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 17:37:13.363651 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Nov 12 17:37:13.577760 systemd-networkd[762]: eth0: Gained IPv6LL
Nov 12 17:37:13.679297 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 12 17:37:13.947329 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 17:37:13.947329 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 12 17:37:13.950658 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 17:37:13.950658 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 17:37:13.950658 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 12 17:37:13.950658 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 12 17:37:13.950658 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 17:37:13.950658 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 17:37:13.950658 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 12 17:37:13.950658 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 12 17:37:13.981192 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 17:37:13.985304 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 17:37:13.987560 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 12 17:37:13.987560 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 17:37:13.987560 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 17:37:13.987560 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 17:37:13.987560 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 17:37:13.987560 ignition[943]: INFO : files: files passed
Nov 12 17:37:13.987560 ignition[943]: INFO : Ignition finished successfully
Nov 12 17:37:13.988372 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 17:37:14.006774 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 17:37:14.009246 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 17:37:14.017060 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 17:37:14.017158 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 17:37:14.021930 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 12 17:37:14.026108 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:37:14.026108 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:37:14.029114 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:37:14.030662 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 17:37:14.031725 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 17:37:14.043767 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 17:37:14.067531 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 17:37:14.067646 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 17:37:14.069524 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 17:37:14.071283 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 17:37:14.072903 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 17:37:14.073683 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 17:37:14.090031 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 17:37:14.092255 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 17:37:14.104189 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:37:14.105266 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:37:14.107114 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 17:37:14.108759 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 17:37:14.108877 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 17:37:14.111089 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 17:37:14.111974 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 17:37:14.113552 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 17:37:14.115087 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 17:37:14.116746 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 17:37:14.118472 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 17:37:14.120151 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 17:37:14.122018 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 17:37:14.123489 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 17:37:14.125378 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 17:37:14.126761 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 17:37:14.126878 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 17:37:14.129068 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:37:14.130058 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 17:37:14.131740 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 17:37:14.132669 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 17:37:14.133654 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 17:37:14.133770 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 17:37:14.136402 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 17:37:14.136512 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 17:37:14.138285 systemd[1]: Stopped target paths.target - Path Units. Nov 12 17:37:14.139975 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 17:37:14.140091 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 17:37:14.141714 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 17:37:14.143172 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 17:37:14.144647 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 17:37:14.144732 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 17:37:14.146474 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 17:37:14.146546 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 17:37:14.147988 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 17:37:14.148090 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 17:37:14.149751 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 17:37:14.149846 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 17:37:14.167747 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 17:37:14.169181 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 17:37:14.169894 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 17:37:14.170002 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 17:37:14.171589 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 17:37:14.171686 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 17:37:14.176522 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 17:37:14.176629 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 17:37:14.185529 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 17:37:14.187544 ignition[999]: INFO : Ignition 2.19.0 Nov 12 17:37:14.187544 ignition[999]: INFO : Stage: umount Nov 12 17:37:14.189764 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 17:37:14.189764 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 17:37:14.189764 ignition[999]: INFO : umount: umount passed Nov 12 17:37:14.189764 ignition[999]: INFO : Ignition finished successfully Nov 12 17:37:14.190899 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 17:37:14.190994 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 17:37:14.192370 systemd[1]: Stopped target network.target - Network. Nov 12 17:37:14.193556 systemd[1]: ignition-disks.service: Deactivated successfully. 
Nov 12 17:37:14.193618 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 17:37:14.195101 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 17:37:14.195152 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 17:37:14.196533 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 17:37:14.196568 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 17:37:14.198800 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 17:37:14.198839 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 17:37:14.201394 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 17:37:14.204416 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 17:37:14.205836 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 17:37:14.205928 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 17:37:14.207632 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 17:37:14.207719 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 17:37:14.210852 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 17:37:14.210966 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 17:37:14.213330 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 17:37:14.213384 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 17:37:14.214666 systemd-networkd[762]: eth0: DHCPv6 lease lost Nov 12 17:37:14.218983 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 17:37:14.219086 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 17:37:14.222621 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 17:37:14.222665 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 17:37:14.230669 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 17:37:14.231407 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 17:37:14.231470 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 17:37:14.233136 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 17:37:14.233180 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 17:37:14.234924 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 17:37:14.234970 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 17:37:14.236650 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 17:37:14.247718 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 17:37:14.247819 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 17:37:14.256298 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 17:37:14.256436 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 17:37:14.258315 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 17:37:14.258354 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 17:37:14.260071 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 17:37:14.260106 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
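The umount stage above closes out Ignition's sequence for this boot (fetch-offline, kargs, disks, setup, files and now umount all appear in the teardown messages around it), interleaved with systemd's initrd shutdown. To read Ignition's entries on their own, filter the journal by the ignition[...] syslog identifier:

    journalctl -b -t ignition -o short-precise   # only the ignition[...] lines from this boot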
Nov 12 17:37:14.261595 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 17:37:14.261638 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 17:37:14.264088 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 17:37:14.264147 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 17:37:14.265916 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 17:37:14.265961 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 17:37:14.282780 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 17:37:14.283547 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 17:37:14.283625 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 17:37:14.285463 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 17:37:14.285506 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 17:37:14.287262 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 17:37:14.287302 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 17:37:14.289197 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 17:37:14.289235 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:37:14.291280 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 17:37:14.291384 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 17:37:14.294108 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 17:37:14.296299 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 17:37:14.306692 systemd[1]: Switching root. Nov 12 17:37:14.335496 systemd-journald[237]: Journal stopped Nov 12 17:37:15.073028 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Nov 12 17:37:15.073089 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 17:37:15.073102 kernel: SELinux: policy capability open_perms=1 Nov 12 17:37:15.073112 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 17:37:15.073122 kernel: SELinux: policy capability always_check_network=0 Nov 12 17:37:15.073145 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 17:37:15.073156 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 17:37:15.073165 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 17:37:15.073178 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 17:37:15.073188 kernel: audit: type=1403 audit(1731433034.492:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 17:37:15.073199 systemd[1]: Successfully loaded SELinux policy in 32.926ms. Nov 12 17:37:15.073215 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.344ms. Nov 12 17:37:15.073231 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 17:37:15.073242 systemd[1]: Detected virtualization kvm. Nov 12 17:37:15.073252 systemd[1]: Detected architecture arm64. 
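After the switch to the real root, PID 1 loads the SELinux policy (32.9 ms here), relabels the early mounts, and prints its compile-time feature string. Some quick, generic cross-checks for these messages (standard systemd/SELinux tooling, nothing Flatcar-specific):

    systemctl --version                                  # the +PAM +AUDIT +SELINUX ... feature flags
    getenforce                                           # SELinux mode once the policy is loaded, if the tool is present
    journalctl -b -g 'Switching root' -o short-precise   # -g greps the current boot's journal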
Nov 12 17:37:15.073264 systemd[1]: Detected first boot. Nov 12 17:37:15.073275 systemd[1]: Initializing machine ID from VM UUID. Nov 12 17:37:15.073288 zram_generator::config[1045]: No configuration found. Nov 12 17:37:15.073299 systemd[1]: Populated /etc with preset unit settings. Nov 12 17:37:15.073310 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 17:37:15.073320 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 12 17:37:15.073331 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 17:37:15.073342 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 17:37:15.073352 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 17:37:15.073363 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 17:37:15.073375 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 17:37:15.073386 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 17:37:15.073397 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 17:37:15.073407 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 17:37:15.073422 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 17:37:15.073433 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 17:37:15.073443 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 17:37:15.073454 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 17:37:15.073464 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 17:37:15.073476 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 17:37:15.073487 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 17:37:15.073499 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 12 17:37:15.073510 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 17:37:15.073520 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 17:37:15.073530 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 17:37:15.073541 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 17:37:15.073553 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 17:37:15.073564 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 17:37:15.073586 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 17:37:15.073598 systemd[1]: Reached target slices.target - Slice Units. Nov 12 17:37:15.073609 systemd[1]: Reached target swap.target - Swaps. Nov 12 17:37:15.073620 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 17:37:15.073630 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 17:37:15.073641 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 17:37:15.073651 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
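"Detected first boot" plus "Initializing machine ID from VM UUID" means /etc/machine-id was empty, so systemd seeded it from the hypervisor-provided UUID and then applied unit presets ("Populated /etc with preset unit settings"). Both inputs are inspectable afterwards:

    cat /etc/machine-id                                  # the ID that was just committed
    cat /usr/lib/systemd/system-preset/*.preset | head   # the preset policy applied on first boot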
Nov 12 17:37:15.073662 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 17:37:15.073674 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 17:37:15.073685 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 17:37:15.073695 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 17:37:15.073705 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 17:37:15.073716 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 17:37:15.073726 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 17:37:15.073737 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 17:37:15.073748 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 17:37:15.073761 systemd[1]: Reached target machines.target - Containers. Nov 12 17:37:15.073772 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 17:37:15.073783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:37:15.073794 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 17:37:15.073805 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 17:37:15.073829 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 17:37:15.073840 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 17:37:15.073851 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 17:37:15.073861 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 17:37:15.073874 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 17:37:15.073885 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 17:37:15.073895 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 17:37:15.073906 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 17:37:15.073917 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 17:37:15.073927 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 17:37:15.073938 kernel: fuse: init (API version 7.39) Nov 12 17:37:15.073948 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 17:37:15.073958 kernel: loop: module loaded Nov 12 17:37:15.073970 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 17:37:15.073982 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 17:37:15.073993 kernel: ACPI: bus type drm_connector registered Nov 12 17:37:15.074003 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 17:37:15.074013 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 17:37:15.074024 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 17:37:15.074034 systemd[1]: Stopped verity-setup.service. Nov 12 17:37:15.074045 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
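The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units starting above are instances of systemd's modprobe@.service template: the instance name after the "@" is the module name handed to modprobe, which is why the fuse, loop and drm_connector kernel messages land in between. Equivalent by hand:

    systemctl start modprobe@fuse.service   # roughly: modprobe fuse
    lsmod | grep -E '^(fuse|loop|dm_mod)'   # confirm the modules are loaded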
Nov 12 17:37:15.074056 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 17:37:15.074068 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 17:37:15.074079 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 17:37:15.074110 systemd-journald[1109]: Collecting audit messages is disabled. Nov 12 17:37:15.074171 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 17:37:15.074185 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 17:37:15.074198 systemd-journald[1109]: Journal started Nov 12 17:37:15.074221 systemd-journald[1109]: Runtime Journal (/run/log/journal/90af9118f7944675a206855b32db8bd3) is 5.9M, max 47.3M, 41.4M free. Nov 12 17:37:14.870827 systemd[1]: Queued start job for default target multi-user.target. Nov 12 17:37:14.888464 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 17:37:14.888846 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 17:37:15.077599 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 17:37:15.079648 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 17:37:15.080830 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 17:37:15.081985 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 17:37:15.082139 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 17:37:15.083282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 17:37:15.083405 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 17:37:15.084540 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 17:37:15.084681 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 17:37:15.085739 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 17:37:15.085868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 17:37:15.087170 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 17:37:15.087292 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 17:37:15.088420 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 17:37:15.088543 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 17:37:15.090884 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 17:37:15.092504 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 17:37:15.094310 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 17:37:15.108714 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 17:37:15.117723 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 17:37:15.119868 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 17:37:15.121008 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 17:37:15.121036 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 17:37:15.123106 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 17:37:15.125343 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Nov 12 17:37:15.127625 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 17:37:15.128693 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:37:15.130254 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 17:37:15.135842 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 17:37:15.137942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 17:37:15.140144 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 17:37:15.141167 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 17:37:15.142143 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 17:37:15.144086 systemd-journald[1109]: Time spent on flushing to /var/log/journal/90af9118f7944675a206855b32db8bd3 is 25.302ms for 855 entries. Nov 12 17:37:15.144086 systemd-journald[1109]: System Journal (/var/log/journal/90af9118f7944675a206855b32db8bd3) is 8.0M, max 195.6M, 187.6M free. Nov 12 17:37:15.182629 systemd-journald[1109]: Received client request to flush runtime journal. Nov 12 17:37:15.145648 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 17:37:15.150796 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 17:37:15.153255 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 17:37:15.154729 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 17:37:15.157859 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 17:37:15.159716 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 17:37:15.167776 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 17:37:15.176009 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 17:37:15.177292 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 17:37:15.180749 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 17:37:15.185613 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 17:37:15.186903 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 17:37:15.189791 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Nov 12 17:37:15.189809 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Nov 12 17:37:15.190763 kernel: loop0: detected capacity change from 0 to 114432 Nov 12 17:37:15.190674 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 12 17:37:15.203023 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 17:37:15.208625 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 17:37:15.207794 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 17:37:15.222301 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
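The journal numbers above record the flush from the volatile runtime journal in /run (5.9M used) to persistent storage under /var/log/journal (25.3 ms for 855 entries). The same flush, and a footprint check, can be requested manually:

    journalctl --flush        # what systemd-journal-flush.service just triggered
    journalctl --disk-usage   # combined size of active and archived journals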
Nov 12 17:37:15.222983 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 17:37:15.233426 kernel: loop1: detected capacity change from 0 to 114328 Nov 12 17:37:15.237607 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 17:37:15.244902 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 17:37:15.257154 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Nov 12 17:37:15.257169 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Nov 12 17:37:15.261157 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 17:37:15.286601 kernel: loop2: detected capacity change from 0 to 194512 Nov 12 17:37:15.339596 kernel: loop3: detected capacity change from 0 to 114432 Nov 12 17:37:15.344596 kernel: loop4: detected capacity change from 0 to 114328 Nov 12 17:37:15.348592 kernel: loop5: detected capacity change from 0 to 194512 Nov 12 17:37:15.356020 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 17:37:15.356415 (sd-merge)[1185]: Merged extensions into '/usr'. Nov 12 17:37:15.360500 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 17:37:15.360515 systemd[1]: Reloading... Nov 12 17:37:15.411641 zram_generator::config[1211]: No configuration found. Nov 12 17:37:15.467710 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 17:37:15.517845 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:37:15.553178 systemd[1]: Reloading finished in 192 ms. Nov 12 17:37:15.579939 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 17:37:15.581360 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 17:37:15.589752 systemd[1]: Starting ensure-sysext.service... Nov 12 17:37:15.591423 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 17:37:15.602561 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Nov 12 17:37:15.602589 systemd[1]: Reloading... Nov 12 17:37:15.614091 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 17:37:15.614373 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 17:37:15.615041 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 17:37:15.615292 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Nov 12 17:37:15.615350 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Nov 12 17:37:15.617819 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 17:37:15.617829 systemd-tmpfiles[1246]: Skipping /boot Nov 12 17:37:15.624799 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 17:37:15.624812 systemd-tmpfiles[1246]: Skipping /boot Nov 12 17:37:15.645602 zram_generator::config[1270]: No configuration found. 
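The (sd-merge) lines are systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr, which is why the loop devices above report capacity changes and systemd reloads its unit set immediately afterwards. To inspect or redo the merge on a running system:

    systemd-sysext status    # which hierarchies are extended, and by which images
    systemd-sysext refresh   # unmerge and re-merge after adding or removing images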
Nov 12 17:37:15.731541 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:37:15.767893 systemd[1]: Reloading finished in 165 ms. Nov 12 17:37:15.789652 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 17:37:15.798955 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 17:37:15.806236 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 17:37:15.808675 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 17:37:15.810712 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 17:37:15.815853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 17:37:15.820936 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 17:37:15.824645 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 17:37:15.829533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:37:15.830783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 17:37:15.834343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 17:37:15.837990 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 17:37:15.839619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:37:15.844821 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 17:37:15.846266 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 17:37:15.846548 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 17:37:15.848703 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 17:37:15.850900 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 17:37:15.853231 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 17:37:15.854718 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 17:37:15.856625 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 17:37:15.863341 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Nov 12 17:37:15.865892 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:37:15.874651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 17:37:15.881799 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 17:37:15.886980 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 17:37:15.888110 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:37:15.891300 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 17:37:15.896027 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
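The "Duplicate line for path ..., ignoring" warnings from systemd-tmpfiles are benign: two tmpfiles.d fragments declare the same path and the first declaration wins. To see the colliding declarations (for /var/log/journal, say):

    systemd-tmpfiles --cat-config | grep -n '/var/log/journal'   # merged config with source markers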
Nov 12 17:37:15.900483 augenrules[1342]: No rules Nov 12 17:37:15.900754 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 17:37:15.911140 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 17:37:15.913062 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 17:37:15.916086 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 17:37:15.916561 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 17:37:15.919222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 17:37:15.919377 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 17:37:15.921702 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 17:37:15.921843 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 17:37:15.923795 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 17:37:15.925953 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 17:37:15.942720 systemd[1]: Finished ensure-sysext.service. Nov 12 17:37:15.948295 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:37:15.948597 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1354) Nov 12 17:37:15.954908 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 17:37:15.956848 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 17:37:15.959981 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 17:37:15.961589 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1354) Nov 12 17:37:15.962116 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 17:37:15.964386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:37:15.967770 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 17:37:15.974828 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 17:37:15.976005 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 17:37:15.976526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 17:37:15.976702 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 17:37:15.978200 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 17:37:15.978346 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 17:37:15.979803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 17:37:15.979944 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 17:37:15.981447 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 17:37:15.981617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 17:37:15.987387 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Nov 12 17:37:15.988530 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 17:37:15.988618 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 17:37:15.995851 systemd-resolved[1313]: Positive Trust Anchors: Nov 12 17:37:15.995867 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 17:37:15.995900 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 17:37:16.003927 systemd-resolved[1313]: Defaulting to hostname 'linux'. Nov 12 17:37:16.010074 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 17:37:16.011055 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 17:37:16.041604 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1360) Nov 12 17:37:16.056404 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 17:37:16.058081 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 17:37:16.070066 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 17:37:16.078213 systemd-networkd[1384]: lo: Link UP Nov 12 17:37:16.078226 systemd-networkd[1384]: lo: Gained carrier Nov 12 17:37:16.079076 systemd-networkd[1384]: Enumeration completed Nov 12 17:37:16.079118 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 17:37:16.079802 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 17:37:16.079805 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 17:37:16.080147 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 17:37:16.080606 systemd-networkd[1384]: eth0: Link UP Nov 12 17:37:16.080614 systemd-networkd[1384]: eth0: Gained carrier Nov 12 17:37:16.080628 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 17:37:16.081357 systemd[1]: Reached target network.target - Network. Nov 12 17:37:16.084851 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 17:37:16.103375 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 17:37:16.106263 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 17:37:16.106654 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 17:37:16.107363 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. 
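By this point systemd-resolved has loaded the root DNSSEC trust anchor and fallen back to the hostname 'linux', and systemd-networkd has brought eth0 up with a DHCPv4 lease (10.0.0.11/16 via 10.0.0.1). Both states are queryable once the system is up:

    networkctl status eth0   # lease, gateway, carrier, and the matching .network file
    resolvectl status        # per-link DNS servers and the trust anchors listed above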
Nov 12 17:37:15.610368 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 17:37:15.618279 systemd-journald[1109]: Time jumped backwards, rotating. Nov 12 17:37:15.610419 systemd-timesyncd[1385]: Initial clock synchronization to Tue 2024-11-12 17:37:15.610273 UTC. Nov 12 17:37:15.615502 systemd-resolved[1313]: Clock change detected. Flushing caches. Nov 12 17:37:15.630265 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 17:37:15.644221 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 17:37:15.662723 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:37:15.687627 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 17:37:15.718595 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 17:37:15.719829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 17:37:15.720770 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 17:37:15.721665 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 17:37:15.722623 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 17:37:15.723730 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 17:37:15.724677 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 17:37:15.725785 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 17:37:15.726719 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 17:37:15.726755 systemd[1]: Reached target paths.target - Path Units. Nov 12 17:37:15.727439 systemd[1]: Reached target timers.target - Timer Units. Nov 12 17:37:15.729223 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 17:37:15.731448 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 17:37:15.740027 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 17:37:15.742119 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 17:37:15.743463 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 17:37:15.744390 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 17:37:15.745112 systemd[1]: Reached target basic.target - Basic System. Nov 12 17:37:15.745811 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 17:37:15.745842 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 17:37:15.746851 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 17:37:15.748753 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 17:37:15.751132 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 17:37:15.753122 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 17:37:15.756767 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
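Note the timestamps stepping backwards from 17:37:16.10x to 17:37:15.61x at the top of this stretch: systemd-timesyncd reached 10.0.0.1:123 and stepped the clock, so journald rotates and resolved flushes its caches in response; the log itself is in order. The sync state can be checked with:

    timedatectl timesync-status      # server, poll interval, last measured offset
    timedatectl show-timesync --all  # the same data in machine-readable form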
Nov 12 17:37:15.758248 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 17:37:15.762240 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 17:37:15.763234 jq[1418]: false Nov 12 17:37:15.765167 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 17:37:15.770082 dbus-daemon[1417]: [system] SELinux support is enabled Nov 12 17:37:15.771180 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 17:37:15.773889 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 17:37:15.777751 extend-filesystems[1419]: Found loop3 Nov 12 17:37:15.782202 extend-filesystems[1419]: Found loop4 Nov 12 17:37:15.782202 extend-filesystems[1419]: Found loop5 Nov 12 17:37:15.782202 extend-filesystems[1419]: Found vda Nov 12 17:37:15.782202 extend-filesystems[1419]: Found vda1 Nov 12 17:37:15.782202 extend-filesystems[1419]: Found vda2 Nov 12 17:37:15.782202 extend-filesystems[1419]: Found vda3 Nov 12 17:37:15.782202 extend-filesystems[1419]: Found usr Nov 12 17:37:15.782202 extend-filesystems[1419]: Found vda4 Nov 12 17:37:15.782202 extend-filesystems[1419]: Found vda6 Nov 12 17:37:15.782202 extend-filesystems[1419]: Found vda7 Nov 12 17:37:15.782202 extend-filesystems[1419]: Found vda9 Nov 12 17:37:15.782202 extend-filesystems[1419]: Checking size of /dev/vda9 Nov 12 17:37:15.778473 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 17:37:15.785961 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 17:37:15.786462 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 17:37:15.788188 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 17:37:15.792154 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 17:37:15.811192 jq[1434]: true Nov 12 17:37:15.796215 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 17:37:15.801037 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 17:37:15.804170 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 17:37:15.804497 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 17:37:15.804823 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 17:37:15.804969 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 17:37:15.807629 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 17:37:15.808024 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 17:37:15.820934 extend-filesystems[1419]: Resized partition /dev/vda9 Nov 12 17:37:15.822415 (ntainerd)[1442]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 17:37:15.823463 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
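extend-filesystems has enumerated the disk's partitions above and, as the next lines show, grows the root filesystem on /dev/vda9 online. The manual equivalent is a single resize2fs call (a sketch; the service wraps this and resize2fs only acts when there is room to grow):

    resize2fs /dev/vda9   # ext4 grows online, while mounted, to fill the partition
    # result below: 1,864,699 blocks x 4 KiB ≈ 7.1 GiB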
Nov 12 17:37:15.823591 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 17:37:15.828479 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 17:37:15.828508 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 17:37:15.836875 jq[1441]: true Nov 12 17:37:15.854128 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1352) Nov 12 17:37:15.854151 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 17:37:15.854197 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024) Nov 12 17:37:15.856120 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (Power Button) Nov 12 17:37:15.859492 tar[1439]: linux-arm64/helm Nov 12 17:37:15.859153 systemd-logind[1426]: New seat seat0. Nov 12 17:37:15.860272 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 17:37:15.890081 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 17:37:15.901804 update_engine[1433]: I20241112 17:37:15.890947 1433 main.cc:92] Flatcar Update Engine starting Nov 12 17:37:15.901804 update_engine[1433]: I20241112 17:37:15.893786 1433 update_check_scheduler.cc:74] Next update check in 9m38s Nov 12 17:37:15.894019 systemd[1]: Started update-engine.service - Update Engine. Nov 12 17:37:15.900259 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 17:37:15.907104 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 17:37:15.907104 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 17:37:15.907104 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 17:37:15.915564 extend-filesystems[1419]: Resized filesystem in /dev/vda9 Nov 12 17:37:15.918196 bash[1470]: Updated "/home/core/.ssh/authorized_keys" Nov 12 17:37:15.911816 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 17:37:15.911991 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 17:37:15.914319 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 17:37:15.917767 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 17:37:15.986688 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 17:37:16.092744 containerd[1442]: time="2024-11-12T17:37:16.092310710Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 17:37:16.114054 sshd_keygen[1447]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 17:37:16.123007 containerd[1442]: time="2024-11-12T17:37:16.120694670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:37:16.123007 containerd[1442]: time="2024-11-12T17:37:16.122118590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:37:16.123007 containerd[1442]: time="2024-11-12T17:37:16.122149190Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 17:37:16.123007 containerd[1442]: time="2024-11-12T17:37:16.122165350Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 17:37:16.123007 containerd[1442]: time="2024-11-12T17:37:16.122313950Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 17:37:16.123007 containerd[1442]: time="2024-11-12T17:37:16.122331110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 17:37:16.123007 containerd[1442]: time="2024-11-12T17:37:16.122386470Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:37:16.123007 containerd[1442]: time="2024-11-12T17:37:16.122398430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:37:16.123007 containerd[1442]: time="2024-11-12T17:37:16.122561310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:37:16.123007 containerd[1442]: time="2024-11-12T17:37:16.122579150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 17:37:16.123007 containerd[1442]: time="2024-11-12T17:37:16.122592350Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:37:16.123241 containerd[1442]: time="2024-11-12T17:37:16.122603350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 17:37:16.123241 containerd[1442]: time="2024-11-12T17:37:16.122676150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:37:16.123241 containerd[1442]: time="2024-11-12T17:37:16.122857950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:37:16.123241 containerd[1442]: time="2024-11-12T17:37:16.122951870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:37:16.123241 containerd[1442]: time="2024-11-12T17:37:16.122965150Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 17:37:16.123241 containerd[1442]: time="2024-11-12T17:37:16.123059110Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Nov 12 17:37:16.123241 containerd[1442]: time="2024-11-12T17:37:16.123110030Z" level=info msg="metadata content store policy set" policy=shared Nov 12 17:37:16.126692 containerd[1442]: time="2024-11-12T17:37:16.126645470Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 17:37:16.126755 containerd[1442]: time="2024-11-12T17:37:16.126707790Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 17:37:16.126755 containerd[1442]: time="2024-11-12T17:37:16.126726950Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 17:37:16.126811 containerd[1442]: time="2024-11-12T17:37:16.126753150Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 17:37:16.126811 containerd[1442]: time="2024-11-12T17:37:16.126769270Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 17:37:16.126931 containerd[1442]: time="2024-11-12T17:37:16.126898630Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 17:37:16.127164 containerd[1442]: time="2024-11-12T17:37:16.127138390Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 17:37:16.127320 containerd[1442]: time="2024-11-12T17:37:16.127241870Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 17:37:16.127320 containerd[1442]: time="2024-11-12T17:37:16.127262990Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 17:37:16.127320 containerd[1442]: time="2024-11-12T17:37:16.127279510Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 17:37:16.127320 containerd[1442]: time="2024-11-12T17:37:16.127293270Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 17:37:16.127320 containerd[1442]: time="2024-11-12T17:37:16.127306630Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 17:37:16.127320 containerd[1442]: time="2024-11-12T17:37:16.127319790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 17:37:16.127443 containerd[1442]: time="2024-11-12T17:37:16.127334190Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 17:37:16.127443 containerd[1442]: time="2024-11-12T17:37:16.127358670Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 17:37:16.127443 containerd[1442]: time="2024-11-12T17:37:16.127372390Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 17:37:16.127443 containerd[1442]: time="2024-11-12T17:37:16.127384630Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 17:37:16.127443 containerd[1442]: time="2024-11-12T17:37:16.127397030Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Nov 12 17:37:16.127443 containerd[1442]: time="2024-11-12T17:37:16.127417430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127443 containerd[1442]: time="2024-11-12T17:37:16.127431910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127443 containerd[1442]: time="2024-11-12T17:37:16.127444070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127595 containerd[1442]: time="2024-11-12T17:37:16.127456550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127595 containerd[1442]: time="2024-11-12T17:37:16.127468830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127595 containerd[1442]: time="2024-11-12T17:37:16.127481710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127595 containerd[1442]: time="2024-11-12T17:37:16.127493990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127595 containerd[1442]: time="2024-11-12T17:37:16.127508150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127595 containerd[1442]: time="2024-11-12T17:37:16.127521950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127595 containerd[1442]: time="2024-11-12T17:37:16.127549470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127595 containerd[1442]: time="2024-11-12T17:37:16.127570030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127595 containerd[1442]: time="2024-11-12T17:37:16.127582630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127595 containerd[1442]: time="2024-11-12T17:37:16.127596230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127787 containerd[1442]: time="2024-11-12T17:37:16.127612430Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 17:37:16.127787 containerd[1442]: time="2024-11-12T17:37:16.127633870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127787 containerd[1442]: time="2024-11-12T17:37:16.127646630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.127787 containerd[1442]: time="2024-11-12T17:37:16.127657190Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 17:37:16.128422 containerd[1442]: time="2024-11-12T17:37:16.128386110Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 17:37:16.128471 containerd[1442]: time="2024-11-12T17:37:16.128422630Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 17:37:16.128471 containerd[1442]: time="2024-11-12T17:37:16.128434590Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 17:37:16.128471 containerd[1442]: time="2024-11-12T17:37:16.128447350Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 17:37:16.128471 containerd[1442]: time="2024-11-12T17:37:16.128457950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.128558 containerd[1442]: time="2024-11-12T17:37:16.128491630Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 17:37:16.128558 containerd[1442]: time="2024-11-12T17:37:16.128503150Z" level=info msg="NRI interface is disabled by configuration." Nov 12 17:37:16.128558 containerd[1442]: time="2024-11-12T17:37:16.128513830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 17:37:16.129056 containerd[1442]: time="2024-11-12T17:37:16.128994350Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 17:37:16.129184 containerd[1442]: time="2024-11-12T17:37:16.129066990Z" level=info msg="Connect containerd service" Nov 12 17:37:16.129184 containerd[1442]: time="2024-11-12T17:37:16.129103590Z" level=info msg="using legacy CRI server" Nov 12 17:37:16.129184 containerd[1442]: time="2024-11-12T17:37:16.129119630Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 17:37:16.129248 containerd[1442]: time="2024-11-12T17:37:16.129218950Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 17:37:16.130008 containerd[1442]: time="2024-11-12T17:37:16.129956670Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 17:37:16.130386 containerd[1442]: time="2024-11-12T17:37:16.130340870Z" level=info msg="Start subscribing containerd event" Nov 12 17:37:16.130420 containerd[1442]: time="2024-11-12T17:37:16.130404230Z" level=info msg="Start recovering state" Nov 12 17:37:16.130492 containerd[1442]: time="2024-11-12T17:37:16.130472270Z" level=info msg="Start event monitor" Nov 12 17:37:16.130492 containerd[1442]: time="2024-11-12T17:37:16.130489590Z" level=info msg="Start snapshots syncer" Nov 12 17:37:16.130563 containerd[1442]: time="2024-11-12T17:37:16.130500510Z" level=info msg="Start cni network conf syncer for default" Nov 12 17:37:16.130563 containerd[1442]: time="2024-11-12T17:37:16.130508470Z" level=info msg="Start streaming server" Nov 12 17:37:16.130701 containerd[1442]: time="2024-11-12T17:37:16.130659670Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 17:37:16.130729 containerd[1442]: time="2024-11-12T17:37:16.130712430Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 17:37:16.130864 containerd[1442]: time="2024-11-12T17:37:16.130765350Z" level=info msg="containerd successfully booted in 0.041671s" Nov 12 17:37:16.130904 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 17:37:16.141754 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 17:37:16.150246 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 17:37:16.155781 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 17:37:16.157034 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 17:37:16.174258 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 17:37:16.185195 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 17:37:16.188323 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 17:37:16.190638 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 12 17:37:16.192095 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 17:37:16.241227 tar[1439]: linux-arm64/LICENSE Nov 12 17:37:16.241343 tar[1439]: linux-arm64/README.md Nov 12 17:37:16.259652 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
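
Annotation: the "failed to load cni during init" error above is containerd's CRI plugin finding /etc/cni/net.d empty (that path and NetworkPluginMaxConfNum:1 appear in the config dump above). A minimal sketch of dropping in a bridge conflist follows; the network name, file name, and pod subnet are illustrative assumptions, and on a real cluster a CNI add-on (flannel, Calico, etc.) installs its own config instead.

    # Sketch: write a minimal CNI bridge conflist so the CRI plugin can
    # initialize pod networking. All values here are illustrative
    # assumptions, not recovered from this host. Requires root.
    import json, pathlib

    conf = {
        "cniVersion": "0.4.0",
        "name": "demo-net",  # hypothetical network name
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.244.0.0/24",  # assumed pod CIDR
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-demo.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))
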
Nov 12 17:37:17.560241 systemd-networkd[1384]: eth0: Gained IPv6LL Nov 12 17:37:17.563514 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 17:37:17.565807 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 17:37:17.580271 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 17:37:17.582620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:37:17.584761 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 17:37:17.604053 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 17:37:17.606230 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 17:37:17.608884 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 17:37:17.620241 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 17:37:18.067830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:37:18.069212 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 17:37:18.070197 systemd[1]: Startup finished in 544ms (kernel) + 4.774s (initrd) + 4.118s (userspace) = 9.437s. Nov 12 17:37:18.071879 (kubelet)[1530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:37:18.616335 kubelet[1530]: E1112 17:37:18.616219 1530 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:37:18.619035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:37:18.619182 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:37:21.818638 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 17:37:21.819676 systemd[1]: Started sshd@0-10.0.0.11:22-10.0.0.1:41868.service - OpenSSH per-connection server daemon (10.0.0.1:41868). Nov 12 17:37:21.872048 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 41868 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:37:21.873705 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:37:21.881763 systemd-logind[1426]: New session 1 of user core. Nov 12 17:37:21.882702 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 17:37:21.892208 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 17:37:21.907335 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 17:37:21.910227 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 17:37:21.916417 (systemd)[1549]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 17:37:21.985925 systemd[1549]: Queued start job for default target default.target. Nov 12 17:37:21.998059 systemd[1549]: Created slice app.slice - User Application Slice. Nov 12 17:37:21.998105 systemd[1549]: Reached target paths.target - Paths. Nov 12 17:37:21.998117 systemd[1549]: Reached target timers.target - Timers. Nov 12 17:37:21.999539 systemd[1549]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Nov 12 17:37:22.008851 systemd[1549]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 17:37:22.008918 systemd[1549]: Reached target sockets.target - Sockets. Nov 12 17:37:22.008930 systemd[1549]: Reached target basic.target - Basic System. Nov 12 17:37:22.008966 systemd[1549]: Reached target default.target - Main User Target. Nov 12 17:37:22.009084 systemd[1549]: Startup finished in 87ms. Nov 12 17:37:22.009259 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 17:37:22.010494 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 17:37:22.069386 systemd[1]: Started sshd@1-10.0.0.11:22-10.0.0.1:41870.service - OpenSSH per-connection server daemon (10.0.0.1:41870). Nov 12 17:37:22.105200 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 41870 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:37:22.106407 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:37:22.111073 systemd-logind[1426]: New session 2 of user core. Nov 12 17:37:22.117150 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 17:37:22.175084 sshd[1560]: pam_unix(sshd:session): session closed for user core Nov 12 17:37:22.194030 systemd[1]: sshd@1-10.0.0.11:22-10.0.0.1:41870.service: Deactivated successfully. Nov 12 17:37:22.195682 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 17:37:22.196969 systemd-logind[1426]: Session 2 logged out. Waiting for processes to exit. Nov 12 17:37:22.204919 systemd[1]: Started sshd@2-10.0.0.11:22-10.0.0.1:41876.service - OpenSSH per-connection server daemon (10.0.0.1:41876). Nov 12 17:37:22.206012 systemd-logind[1426]: Removed session 2. Nov 12 17:37:22.237568 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 41876 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:37:22.238749 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:37:22.242173 systemd-logind[1426]: New session 3 of user core. Nov 12 17:37:22.262172 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 17:37:22.309365 sshd[1567]: pam_unix(sshd:session): session closed for user core Nov 12 17:37:22.323431 systemd[1]: sshd@2-10.0.0.11:22-10.0.0.1:41876.service: Deactivated successfully. Nov 12 17:37:22.325013 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 17:37:22.326403 systemd-logind[1426]: Session 3 logged out. Waiting for processes to exit. Nov 12 17:37:22.327546 systemd[1]: Started sshd@3-10.0.0.11:22-10.0.0.1:41888.service - OpenSSH per-connection server daemon (10.0.0.1:41888). Nov 12 17:37:22.328364 systemd-logind[1426]: Removed session 3. Nov 12 17:37:22.363787 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 41888 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:37:22.365182 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:37:22.369027 systemd-logind[1426]: New session 4 of user core. Nov 12 17:37:22.383130 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 17:37:22.435148 sshd[1574]: pam_unix(sshd:session): session closed for user core Nov 12 17:37:22.458337 systemd[1]: sshd@3-10.0.0.11:22-10.0.0.1:41888.service: Deactivated successfully. Nov 12 17:37:22.459860 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 17:37:22.461438 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit. 
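
Annotation: this stretch of the log is a series of short SSH sessions ("New session N of user core" paired with "Removed session N"). A small sketch for measuring session lifetimes from these journal lines; the timestamp pattern matches this log's "Nov 12 17:37:22.111073" prefix, and the year is an assumption since the journal prefix omits it.

    # Sketch: pair "New session N" / "Removed session N" entries to compute
    # session lifetimes. Tuned to this journal's line format.
    import re
    from datetime import datetime

    TS = r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{6})"
    NEW = re.compile(TS + r" .*New session (?P<id>\d+) of user")
    END = re.compile(TS + r" .*Removed session (?P<id>\d+)\.")

    def parse(ts: str) -> datetime:
        # Year is assumed; the journal prefix does not carry one.
        return datetime.strptime("2024 " + ts, "%Y %b %d %H:%M:%S.%f")

    def session_lengths(lines):
        opened, lengths = {}, {}
        for line in lines:
            if m := NEW.search(line):
                opened[m["id"]] = parse(m["ts"])
            elif (m := END.search(line)) and m["id"] in opened:
                lengths[m["id"]] = parse(m["ts"]) - opened.pop(m["id"])
        return lengths

    # Demo values copied from the session-2 entries in this log.
    demo = [
        "Nov 12 17:37:22.111073 systemd-logind[1426]: New session 2 of user core.",
        "Nov 12 17:37:22.206012 systemd-logind[1426]: Removed session 2.",
    ]
    print(session_lengths(demo))  # {'2': datetime.timedelta(microseconds=94939)}
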
Nov 12 17:37:22.476282 systemd[1]: Started sshd@4-10.0.0.11:22-10.0.0.1:41904.service - OpenSSH per-connection server daemon (10.0.0.1:41904). Nov 12 17:37:22.477473 systemd-logind[1426]: Removed session 4. Nov 12 17:37:22.511032 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 41904 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:37:22.512267 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:37:22.516342 systemd-logind[1426]: New session 5 of user core. Nov 12 17:37:22.527142 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 17:37:22.587123 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 17:37:22.587424 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:37:22.602757 sudo[1584]: pam_unix(sudo:session): session closed for user root Nov 12 17:37:22.604391 sshd[1581]: pam_unix(sshd:session): session closed for user core Nov 12 17:37:22.613320 systemd[1]: sshd@4-10.0.0.11:22-10.0.0.1:41904.service: Deactivated successfully. Nov 12 17:37:22.615320 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 17:37:22.616523 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit. Nov 12 17:37:22.617741 systemd[1]: Started sshd@5-10.0.0.11:22-10.0.0.1:57756.service - OpenSSH per-connection server daemon (10.0.0.1:57756). Nov 12 17:37:22.618516 systemd-logind[1426]: Removed session 5. Nov 12 17:37:22.655410 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 57756 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:37:22.656733 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:37:22.661002 systemd-logind[1426]: New session 6 of user core. Nov 12 17:37:22.682149 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 17:37:22.733183 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 17:37:22.733451 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:37:22.736402 sudo[1593]: pam_unix(sudo:session): session closed for user root Nov 12 17:37:22.741143 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 17:37:22.741680 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:37:22.759230 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 17:37:22.760438 auditctl[1596]: No rules Nov 12 17:37:22.761267 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 17:37:22.761464 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 17:37:22.763007 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 17:37:22.785764 augenrules[1614]: No rules Nov 12 17:37:22.786902 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 17:37:22.788053 sudo[1592]: pam_unix(sudo:session): session closed for user root Nov 12 17:37:22.789653 sshd[1589]: pam_unix(sshd:session): session closed for user core Nov 12 17:37:22.798367 systemd[1]: sshd@5-10.0.0.11:22-10.0.0.1:57756.service: Deactivated successfully. Nov 12 17:37:22.799668 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 17:37:22.800789 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit. 
Nov 12 17:37:22.801930 systemd[1]: Started sshd@6-10.0.0.11:22-10.0.0.1:57762.service - OpenSSH per-connection server daemon (10.0.0.1:57762). Nov 12 17:37:22.802693 systemd-logind[1426]: Removed session 6. Nov 12 17:37:22.847027 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 57762 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:37:22.848532 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:37:22.854227 systemd-logind[1426]: New session 7 of user core. Nov 12 17:37:22.866132 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 17:37:22.921499 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 17:37:22.924499 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:37:23.261238 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 17:37:23.261399 (dockerd)[1642]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 17:37:23.519936 dockerd[1642]: time="2024-11-12T17:37:23.519789110Z" level=info msg="Starting up" Nov 12 17:37:23.669169 dockerd[1642]: time="2024-11-12T17:37:23.669121150Z" level=info msg="Loading containers: start." Nov 12 17:37:23.769010 kernel: Initializing XFRM netlink socket Nov 12 17:37:23.852929 systemd-networkd[1384]: docker0: Link UP Nov 12 17:37:23.875606 dockerd[1642]: time="2024-11-12T17:37:23.875554830Z" level=info msg="Loading containers: done." Nov 12 17:37:23.889442 dockerd[1642]: time="2024-11-12T17:37:23.889372470Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 17:37:23.889578 dockerd[1642]: time="2024-11-12T17:37:23.889490990Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 17:37:23.889636 dockerd[1642]: time="2024-11-12T17:37:23.889606550Z" level=info msg="Daemon has completed initialization" Nov 12 17:37:23.936140 dockerd[1642]: time="2024-11-12T17:37:23.935948350Z" level=info msg="API listen on /run/docker.sock" Nov 12 17:37:23.936147 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 17:37:24.629892 containerd[1442]: time="2024-11-12T17:37:24.629857710Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 17:37:25.325707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount866586122.mount: Deactivated successfully. 
Nov 12 17:37:26.631287 containerd[1442]: time="2024-11-12T17:37:26.631239430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:26.631886 containerd[1442]: time="2024-11-12T17:37:26.631849550Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=32201617" Nov 12 17:37:26.632886 containerd[1442]: time="2024-11-12T17:37:26.632824150Z" level=info msg="ImageCreate event name:\"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:26.636031 containerd[1442]: time="2024-11-12T17:37:26.635967870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:26.637286 containerd[1442]: time="2024-11-12T17:37:26.637250190Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"32198415\" in 2.00735432s" Nov 12 17:37:26.637357 containerd[1442]: time="2024-11-12T17:37:26.637291270Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\"" Nov 12 17:37:26.655902 containerd[1442]: time="2024-11-12T17:37:26.655805790Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 17:37:28.550530 containerd[1442]: time="2024-11-12T17:37:28.550465070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:28.551129 containerd[1442]: time="2024-11-12T17:37:28.551087310Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=29381046" Nov 12 17:37:28.551892 containerd[1442]: time="2024-11-12T17:37:28.551841150Z" level=info msg="ImageCreate event name:\"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:28.555674 containerd[1442]: time="2024-11-12T17:37:28.554866510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:28.556409 containerd[1442]: time="2024-11-12T17:37:28.556113790Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"30783669\" in 1.9002676s" Nov 12 17:37:28.556409 containerd[1442]: time="2024-11-12T17:37:28.556155350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\"" Nov 12 
17:37:28.575314 containerd[1442]: time="2024-11-12T17:37:28.575276550Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 17:37:28.869445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 17:37:28.884239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:37:28.973494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:37:28.977392 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:37:29.020112 kubelet[1877]: E1112 17:37:29.020030 1877 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:37:29.023623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:37:29.023771 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:37:29.658715 containerd[1442]: time="2024-11-12T17:37:29.658658430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:29.659265 containerd[1442]: time="2024-11-12T17:37:29.659229230Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=15770290" Nov 12 17:37:29.660007 containerd[1442]: time="2024-11-12T17:37:29.659935590Z" level=info msg="ImageCreate event name:\"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:29.663528 containerd[1442]: time="2024-11-12T17:37:29.663475350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:29.664218 containerd[1442]: time="2024-11-12T17:37:29.664188670Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"17172931\" in 1.08887396s" Nov 12 17:37:29.664273 containerd[1442]: time="2024-11-12T17:37:29.664218430Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\"" Nov 12 17:37:29.683671 containerd[1442]: time="2024-11-12T17:37:29.683636710Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 17:37:30.746234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999595837.mount: Deactivated successfully. 
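
Annotation: the "kubelet.service: Scheduled restart job, restart counter is at 1" entry above is systemd's Restart= policy relaunching the failed kubelet. The roughly 10-second gap between the failure (17:37:18.619) and the scheduled restart (17:37:28.869) is consistent with RestartSec=10, though that is inferred from the timestamps, not read from the unit file. A drop-in controlling this cadence might look like the sketch below; the path and values are illustrative.

    # Sketch: a systemd drop-in shaping the restart loop seen above.
    # Values are illustrative assumptions. Requires root.
    import pathlib

    DROPIN = """\
    [Service]
    Restart=on-failure
    RestartSec=10
    """

    d = pathlib.Path("/etc/systemd/system/kubelet.service.d")
    d.mkdir(parents=True, exist_ok=True)
    (d / "10-restart.conf").write_text(DROPIN)
    # A `systemctl daemon-reload` is needed before the new values take effect.
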
Nov 12 17:37:31.072116 containerd[1442]: time="2024-11-12T17:37:31.071675310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:31.072410 containerd[1442]: time="2024-11-12T17:37:31.072253270Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=25272231" Nov 12 17:37:31.073147 containerd[1442]: time="2024-11-12T17:37:31.073104870Z" level=info msg="ImageCreate event name:\"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:31.075377 containerd[1442]: time="2024-11-12T17:37:31.075343510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:31.075998 containerd[1442]: time="2024-11-12T17:37:31.075950510Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"25271248\" in 1.39227408s" Nov 12 17:37:31.076035 containerd[1442]: time="2024-11-12T17:37:31.076004350Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\"" Nov 12 17:37:31.094024 containerd[1442]: time="2024-11-12T17:37:31.093950950Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 17:37:31.691020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1035928516.mount: Deactivated successfully. 
Nov 12 17:37:32.257249 containerd[1442]: time="2024-11-12T17:37:32.257189630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:32.258573 containerd[1442]: time="2024-11-12T17:37:32.258538270Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Nov 12 17:37:32.259283 containerd[1442]: time="2024-11-12T17:37:32.259234310Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:32.262927 containerd[1442]: time="2024-11-12T17:37:32.262874150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:32.264787 containerd[1442]: time="2024-11-12T17:37:32.264715710Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.17071784s" Nov 12 17:37:32.264787 containerd[1442]: time="2024-11-12T17:37:32.264764390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Nov 12 17:37:32.282395 containerd[1442]: time="2024-11-12T17:37:32.282357430Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 17:37:32.688492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1148272460.mount: Deactivated successfully. 
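
Annotation: each "Pulled image ... size ... in <duration>" entry above carries enough data to derive pull throughput. A sketch of extracting it follows; the regex is tuned to this journal's rendering, where quotes inside msg="..." appear escaped as \", and is not a general containerd log parser.

    # Sketch: derive MiB/s from the containerd "Pulled image" entries above.
    import re

    PULLED = re.compile(
        r'Pulled image \\"(?P<image>[^"\\]+)\\".*'
        r'size \\"(?P<size>\d+)\\" in (?P<dur>[\d.]+)(?P<unit>ms|s)"'
    )

    def pull_rate(line):
        """Return (image, MiB/s) for a 'Pulled image' journal line, else None."""
        m = PULLED.search(line)
        if not m:
            return None
        secs = float(m["dur"]) / (1000.0 if m["unit"] == "ms" else 1.0)
        return m["image"], int(m["size"]) / secs / 2**20

    # Size and duration copied from the kube-apiserver entry in this log.
    sample = ('msg="Pulled image \\"registry.k8s.io/kube-apiserver:v1.29.10\\" '
              '... size \\"32198415\\" in 2.00735432s"')
    print(pull_rate(sample))  # ('registry.k8s.io/kube-apiserver:v1.29.10', ~15.3)
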
Nov 12 17:37:32.697733 containerd[1442]: time="2024-11-12T17:37:32.697683390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:32.698717 containerd[1442]: time="2024-11-12T17:37:32.698678350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Nov 12 17:37:32.699321 containerd[1442]: time="2024-11-12T17:37:32.699257550Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:32.701518 containerd[1442]: time="2024-11-12T17:37:32.701456430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:32.702398 containerd[1442]: time="2024-11-12T17:37:32.702369430Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 419.97368ms" Nov 12 17:37:32.702443 containerd[1442]: time="2024-11-12T17:37:32.702403030Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Nov 12 17:37:32.720946 containerd[1442]: time="2024-11-12T17:37:32.720905830Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 17:37:33.475351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637266377.mount: Deactivated successfully. Nov 12 17:37:35.396636 containerd[1442]: time="2024-11-12T17:37:35.396580910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:35.397104 containerd[1442]: time="2024-11-12T17:37:35.397061670Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Nov 12 17:37:35.397967 containerd[1442]: time="2024-11-12T17:37:35.397936630Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:35.402116 containerd[1442]: time="2024-11-12T17:37:35.402057390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:37:35.403213 containerd[1442]: time="2024-11-12T17:37:35.403181350Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.68223508s" Nov 12 17:37:35.403265 containerd[1442]: time="2024-11-12T17:37:35.403218510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Nov 12 17:37:39.145926 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
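
Annotation: the repeated kubelet failures above ("open /var/lib/kubelet/config.yaml: no such file or directory") all come from this missing config file, which on a kubeadm-managed node is normally written by kubeadm init/join. A minimal sketch of the file's shape follows; the two settings shown are grounded in this log (SystemdCgroup:true in the CRI config dump and the "Adding static pod path" entry later), but the file as a whole is illustrative, not recovered from this host.

    # Sketch: the file the failing kubelet expects. Values are illustrative;
    # a real kubeadm-generated config carries many more fields. Requires root.
    import pathlib

    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    """

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(KUBELET_CONFIG)
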
Nov 12 17:37:39.158379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:37:39.239353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:37:39.242476 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:37:39.277065 kubelet[2098]: E1112 17:37:39.277012 2098 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:37:39.279642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:37:39.279789 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:37:40.652248 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:37:40.662218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:37:40.679954 systemd[1]: Reloading requested from client PID 2113 ('systemctl') (unit session-7.scope)... Nov 12 17:37:40.679972 systemd[1]: Reloading... Nov 12 17:37:40.747997 zram_generator::config[2152]: No configuration found. Nov 12 17:37:40.962346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:37:41.015320 systemd[1]: Reloading finished in 335 ms. Nov 12 17:37:41.056157 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:37:41.059273 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 17:37:41.059493 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:37:41.060897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:37:41.160471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:37:41.165354 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 17:37:41.212029 kubelet[2199]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:37:41.212029 kubelet[2199]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 17:37:41.212029 kubelet[2199]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 17:37:41.212867 kubelet[2199]: I1112 17:37:41.212804 2199 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 17:37:42.098825 kubelet[2199]: I1112 17:37:42.098793 2199 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 17:37:42.100121 kubelet[2199]: I1112 17:37:42.098954 2199 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 17:37:42.100121 kubelet[2199]: I1112 17:37:42.099197 2199 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 17:37:42.118850 kubelet[2199]: I1112 17:37:42.118810 2199 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 17:37:42.119058 kubelet[2199]: E1112 17:37:42.119034 2199 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:42.128546 kubelet[2199]: I1112 17:37:42.128515 2199 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 17:37:42.130580 kubelet[2199]: I1112 17:37:42.130402 2199 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 17:37:42.130757 kubelet[2199]: I1112 17:37:42.130741 2199 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 17:37:42.130900 kubelet[2199]: I1112 17:37:42.130769 2199 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 17:37:42.130900 kubelet[2199]: I1112 17:37:42.130782 2199 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 17:37:42.131907 kubelet[2199]: I1112 17:37:42.131868 2199 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:37:42.134258 kubelet[2199]: I1112 17:37:42.134228 2199 kubelet.go:396] "Attempting to sync node with API server" Nov 12 17:37:42.134258 kubelet[2199]: 
I1112 17:37:42.134256 2199 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 17:37:42.134329 kubelet[2199]: I1112 17:37:42.134279 2199 kubelet.go:312] "Adding apiserver pod source" Nov 12 17:37:42.134329 kubelet[2199]: I1112 17:37:42.134291 2199 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 17:37:42.134779 kubelet[2199]: W1112 17:37:42.134728 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:42.134779 kubelet[2199]: E1112 17:37:42.134780 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:42.137731 kubelet[2199]: W1112 17:37:42.137697 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:42.137786 kubelet[2199]: E1112 17:37:42.137734 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:42.140306 kubelet[2199]: I1112 17:37:42.140286 2199 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 17:37:42.142765 kubelet[2199]: I1112 17:37:42.142695 2199 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 17:37:42.145672 kubelet[2199]: W1112 17:37:42.145635 2199 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
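
Annotation: the reflector errors above all reduce to one condition: TCP connects to 10.0.0.11:6443 are refused because the API server is not up yet. A small probe with retries, sketched below; the host and port are taken from the log, while the retry count and delay are arbitrary illustrative choices.

    # Sketch: probe the API server endpoint behind the "connection refused"
    # reflector errors above.
    import socket, time

    def wait_for_apiserver(host="10.0.0.11", port=6443, attempts=5, delay=2.0):
        for i in range(attempts):
            try:
                with socket.create_connection((host, port), timeout=3):
                    return True
            except OSError as e:
                print(f"attempt {i + 1}: {e}")  # e.g. "Connection refused"
                time.sleep(delay)
        return False

    if __name__ == "__main__":
        print("apiserver reachable:", wait_for_apiserver())
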
Nov 12 17:37:42.146565 kubelet[2199]: I1112 17:37:42.146542 2199 server.go:1256] "Started kubelet" Nov 12 17:37:42.147089 kubelet[2199]: I1112 17:37:42.146797 2199 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 17:37:42.147089 kubelet[2199]: I1112 17:37:42.146922 2199 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 17:37:42.147355 kubelet[2199]: I1112 17:37:42.147233 2199 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 17:37:42.148606 kubelet[2199]: I1112 17:37:42.148253 2199 server.go:461] "Adding debug handlers to kubelet server" Nov 12 17:37:42.149698 kubelet[2199]: I1112 17:37:42.149670 2199 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 17:37:42.149760 kubelet[2199]: I1112 17:37:42.149734 2199 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 17:37:42.149860 kubelet[2199]: I1112 17:37:42.149837 2199 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 17:37:42.149910 kubelet[2199]: I1112 17:37:42.149896 2199 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 17:37:42.150212 kubelet[2199]: W1112 17:37:42.150178 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:42.150269 kubelet[2199]: E1112 17:37:42.150216 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:42.150337 kubelet[2199]: E1112 17:37:42.150318 2199 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 17:37:42.150587 kubelet[2199]: E1112 17:37:42.150549 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="200ms" Nov 12 17:37:42.151024 kubelet[2199]: E1112 17:37:42.151002 2199 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 17:37:42.151475 kubelet[2199]: I1112 17:37:42.151457 2199 factory.go:221] Registration of the systemd container factory successfully Nov 12 17:37:42.151563 kubelet[2199]: I1112 17:37:42.151547 2199 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 17:37:42.152544 kubelet[2199]: I1112 17:37:42.152525 2199 factory.go:221] Registration of the containerd container factory successfully Nov 12 17:37:42.152613 kubelet[2199]: E1112 17:37:42.152571 2199 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.11:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.11:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.180749324d9c1bf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 17:37:42.14651391 +0000 UTC m=+0.976972561,LastTimestamp:2024-11-12 17:37:42.14651391 +0000 UTC m=+0.976972561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 17:37:42.163693 kubelet[2199]: I1112 17:37:42.163576 2199 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 17:37:42.163693 kubelet[2199]: I1112 17:37:42.163600 2199 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 17:37:42.163693 kubelet[2199]: I1112 17:37:42.163616 2199 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:37:42.164303 kubelet[2199]: I1112 17:37:42.164274 2199 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 17:37:42.165196 kubelet[2199]: I1112 17:37:42.165180 2199 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 17:37:42.165230 kubelet[2199]: I1112 17:37:42.165202 2199 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 17:37:42.165230 kubelet[2199]: I1112 17:37:42.165217 2199 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 17:37:42.165287 kubelet[2199]: E1112 17:37:42.165276 2199 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 17:37:42.170625 kubelet[2199]: W1112 17:37:42.170575 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:42.170625 kubelet[2199]: E1112 17:37:42.170630 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:42.227446 kubelet[2199]: I1112 17:37:42.227394 2199 policy_none.go:49] "None policy: Start" Nov 12 17:37:42.228199 kubelet[2199]: I1112 17:37:42.228143 2199 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 17:37:42.228375 kubelet[2199]: I1112 17:37:42.228328 2199 state_mem.go:35] "Initializing new in-memory state store" Nov 12 17:37:42.236431 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 17:37:42.250026 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 17:37:42.251461 kubelet[2199]: I1112 17:37:42.251428 2199 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:37:42.251858 kubelet[2199]: E1112 17:37:42.251838 2199 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Nov 12 17:37:42.253303 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 12 17:37:42.264784 kubelet[2199]: I1112 17:37:42.264754 2199 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 17:37:42.265123 kubelet[2199]: I1112 17:37:42.265036 2199 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 17:37:42.265676 kubelet[2199]: I1112 17:37:42.265501 2199 topology_manager.go:215] "Topology Admit Handler" podUID="f817ee9e0f82f9dd80dc13037a28db17" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 17:37:42.270156 kubelet[2199]: I1112 17:37:42.270051 2199 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 17:37:42.270208 kubelet[2199]: E1112 17:37:42.270177 2199 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 17:37:42.271831 kubelet[2199]: I1112 17:37:42.271750 2199 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 17:37:42.278388 systemd[1]: Created slice kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice - libcontainer container kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice. Nov 12 17:37:42.299592 systemd[1]: Created slice kubepods-burstable-podf817ee9e0f82f9dd80dc13037a28db17.slice - libcontainer container kubepods-burstable-podf817ee9e0f82f9dd80dc13037a28db17.slice. Nov 12 17:37:42.304226 systemd[1]: Created slice kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice - libcontainer container kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice. Nov 12 17:37:42.352027 kubelet[2199]: E1112 17:37:42.351131 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="400ms" Nov 12 17:37:42.451904 kubelet[2199]: I1112 17:37:42.451742 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:37:42.451904 kubelet[2199]: I1112 17:37:42.451797 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:37:42.451904 kubelet[2199]: I1112 17:37:42.451820 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:37:42.451904 kubelet[2199]: I1112 17:37:42.451853 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 17:37:42.451904 kubelet[2199]: I1112 17:37:42.451876 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:37:42.452204 kubelet[2199]: I1112 17:37:42.451930 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f817ee9e0f82f9dd80dc13037a28db17-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f817ee9e0f82f9dd80dc13037a28db17\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:37:42.452204 kubelet[2199]: I1112 17:37:42.452049 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f817ee9e0f82f9dd80dc13037a28db17-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f817ee9e0f82f9dd80dc13037a28db17\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:37:42.452204 kubelet[2199]: I1112 17:37:42.452076 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:37:42.452204 kubelet[2199]: I1112 17:37:42.452095 2199 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f817ee9e0f82f9dd80dc13037a28db17-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f817ee9e0f82f9dd80dc13037a28db17\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:37:42.453161 kubelet[2199]: I1112 17:37:42.453075 2199 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:37:42.453431 kubelet[2199]: E1112 17:37:42.453399 2199 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Nov 12 17:37:42.598947 kubelet[2199]: E1112 17:37:42.598899 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:42.599806 containerd[1442]: time="2024-11-12T17:37:42.599521390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}" Nov 12 17:37:42.603672 kubelet[2199]: E1112 17:37:42.603594 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:42.604024 containerd[1442]: time="2024-11-12T17:37:42.603972310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f817ee9e0f82f9dd80dc13037a28db17,Namespace:kube-system,Attempt:0,}" Nov 12 
17:37:42.606573 kubelet[2199]: E1112 17:37:42.606521 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:42.607519 containerd[1442]: time="2024-11-12T17:37:42.606825590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}" Nov 12 17:37:42.751683 kubelet[2199]: E1112 17:37:42.751643 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="800ms" Nov 12 17:37:42.855268 kubelet[2199]: I1112 17:37:42.855161 2199 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:37:42.855675 kubelet[2199]: E1112 17:37:42.855644 2199 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Nov 12 17:37:43.022795 kubelet[2199]: W1112 17:37:43.022700 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:43.022795 kubelet[2199]: E1112 17:37:43.022771 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:43.060957 kubelet[2199]: W1112 17:37:43.060866 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:43.060957 kubelet[2199]: E1112 17:37:43.060929 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:43.147418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095198774.mount: Deactivated successfully. 
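
Aside: the "Failed to ensure lease exists, will retry" errors above show the kubelet's lease controller doubling its retry interval while the API server is unreachable (400ms at 17:37:42.352, 800ms at 17:37:42.751, and 1.6s later in this log). A minimal Go sketch of that doubling backoff, with an assumed 7s cap for illustration only (the real logic lives in the kubelet's nodelease code and differs in detail):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Doubling backoff as observed in the log: 400ms -> 800ms -> 1.6s -> ...
        interval := 400 * time.Millisecond
        maxInterval := 7 * time.Second // assumed cap, for illustration only
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("attempt %d: retrying lease ensure in %v\n", attempt, interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }
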
Nov 12 17:37:43.157353 containerd[1442]: time="2024-11-12T17:37:43.157310510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:37:43.158666 containerd[1442]: time="2024-11-12T17:37:43.158307710Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:37:43.159364 containerd[1442]: time="2024-11-12T17:37:43.159338710Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 17:37:43.159957 containerd[1442]: time="2024-11-12T17:37:43.159936310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Nov 12 17:37:43.160674 containerd[1442]: time="2024-11-12T17:37:43.160636030Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:37:43.161711 containerd[1442]: time="2024-11-12T17:37:43.161625190Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:37:43.161898 containerd[1442]: time="2024-11-12T17:37:43.161878430Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 17:37:43.164291 containerd[1442]: time="2024-11-12T17:37:43.164257910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:37:43.166059 containerd[1442]: time="2024-11-12T17:37:43.166031270Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 566.42928ms" Nov 12 17:37:43.168637 containerd[1442]: time="2024-11-12T17:37:43.168586870Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.52264ms" Nov 12 17:37:43.169464 containerd[1442]: time="2024-11-12T17:37:43.169352270Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.4636ms" Nov 12 17:37:43.250826 kubelet[2199]: W1112 17:37:43.250740 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 
17:37:43.250826 kubelet[2199]: E1112 17:37:43.250803 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:43.304115 containerd[1442]: time="2024-11-12T17:37:43.304000710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:37:43.304115 containerd[1442]: time="2024-11-12T17:37:43.304074910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:37:43.304318 containerd[1442]: time="2024-11-12T17:37:43.304089870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:37:43.304551 containerd[1442]: time="2024-11-12T17:37:43.304507550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:37:43.305151 containerd[1442]: time="2024-11-12T17:37:43.304894030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:37:43.305151 containerd[1442]: time="2024-11-12T17:37:43.304936430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:37:43.305151 containerd[1442]: time="2024-11-12T17:37:43.304947630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:37:43.305151 containerd[1442]: time="2024-11-12T17:37:43.305069710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:37:43.305778 containerd[1442]: time="2024-11-12T17:37:43.305722390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:37:43.305845 containerd[1442]: time="2024-11-12T17:37:43.305774070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:37:43.305880 containerd[1442]: time="2024-11-12T17:37:43.305831950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:37:43.307199 containerd[1442]: time="2024-11-12T17:37:43.305928390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:37:43.335135 systemd[1]: Started cri-containerd-10412e5dc90edea94b0de923f3b026bbe4b135b3e7d7a98854ac98724f3d381e.scope - libcontainer container 10412e5dc90edea94b0de923f3b026bbe4b135b3e7d7a98854ac98724f3d381e. Nov 12 17:37:43.336477 systemd[1]: Started cri-containerd-68e99f5e1fc0987682eb75c72fa70ce27feaa34851258dec3192c9ac7e1c76d7.scope - libcontainer container 68e99f5e1fc0987682eb75c72fa70ce27feaa34851258dec3192c9ac7e1c76d7. Nov 12 17:37:43.339695 systemd[1]: Started cri-containerd-8e73e9f3d0f45d49254f5228fc5f310300bc5b150bf8d86e36510674f2c422ce.scope - libcontainer container 8e73e9f3d0f45d49254f5228fc5f310300bc5b150bf8d86e36510674f2c422ce. 
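
Aside: each "Started cri-containerd-<id>.scope" line above is systemd creating a transient scope unit for one pod sandbox, which is how containerd delegates cgroup management when the systemd cgroup driver is in use (CgroupDriver "systemd" appears later in this log). A hedged sketch of the unit-name pattern only; the actual construction happens inside containerd, not in code like this:

    package main

    import "fmt"

    // scopeUnit mirrors the naming visible in the log lines; this is
    // illustrative, not containerd's real implementation.
    func scopeUnit(sandboxID string) string {
        return fmt.Sprintf("cri-containerd-%s.scope", sandboxID)
    }

    func main() {
        fmt.Println(scopeUnit("10412e5dc90edea94b0de923f3b026bbe4b135b3e7d7a98854ac98724f3d381e"))
    }
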
Nov 12 17:37:43.371758 containerd[1442]: time="2024-11-12T17:37:43.371703430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f817ee9e0f82f9dd80dc13037a28db17,Namespace:kube-system,Attempt:0,} returns sandbox id \"10412e5dc90edea94b0de923f3b026bbe4b135b3e7d7a98854ac98724f3d381e\"" Nov 12 17:37:43.371758 containerd[1442]: time="2024-11-12T17:37:43.372585310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"68e99f5e1fc0987682eb75c72fa70ce27feaa34851258dec3192c9ac7e1c76d7\"" Nov 12 17:37:43.373715 kubelet[2199]: E1112 17:37:43.373686 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:43.374811 kubelet[2199]: E1112 17:37:43.374783 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:43.378788 containerd[1442]: time="2024-11-12T17:37:43.378752990Z" level=info msg="CreateContainer within sandbox \"10412e5dc90edea94b0de923f3b026bbe4b135b3e7d7a98854ac98724f3d381e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 17:37:43.379047 containerd[1442]: time="2024-11-12T17:37:43.378808110Z" level=info msg="CreateContainer within sandbox \"68e99f5e1fc0987682eb75c72fa70ce27feaa34851258dec3192c9ac7e1c76d7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 17:37:43.384282 containerd[1442]: time="2024-11-12T17:37:43.384252510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e73e9f3d0f45d49254f5228fc5f310300bc5b150bf8d86e36510674f2c422ce\"" Nov 12 17:37:43.384932 kubelet[2199]: E1112 17:37:43.384912 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:43.386567 containerd[1442]: time="2024-11-12T17:37:43.386539950Z" level=info msg="CreateContainer within sandbox \"8e73e9f3d0f45d49254f5228fc5f310300bc5b150bf8d86e36510674f2c422ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 17:37:43.494932 containerd[1442]: time="2024-11-12T17:37:43.494803270Z" level=info msg="CreateContainer within sandbox \"10412e5dc90edea94b0de923f3b026bbe4b135b3e7d7a98854ac98724f3d381e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cd538b055888e8f799fc8428f67c70679779326e76ecf2d77c0a81644733ac3f\"" Nov 12 17:37:43.496397 containerd[1442]: time="2024-11-12T17:37:43.496368750Z" level=info msg="StartContainer for \"cd538b055888e8f799fc8428f67c70679779326e76ecf2d77c0a81644733ac3f\"" Nov 12 17:37:43.505555 containerd[1442]: time="2024-11-12T17:37:43.505430550Z" level=info msg="CreateContainer within sandbox \"68e99f5e1fc0987682eb75c72fa70ce27feaa34851258dec3192c9ac7e1c76d7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fb454feb317452105c158c675d66a64a51386a193978e9d061079b43d15bb4b7\"" Nov 12 17:37:43.507163 containerd[1442]: time="2024-11-12T17:37:43.506024270Z" level=info msg="StartContainer for \"fb454feb317452105c158c675d66a64a51386a193978e9d061079b43d15bb4b7\"" Nov 12 
17:37:43.507365 containerd[1442]: time="2024-11-12T17:37:43.507318430Z" level=info msg="CreateContainer within sandbox \"8e73e9f3d0f45d49254f5228fc5f310300bc5b150bf8d86e36510674f2c422ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6920a37f6237b51992b4be48d1aaaff305b10f159718420feda98c332eebbc11\"" Nov 12 17:37:43.507708 containerd[1442]: time="2024-11-12T17:37:43.507688470Z" level=info msg="StartContainer for \"6920a37f6237b51992b4be48d1aaaff305b10f159718420feda98c332eebbc11\"" Nov 12 17:37:43.522145 systemd[1]: Started cri-containerd-cd538b055888e8f799fc8428f67c70679779326e76ecf2d77c0a81644733ac3f.scope - libcontainer container cd538b055888e8f799fc8428f67c70679779326e76ecf2d77c0a81644733ac3f. Nov 12 17:37:43.530146 systemd[1]: Started cri-containerd-6920a37f6237b51992b4be48d1aaaff305b10f159718420feda98c332eebbc11.scope - libcontainer container 6920a37f6237b51992b4be48d1aaaff305b10f159718420feda98c332eebbc11. Nov 12 17:37:43.540221 systemd[1]: Started cri-containerd-fb454feb317452105c158c675d66a64a51386a193978e9d061079b43d15bb4b7.scope - libcontainer container fb454feb317452105c158c675d66a64a51386a193978e9d061079b43d15bb4b7. Nov 12 17:37:43.552957 kubelet[2199]: E1112 17:37:43.552919 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="1.6s" Nov 12 17:37:43.591205 containerd[1442]: time="2024-11-12T17:37:43.591160590Z" level=info msg="StartContainer for \"6920a37f6237b51992b4be48d1aaaff305b10f159718420feda98c332eebbc11\" returns successfully" Nov 12 17:37:43.591691 containerd[1442]: time="2024-11-12T17:37:43.591277270Z" level=info msg="StartContainer for \"cd538b055888e8f799fc8428f67c70679779326e76ecf2d77c0a81644733ac3f\" returns successfully" Nov 12 17:37:43.612322 containerd[1442]: time="2024-11-12T17:37:43.607065550Z" level=info msg="StartContainer for \"fb454feb317452105c158c675d66a64a51386a193978e9d061079b43d15bb4b7\" returns successfully" Nov 12 17:37:43.659147 kubelet[2199]: I1112 17:37:43.657115 2199 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:37:43.659147 kubelet[2199]: E1112 17:37:43.657424 2199 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Nov 12 17:37:43.678242 kubelet[2199]: W1112 17:37:43.676051 2199 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:43.678242 kubelet[2199]: E1112 17:37:43.676109 2199 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Nov 12 17:37:44.175526 kubelet[2199]: E1112 17:37:44.175432 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:44.180699 kubelet[2199]: E1112 17:37:44.180667 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:44.182365 kubelet[2199]: E1112 17:37:44.182350 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:45.179309 kubelet[2199]: E1112 17:37:45.179268 2199 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 17:37:45.185990 kubelet[2199]: E1112 17:37:45.184931 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:45.259010 kubelet[2199]: I1112 17:37:45.258948 2199 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:37:45.276022 kubelet[2199]: I1112 17:37:45.275909 2199 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 17:37:45.284741 kubelet[2199]: E1112 17:37:45.284686 2199 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 17:37:45.385423 kubelet[2199]: E1112 17:37:45.385376 2199 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 17:37:45.485866 kubelet[2199]: E1112 17:37:45.485752 2199 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 17:37:45.586313 kubelet[2199]: E1112 17:37:45.586264 2199 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 17:37:45.687291 kubelet[2199]: E1112 17:37:45.687249 2199 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 17:37:45.788293 kubelet[2199]: E1112 17:37:45.788093 2199 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 17:37:45.888784 kubelet[2199]: E1112 17:37:45.888742 2199 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 17:37:45.989219 kubelet[2199]: E1112 17:37:45.989182 2199 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 17:37:46.136652 kubelet[2199]: I1112 17:37:46.136541 2199 apiserver.go:52] "Watching apiserver" Nov 12 17:37:46.150893 kubelet[2199]: I1112 17:37:46.150835 2199 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 17:37:46.388944 kubelet[2199]: E1112 17:37:46.388724 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:47.186464 kubelet[2199]: E1112 17:37:47.186434 2199 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:48.017232 systemd[1]: Reloading requested from client PID 2477 ('systemctl') (unit session-7.scope)... Nov 12 17:37:48.017247 systemd[1]: Reloading... Nov 12 17:37:48.072027 zram_generator::config[2522]: No configuration found. 
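
Aside: the recurring dns.go:153 "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than the kubelet will pass through; only the first three survive, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8" (three entries, matching the classic resolver limit of 3). A minimal sketch of that clamping, with a hypothetical fourth entry standing in for whatever was dropped:

    package main

    import "fmt"

    const maxNameservers = 3 // resolver limit the kubelet enforces

    func clampNameservers(ns []string) []string {
        if len(ns) > maxNameservers {
            return ns[:maxNameservers]
        }
        return ns
    }

    func main() {
        // "9.9.9.9" is a made-up example of an omitted entry.
        all := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        fmt.Println("applied nameserver line:", clampNameservers(all))
    }
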
Nov 12 17:37:48.162847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:37:48.231293 systemd[1]: Reloading finished in 213 ms. Nov 12 17:37:48.265974 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:37:48.284031 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 17:37:48.284274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:37:48.284326 systemd[1]: kubelet.service: Consumed 1.300s CPU time, 115.1M memory peak, 0B memory swap peak. Nov 12 17:37:48.294346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:37:48.384233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:37:48.389516 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 17:37:48.430781 kubelet[2558]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:37:48.430781 kubelet[2558]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 17:37:48.430781 kubelet[2558]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:37:48.430781 kubelet[2558]: I1112 17:37:48.430705 2558 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 17:37:48.434604 kubelet[2558]: I1112 17:37:48.434569 2558 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 17:37:48.434604 kubelet[2558]: I1112 17:37:48.434594 2558 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 17:37:48.435166 kubelet[2558]: I1112 17:37:48.434786 2558 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 17:37:48.436272 kubelet[2558]: I1112 17:37:48.436250 2558 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 17:37:48.438362 kubelet[2558]: I1112 17:37:48.438334 2558 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 17:37:48.448598 kubelet[2558]: I1112 17:37:48.448571 2558 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 17:37:48.448798 kubelet[2558]: I1112 17:37:48.448786 2558 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 17:37:48.449051 kubelet[2558]: I1112 17:37:48.448965 2558 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 17:37:48.449113 kubelet[2558]: I1112 17:37:48.449059 2558 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 17:37:48.449113 kubelet[2558]: I1112 17:37:48.449071 2558 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 17:37:48.449113 kubelet[2558]: I1112 17:37:48.449104 2558 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:37:48.449213 kubelet[2558]: I1112 17:37:48.449202 2558 kubelet.go:396] "Attempting to sync node with API server" Nov 12 17:37:48.449244 kubelet[2558]: I1112 17:37:48.449218 2558 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 17:37:48.449244 kubelet[2558]: I1112 17:37:48.449238 2558 kubelet.go:312] "Adding apiserver pod source" Nov 12 17:37:48.449291 kubelet[2558]: I1112 17:37:48.449249 2558 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 17:37:48.450178 kubelet[2558]: I1112 17:37:48.450160 2558 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 17:37:48.452008 kubelet[2558]: I1112 17:37:48.451088 2558 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 17:37:48.452008 kubelet[2558]: I1112 17:37:48.451481 2558 server.go:1256] "Started kubelet" Nov 12 17:37:48.452160 kubelet[2558]: I1112 17:37:48.452144 2558 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 17:37:48.452803 kubelet[2558]: I1112 17:37:48.452778 2558 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 17:37:48.452944 kubelet[2558]: I1112 17:37:48.452924 2558 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 17:37:48.453272 kubelet[2558]: 
I1112 17:37:48.453255 2558 server.go:461] "Adding debug handlers to kubelet server" Nov 12 17:37:48.455200 kubelet[2558]: I1112 17:37:48.455177 2558 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 17:37:48.461913 kubelet[2558]: E1112 17:37:48.459037 2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 17:37:48.461913 kubelet[2558]: I1112 17:37:48.459068 2558 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 17:37:48.461913 kubelet[2558]: I1112 17:37:48.459168 2558 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 17:37:48.461913 kubelet[2558]: I1112 17:37:48.459297 2558 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 17:37:48.464205 kubelet[2558]: I1112 17:37:48.464171 2558 factory.go:221] Registration of the systemd container factory successfully Nov 12 17:37:48.464289 kubelet[2558]: I1112 17:37:48.464263 2558 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 17:37:48.466868 kubelet[2558]: I1112 17:37:48.466838 2558 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 17:37:48.467756 kubelet[2558]: I1112 17:37:48.467729 2558 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 17:37:48.467756 kubelet[2558]: I1112 17:37:48.467748 2558 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 17:37:48.467851 kubelet[2558]: I1112 17:37:48.467765 2558 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 17:37:48.467851 kubelet[2558]: E1112 17:37:48.467809 2558 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 17:37:48.481997 kubelet[2558]: I1112 17:37:48.479239 2558 factory.go:221] Registration of the containerd container factory successfully Nov 12 17:37:48.521712 kubelet[2558]: I1112 17:37:48.521675 2558 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 17:37:48.521712 kubelet[2558]: I1112 17:37:48.521699 2558 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 17:37:48.521712 kubelet[2558]: I1112 17:37:48.521718 2558 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:37:48.521881 kubelet[2558]: I1112 17:37:48.521871 2558 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 17:37:48.521939 kubelet[2558]: I1112 17:37:48.521915 2558 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 17:37:48.521939 kubelet[2558]: I1112 17:37:48.521930 2558 policy_none.go:49] "None policy: Start" Nov 12 17:37:48.522617 kubelet[2558]: I1112 17:37:48.522593 2558 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 17:37:48.522653 kubelet[2558]: I1112 17:37:48.522621 2558 state_mem.go:35] "Initializing new in-memory state store" Nov 12 17:37:48.522791 kubelet[2558]: I1112 17:37:48.522770 2558 state_mem.go:75] "Updated machine memory state" Nov 12 17:37:48.527962 kubelet[2558]: I1112 17:37:48.527934 2558 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 17:37:48.528241 kubelet[2558]: I1112 17:37:48.528178 2558 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 17:37:48.564887 kubelet[2558]: I1112 17:37:48.563197 2558 kubelet_node_status.go:73] 
"Attempting to register node" node="localhost" Nov 12 17:37:48.568327 kubelet[2558]: I1112 17:37:48.567987 2558 topology_manager.go:215] "Topology Admit Handler" podUID="f817ee9e0f82f9dd80dc13037a28db17" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 17:37:48.568327 kubelet[2558]: I1112 17:37:48.568217 2558 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 17:37:48.569576 kubelet[2558]: I1112 17:37:48.569103 2558 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 17:37:48.570211 kubelet[2558]: I1112 17:37:48.570190 2558 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 17:37:48.570262 kubelet[2558]: I1112 17:37:48.570257 2558 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 17:37:48.580113 kubelet[2558]: E1112 17:37:48.579460 2558 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 12 17:37:48.659812 kubelet[2558]: I1112 17:37:48.659680 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:37:48.659812 kubelet[2558]: I1112 17:37:48.659764 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:37:48.659812 kubelet[2558]: I1112 17:37:48.659791 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:37:48.659812 kubelet[2558]: I1112 17:37:48.659817 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f817ee9e0f82f9dd80dc13037a28db17-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f817ee9e0f82f9dd80dc13037a28db17\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:37:48.660116 kubelet[2558]: I1112 17:37:48.659836 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:37:48.660116 kubelet[2558]: I1112 17:37:48.659855 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:37:48.660116 kubelet[2558]: I1112 17:37:48.659876 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 17:37:48.660116 kubelet[2558]: I1112 17:37:48.659894 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f817ee9e0f82f9dd80dc13037a28db17-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f817ee9e0f82f9dd80dc13037a28db17\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:37:48.660116 kubelet[2558]: I1112 17:37:48.659913 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f817ee9e0f82f9dd80dc13037a28db17-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f817ee9e0f82f9dd80dc13037a28db17\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:37:48.878890 kubelet[2558]: E1112 17:37:48.878805 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:48.880272 kubelet[2558]: E1112 17:37:48.879238 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:48.881208 kubelet[2558]: E1112 17:37:48.881154 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:49.450462 kubelet[2558]: I1112 17:37:49.450211 2558 apiserver.go:52] "Watching apiserver" Nov 12 17:37:49.460129 kubelet[2558]: I1112 17:37:49.460069 2558 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 17:37:49.502026 kubelet[2558]: E1112 17:37:49.501810 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:49.502026 kubelet[2558]: E1112 17:37:49.501955 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:49.515593 kubelet[2558]: E1112 17:37:49.515215 2558 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 17:37:49.515729 kubelet[2558]: E1112 17:37:49.515698 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:49.526879 kubelet[2558]: I1112 17:37:49.526745 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.526702401 podStartE2EDuration="3.526702401s" podCreationTimestamp="2024-11-12 17:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:37:49.525728998 +0000 UTC m=+1.132804527" watchObservedRunningTime="2024-11-12 17:37:49.526702401 +0000 UTC m=+1.133777930" Nov 12 17:37:49.533449 kubelet[2558]: I1112 17:37:49.533264 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.533232941 podStartE2EDuration="1.533232941s" podCreationTimestamp="2024-11-12 17:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:37:49.532359259 +0000 UTC m=+1.139434788" watchObservedRunningTime="2024-11-12 17:37:49.533232941 +0000 UTC m=+1.140308430" Nov 12 17:37:50.503527 kubelet[2558]: E1112 17:37:50.503493 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:51.505081 kubelet[2558]: E1112 17:37:51.504734 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:51.531397 kubelet[2558]: E1112 17:37:51.531136 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:52.731802 sudo[1625]: pam_unix(sudo:session): session closed for user root Nov 12 17:37:52.734218 sshd[1622]: pam_unix(sshd:session): session closed for user core Nov 12 17:37:52.737543 systemd[1]: sshd@6-10.0.0.11:22-10.0.0.1:57762.service: Deactivated successfully. Nov 12 17:37:52.740711 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 17:37:52.740871 systemd[1]: session-7.scope: Consumed 7.426s CPU time, 187.4M memory peak, 0B memory swap peak. Nov 12 17:37:52.742055 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit. Nov 12 17:37:52.743094 systemd-logind[1426]: Removed session 7. 
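
Aside: the pod_startup_latency_tracker entries above are self-consistent: with the pull timestamps zeroed (static pods, no image pull), podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp, e.g. 17:37:49.526702401 − 17:37:46 = 3.526702401s for kube-controller-manager. The arithmetic, using Go's reference-time layout for these log timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Layout matching timestamps like "2024-11-12 17:37:46 +0000 UTC".
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2024-11-12 17:37:46 +0000 UTC")
        observed := mustParse("2024-11-12 17:37:49.526702401 +0000 UTC")
        fmt.Println("podStartSLOduration:", observed.Sub(created)) // 3.526702401s
    }
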
Nov 12 17:37:54.279998 kubelet[2558]: E1112 17:37:54.279631 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:54.308443 kubelet[2558]: I1112 17:37:54.308338 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.308301952 podStartE2EDuration="6.308301952s" podCreationTimestamp="2024-11-12 17:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:37:49.538951239 +0000 UTC m=+1.146026768" watchObservedRunningTime="2024-11-12 17:37:54.308301952 +0000 UTC m=+5.915377481" Nov 12 17:37:54.515705 kubelet[2558]: E1112 17:37:54.515452 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:37:59.725266 kubelet[2558]: E1112 17:37:59.725230 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:00.525901 kubelet[2558]: E1112 17:38:00.525868 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:01.538356 kubelet[2558]: E1112 17:38:01.538306 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:01.642699 update_engine[1433]: I20241112 17:38:01.642083 1433 update_attempter.cc:509] Updating boot flags... Nov 12 17:38:01.674419 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2655) Nov 12 17:38:01.714030 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2657) Nov 12 17:38:03.130831 kubelet[2558]: I1112 17:38:03.130786 2558 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 17:38:03.140010 containerd[1442]: time="2024-11-12T17:38:03.139859090Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 17:38:03.140294 kubelet[2558]: I1112 17:38:03.140097 2558 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 17:38:04.067265 kubelet[2558]: I1112 17:38:04.067228 2558 topology_manager.go:215] "Topology Admit Handler" podUID="5227a058-5a9c-4ec9-8d9a-689d0071fed3" podNamespace="kube-system" podName="kube-proxy-b2qc2" Nov 12 17:38:04.078011 systemd[1]: Created slice kubepods-besteffort-pod5227a058_5a9c_4ec9_8d9a_689d0071fed3.slice - libcontainer container kubepods-besteffort-pod5227a058_5a9c_4ec9_8d9a_689d0071fed3.slice. Nov 12 17:38:04.184199 kubelet[2558]: I1112 17:38:04.184159 2558 topology_manager.go:215] "Topology Admit Handler" podUID="ca1b6350-be04-41ee-8f73-aeee27c58f98" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-vsqtm" Nov 12 17:38:04.191275 systemd[1]: Created slice kubepods-besteffort-podca1b6350_be04_41ee_8f73_aeee27c58f98.slice - libcontainer container kubepods-besteffort-podca1b6350_be04_41ee_8f73_aeee27c58f98.slice. 
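
Aside: the kubepods-besteffort-pod....slice units created above encode the pod's QoS class and UID, with the UID's dashes escaped to underscores to form a valid systemd unit name. A sketch of that translation (illustrative; the real mapping lives in the kubelet's cgroup manager):

    package main

    import (
        "fmt"
        "strings"
    )

    func sliceForPod(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // Matches the unit created for kube-proxy-b2qc2 in this log.
        fmt.Println(sliceForPod("besteffort", "5227a058-5a9c-4ec9-8d9a-689d0071fed3"))
    }
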
Nov 12 17:38:04.260041 kubelet[2558]: I1112 17:38:04.259996 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5227a058-5a9c-4ec9-8d9a-689d0071fed3-lib-modules\") pod \"kube-proxy-b2qc2\" (UID: \"5227a058-5a9c-4ec9-8d9a-689d0071fed3\") " pod="kube-system/kube-proxy-b2qc2" Nov 12 17:38:04.260041 kubelet[2558]: I1112 17:38:04.260044 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5227a058-5a9c-4ec9-8d9a-689d0071fed3-kube-proxy\") pod \"kube-proxy-b2qc2\" (UID: \"5227a058-5a9c-4ec9-8d9a-689d0071fed3\") " pod="kube-system/kube-proxy-b2qc2" Nov 12 17:38:04.260188 kubelet[2558]: I1112 17:38:04.260069 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5227a058-5a9c-4ec9-8d9a-689d0071fed3-xtables-lock\") pod \"kube-proxy-b2qc2\" (UID: \"5227a058-5a9c-4ec9-8d9a-689d0071fed3\") " pod="kube-system/kube-proxy-b2qc2" Nov 12 17:38:04.260188 kubelet[2558]: I1112 17:38:04.260096 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crqx4\" (UniqueName: \"kubernetes.io/projected/5227a058-5a9c-4ec9-8d9a-689d0071fed3-kube-api-access-crqx4\") pod \"kube-proxy-b2qc2\" (UID: \"5227a058-5a9c-4ec9-8d9a-689d0071fed3\") " pod="kube-system/kube-proxy-b2qc2" Nov 12 17:38:04.361333 kubelet[2558]: I1112 17:38:04.361110 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ca1b6350-be04-41ee-8f73-aeee27c58f98-var-lib-calico\") pod \"tigera-operator-56b74f76df-vsqtm\" (UID: \"ca1b6350-be04-41ee-8f73-aeee27c58f98\") " pod="tigera-operator/tigera-operator-56b74f76df-vsqtm" Nov 12 17:38:04.361333 kubelet[2558]: I1112 17:38:04.361193 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sh6p\" (UniqueName: \"kubernetes.io/projected/ca1b6350-be04-41ee-8f73-aeee27c58f98-kube-api-access-4sh6p\") pod \"tigera-operator-56b74f76df-vsqtm\" (UID: \"ca1b6350-be04-41ee-8f73-aeee27c58f98\") " pod="tigera-operator/tigera-operator-56b74f76df-vsqtm" Nov 12 17:38:04.498483 containerd[1442]: time="2024-11-12T17:38:04.498410823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-vsqtm,Uid:ca1b6350-be04-41ee-8f73-aeee27c58f98,Namespace:tigera-operator,Attempt:0,}" Nov 12 17:38:04.519640 containerd[1442]: time="2024-11-12T17:38:04.519486248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:38:04.519640 containerd[1442]: time="2024-11-12T17:38:04.519556848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:38:04.519640 containerd[1442]: time="2024-11-12T17:38:04.519570688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:04.519813 containerd[1442]: time="2024-11-12T17:38:04.519666488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:04.541145 systemd[1]: Started cri-containerd-3c31b001ee2ad5433051a410b127cffbf5f5b1b25768b01892934e9220b97c21.scope - libcontainer container 3c31b001ee2ad5433051a410b127cffbf5f5b1b25768b01892934e9220b97c21. Nov 12 17:38:04.567688 containerd[1442]: time="2024-11-12T17:38:04.567641864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-vsqtm,Uid:ca1b6350-be04-41ee-8f73-aeee27c58f98,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3c31b001ee2ad5433051a410b127cffbf5f5b1b25768b01892934e9220b97c21\"" Nov 12 17:38:04.575629 containerd[1442]: time="2024-11-12T17:38:04.575590953Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 17:38:04.685472 kubelet[2558]: E1112 17:38:04.685439 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:04.686272 containerd[1442]: time="2024-11-12T17:38:04.685902122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2qc2,Uid:5227a058-5a9c-4ec9-8d9a-689d0071fed3,Namespace:kube-system,Attempt:0,}" Nov 12 17:38:04.705558 containerd[1442]: time="2024-11-12T17:38:04.705464425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:38:04.708009 containerd[1442]: time="2024-11-12T17:38:04.705537905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:38:04.708009 containerd[1442]: time="2024-11-12T17:38:04.705549865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:04.708277 containerd[1442]: time="2024-11-12T17:38:04.708234548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:04.726154 systemd[1]: Started cri-containerd-6bbedc1ca0da10840f54dadf9a7a096c881dfaec792673bc0c7c704c54dbb7d9.scope - libcontainer container 6bbedc1ca0da10840f54dadf9a7a096c881dfaec792673bc0c7c704c54dbb7d9. 
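
Aside: the reconciler_common UniqueName values in the kube-proxy volume lines follow a <plugin>/<podUID>-<volumeName> pattern across all three plugin kinds seen here (host-path, configmap, projected). A small sketch of that formatting, for illustration only:

    package main

    import "fmt"

    func uniqueVolumeName(plugin, podUID, volume string) string {
        return fmt.Sprintf("kubernetes.io/%s/%s-%s", plugin, podUID, volume)
    }

    func main() {
        // Reproduces "kubernetes.io/configmap/5227a058-...-kube-proxy" above.
        fmt.Println(uniqueVolumeName("configmap", "5227a058-5a9c-4ec9-8d9a-689d0071fed3", "kube-proxy"))
    }
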
Nov 12 17:38:04.760854 containerd[1442]: time="2024-11-12T17:38:04.760789890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2qc2,Uid:5227a058-5a9c-4ec9-8d9a-689d0071fed3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bbedc1ca0da10840f54dadf9a7a096c881dfaec792673bc0c7c704c54dbb7d9\"" Nov 12 17:38:04.761612 kubelet[2558]: E1112 17:38:04.761572 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:04.763694 containerd[1442]: time="2024-11-12T17:38:04.763586773Z" level=info msg="CreateContainer within sandbox \"6bbedc1ca0da10840f54dadf9a7a096c881dfaec792673bc0c7c704c54dbb7d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 17:38:04.783881 containerd[1442]: time="2024-11-12T17:38:04.783823636Z" level=info msg="CreateContainer within sandbox \"6bbedc1ca0da10840f54dadf9a7a096c881dfaec792673bc0c7c704c54dbb7d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"57b395a6e4db8cb2eb0f0fb70756c6a4ab14be49f7e4a831e9707820ba0f0e33\"" Nov 12 17:38:04.787233 containerd[1442]: time="2024-11-12T17:38:04.787200360Z" level=info msg="StartContainer for \"57b395a6e4db8cb2eb0f0fb70756c6a4ab14be49f7e4a831e9707820ba0f0e33\"" Nov 12 17:38:04.814157 systemd[1]: Started cri-containerd-57b395a6e4db8cb2eb0f0fb70756c6a4ab14be49f7e4a831e9707820ba0f0e33.scope - libcontainer container 57b395a6e4db8cb2eb0f0fb70756c6a4ab14be49f7e4a831e9707820ba0f0e33. Nov 12 17:38:04.845427 containerd[1442]: time="2024-11-12T17:38:04.845373628Z" level=info msg="StartContainer for \"57b395a6e4db8cb2eb0f0fb70756c6a4ab14be49f7e4a831e9707820ba0f0e33\" returns successfully" Nov 12 17:38:05.536425 kubelet[2558]: E1112 17:38:05.535578 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:07.254303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2251069490.mount: Deactivated successfully. 
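
Aside: kube-proxy's startup above walks the standard CRI sequence: RunPodSandbox returns a sandbox ID, CreateContainer runs within that sandbox, and StartContainer completes it. A minimal sketch against a stand-in interface (the real API is gRPC, k8s.io/cri-api, with much richer request types):

    package main

    import "fmt"

    // runtimeService is a stand-in for the CRI runtime service.
    type runtimeService interface {
        RunPodSandbox(podName string) (sandboxID string, err error)
        CreateContainer(sandboxID, ctrName string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    type fakeRuntime struct{}

    func (fakeRuntime) RunPodSandbox(p string) (string, error)       { return "sb-" + p, nil }
    func (fakeRuntime) CreateContainer(sb, c string) (string, error) { return "ctr-" + c, nil }
    func (fakeRuntime) StartContainer(id string) error               { fmt.Println("started", id); return nil }

    func startPod(r runtimeService, pod, ctr string) error {
        sb, err := r.RunPodSandbox(pod)
        if err != nil {
            return fmt.Errorf("RunPodSandbox: %w", err)
        }
        id, err := r.CreateContainer(sb, ctr)
        if err != nil {
            return fmt.Errorf("CreateContainer: %w", err)
        }
        return r.StartContainer(id)
    }

    func main() {
        if err := startPod(fakeRuntime{}, "kube-proxy-b2qc2", "kube-proxy"); err != nil {
            panic(err)
        }
    }
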
Nov 12 17:38:07.510541 containerd[1442]: time="2024-11-12T17:38:07.510421380Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:07.511126 containerd[1442]: time="2024-11-12T17:38:07.511090461Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=19123625" Nov 12 17:38:07.511771 containerd[1442]: time="2024-11-12T17:38:07.511742862Z" level=info msg="ImageCreate event name:\"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:07.513931 containerd[1442]: time="2024-11-12T17:38:07.513856864Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:07.514871 containerd[1442]: time="2024-11-12T17:38:07.514597784Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"19117824\" in 2.938967511s" Nov 12 17:38:07.514871 containerd[1442]: time="2024-11-12T17:38:07.514639064Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\"" Nov 12 17:38:07.521200 containerd[1442]: time="2024-11-12T17:38:07.521079551Z" level=info msg="CreateContainer within sandbox \"3c31b001ee2ad5433051a410b127cffbf5f5b1b25768b01892934e9220b97c21\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 17:38:07.532166 containerd[1442]: time="2024-11-12T17:38:07.532105481Z" level=info msg="CreateContainer within sandbox \"3c31b001ee2ad5433051a410b127cffbf5f5b1b25768b01892934e9220b97c21\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5c3b0328c0dfb7decef2fdc0fbe29e33448e28c24faf1e293f1ff9749b30236d\"" Nov 12 17:38:07.534043 containerd[1442]: time="2024-11-12T17:38:07.533290682Z" level=info msg="StartContainer for \"5c3b0328c0dfb7decef2fdc0fbe29e33448e28c24faf1e293f1ff9749b30236d\"" Nov 12 17:38:07.563198 systemd[1]: Started cri-containerd-5c3b0328c0dfb7decef2fdc0fbe29e33448e28c24faf1e293f1ff9749b30236d.scope - libcontainer container 5c3b0328c0dfb7decef2fdc0fbe29e33448e28c24faf1e293f1ff9749b30236d. 
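
Aside: the tigera-operator pull above reports an image of size 19117824 bytes fetched in 2.938967511s, roughly 6.5 MB/s (the nearby "bytes read=19123625" counter is measured slightly differently and need not match the reported size exactly). The arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const sizeBytes = 19117824               // size reported by containerd
        pullTime := 2938967511 * time.Nanosecond // "in 2.938967511s"
        fmt.Printf("≈ %.1f MB/s\n", float64(sizeBytes)/pullTime.Seconds()/1e6)
    }
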
Nov 12 17:38:07.588586 containerd[1442]: time="2024-11-12T17:38:07.588525735Z" level=info msg="StartContainer for \"5c3b0328c0dfb7decef2fdc0fbe29e33448e28c24faf1e293f1ff9749b30236d\" returns successfully" Nov 12 17:38:08.562531 kubelet[2558]: I1112 17:38:08.562225 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-b2qc2" podStartSLOduration=4.562182598 podStartE2EDuration="4.562182598s" podCreationTimestamp="2024-11-12 17:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:38:05.550346811 +0000 UTC m=+17.157422340" watchObservedRunningTime="2024-11-12 17:38:08.562182598 +0000 UTC m=+20.169258127" Nov 12 17:38:11.200732 kubelet[2558]: I1112 17:38:11.200631 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-vsqtm" podStartSLOduration=4.258502626 podStartE2EDuration="7.20058062s" podCreationTimestamp="2024-11-12 17:38:04 +0000 UTC" firstStartedPulling="2024-11-12 17:38:04.574182352 +0000 UTC m=+16.181257881" lastFinishedPulling="2024-11-12 17:38:07.516260346 +0000 UTC m=+19.123335875" observedRunningTime="2024-11-12 17:38:08.562661999 +0000 UTC m=+20.169737568" watchObservedRunningTime="2024-11-12 17:38:11.20058062 +0000 UTC m=+22.807656149" Nov 12 17:38:11.210518 kubelet[2558]: I1112 17:38:11.210466 2558 topology_manager.go:215] "Topology Admit Handler" podUID="7ca2265e-fca6-429b-a189-dd929cb91b47" podNamespace="calico-system" podName="calico-typha-d6dd66b9b-qhjxc" Nov 12 17:38:11.238443 systemd[1]: Created slice kubepods-besteffort-pod7ca2265e_fca6_429b_a189_dd929cb91b47.slice - libcontainer container kubepods-besteffort-pod7ca2265e_fca6_429b_a189_dd929cb91b47.slice. Nov 12 17:38:11.271545 kubelet[2558]: I1112 17:38:11.271006 2558 topology_manager.go:215] "Topology Admit Handler" podUID="21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a" podNamespace="calico-system" podName="calico-node-djgw6" Nov 12 17:38:11.280544 systemd[1]: Created slice kubepods-besteffort-pod21f08d34_9ce2_4b9b_97f4_e0eaeab05d7a.slice - libcontainer container kubepods-besteffort-pod21f08d34_9ce2_4b9b_97f4_e0eaeab05d7a.slice. 
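
Aside: the tigera-operator tracker entry above exposes both startup figures: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (7.20058062s), and podStartSLOduration subtracts the image-pull window, lastFinishedPulling − firstStartedPulling = 2.942077994s, leaving 4.258502626s. A sketch of that decomposition:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2024-11-12 17:38:04 +0000 UTC")
        observed := mustParse("2024-11-12 17:38:11.20058062 +0000 UTC")
        pullStart := mustParse("2024-11-12 17:38:04.574182352 +0000 UTC")
        pullEnd := mustParse("2024-11-12 17:38:07.516260346 +0000 UTC")

        e2e := observed.Sub(created)
        slo := e2e - pullEnd.Sub(pullStart)
        fmt.Println("E2E:", e2e, "SLO:", slo) // E2E: 7.20058062s SLO: 4.258502626s
    }
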
Nov 12 17:38:11.318020 kubelet[2558]: I1112 17:38:11.317967 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvqf9\" (UniqueName: \"kubernetes.io/projected/7ca2265e-fca6-429b-a189-dd929cb91b47-kube-api-access-gvqf9\") pod \"calico-typha-d6dd66b9b-qhjxc\" (UID: \"7ca2265e-fca6-429b-a189-dd929cb91b47\") " pod="calico-system/calico-typha-d6dd66b9b-qhjxc"
Nov 12 17:38:11.324882 kubelet[2558]: I1112 17:38:11.324714 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7ca2265e-fca6-429b-a189-dd929cb91b47-typha-certs\") pod \"calico-typha-d6dd66b9b-qhjxc\" (UID: \"7ca2265e-fca6-429b-a189-dd929cb91b47\") " pod="calico-system/calico-typha-d6dd66b9b-qhjxc"
Nov 12 17:38:11.324882 kubelet[2558]: I1112 17:38:11.324803 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ca2265e-fca6-429b-a189-dd929cb91b47-tigera-ca-bundle\") pod \"calico-typha-d6dd66b9b-qhjxc\" (UID: \"7ca2265e-fca6-429b-a189-dd929cb91b47\") " pod="calico-system/calico-typha-d6dd66b9b-qhjxc"
Nov 12 17:38:11.399297 kubelet[2558]: I1112 17:38:11.399241 2558 topology_manager.go:215] "Topology Admit Handler" podUID="307248dd-d398-4f72-8974-33e136137cb7" podNamespace="calico-system" podName="csi-node-driver-26lzj"
Nov 12 17:38:11.403603 kubelet[2558]: E1112 17:38:11.403558 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26lzj" podUID="307248dd-d398-4f72-8974-33e136137cb7"
Nov 12 17:38:11.425623 kubelet[2558]: I1112 17:38:11.425396 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a-var-run-calico\") pod \"calico-node-djgw6\" (UID: \"21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a\") " pod="calico-system/calico-node-djgw6"
Nov 12 17:38:11.425623 kubelet[2558]: I1112 17:38:11.425458 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a-policysync\") pod \"calico-node-djgw6\" (UID: \"21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a\") " pod="calico-system/calico-node-djgw6"
Nov 12 17:38:11.425623 kubelet[2558]: I1112 17:38:11.425484 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a-node-certs\") pod \"calico-node-djgw6\" (UID: \"21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a\") " pod="calico-system/calico-node-djgw6"
Nov 12 17:38:11.425623 kubelet[2558]: I1112 17:38:11.425520 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a-lib-modules\") pod \"calico-node-djgw6\" (UID: \"21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a\") " pod="calico-system/calico-node-djgw6"
Nov 12 17:38:11.425623 kubelet[2558]: I1112 17:38:11.425540 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a-flexvol-driver-host\") pod \"calico-node-djgw6\" (UID: \"21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a\") " pod="calico-system/calico-node-djgw6"
Nov 12 17:38:11.425845 kubelet[2558]: I1112 17:38:11.425562 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a-tigera-ca-bundle\") pod \"calico-node-djgw6\" (UID: \"21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a\") " pod="calico-system/calico-node-djgw6"
Nov 12 17:38:11.425845 kubelet[2558]: I1112 17:38:11.425582 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a-cni-log-dir\") pod \"calico-node-djgw6\" (UID: \"21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a\") " pod="calico-system/calico-node-djgw6"
Nov 12 17:38:11.425845 kubelet[2558]: I1112 17:38:11.425632 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a-cni-bin-dir\") pod \"calico-node-djgw6\" (UID: \"21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a\") " pod="calico-system/calico-node-djgw6"
Nov 12 17:38:11.425845 kubelet[2558]: I1112 17:38:11.425652 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a-xtables-lock\") pod \"calico-node-djgw6\" (UID: \"21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a\") " pod="calico-system/calico-node-djgw6"
Nov 12 17:38:11.425845 kubelet[2558]: I1112 17:38:11.425671 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a-var-lib-calico\") pod \"calico-node-djgw6\" (UID: \"21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a\") " pod="calico-system/calico-node-djgw6"
Nov 12 17:38:11.425946 kubelet[2558]: I1112 17:38:11.425689 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a-cni-net-dir\") pod \"calico-node-djgw6\" (UID: \"21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a\") " pod="calico-system/calico-node-djgw6"
Nov 12 17:38:11.425946 kubelet[2558]: I1112 17:38:11.425708 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdrps\" (UniqueName: \"kubernetes.io/projected/21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a-kube-api-access-bdrps\") pod \"calico-node-djgw6\" (UID: \"21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a\") " pod="calico-system/calico-node-djgw6"
Nov 12 17:38:11.527165 kubelet[2558]: I1112 17:38:11.526363 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/307248dd-d398-4f72-8974-33e136137cb7-varrun\") pod \"csi-node-driver-26lzj\" (UID: \"307248dd-d398-4f72-8974-33e136137cb7\") " pod="calico-system/csi-node-driver-26lzj"
Nov 12 17:38:11.527165 kubelet[2558]: I1112 17:38:11.526511 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/307248dd-d398-4f72-8974-33e136137cb7-socket-dir\") pod \"csi-node-driver-26lzj\" (UID: \"307248dd-d398-4f72-8974-33e136137cb7\") " pod="calico-system/csi-node-driver-26lzj"
\"kubernetes.io/host-path/307248dd-d398-4f72-8974-33e136137cb7-socket-dir\") pod \"csi-node-driver-26lzj\" (UID: \"307248dd-d398-4f72-8974-33e136137cb7\") " pod="calico-system/csi-node-driver-26lzj" Nov 12 17:38:11.527165 kubelet[2558]: I1112 17:38:11.526577 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/307248dd-d398-4f72-8974-33e136137cb7-registration-dir\") pod \"csi-node-driver-26lzj\" (UID: \"307248dd-d398-4f72-8974-33e136137cb7\") " pod="calico-system/csi-node-driver-26lzj" Nov 12 17:38:11.528144 kubelet[2558]: E1112 17:38:11.528106 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.528144 kubelet[2558]: W1112 17:38:11.528140 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.528230 kubelet[2558]: E1112 17:38:11.528171 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.528460 kubelet[2558]: E1112 17:38:11.528442 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.528460 kubelet[2558]: W1112 17:38:11.528459 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.528519 kubelet[2558]: E1112 17:38:11.528480 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.528677 kubelet[2558]: E1112 17:38:11.528665 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.528677 kubelet[2558]: W1112 17:38:11.528676 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.528815 kubelet[2558]: E1112 17:38:11.528699 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:11.528841 kubelet[2558]: I1112 17:38:11.528832 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/307248dd-d398-4f72-8974-33e136137cb7-kubelet-dir\") pod \"csi-node-driver-26lzj\" (UID: \"307248dd-d398-4f72-8974-33e136137cb7\") " pod="calico-system/csi-node-driver-26lzj" Nov 12 17:38:11.530058 kubelet[2558]: E1112 17:38:11.529244 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.530058 kubelet[2558]: W1112 17:38:11.529261 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.530058 kubelet[2558]: E1112 17:38:11.529280 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.530272 kubelet[2558]: E1112 17:38:11.530256 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.530272 kubelet[2558]: W1112 17:38:11.530293 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.530272 kubelet[2558]: E1112 17:38:11.530316 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.530615 kubelet[2558]: E1112 17:38:11.530603 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.530678 kubelet[2558]: W1112 17:38:11.530666 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.530814 kubelet[2558]: E1112 17:38:11.530763 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.531083 kubelet[2558]: E1112 17:38:11.531028 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.531083 kubelet[2558]: W1112 17:38:11.531040 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.531283 kubelet[2558]: E1112 17:38:11.531191 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:11.531423 kubelet[2558]: E1112 17:38:11.531382 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.531423 kubelet[2558]: W1112 17:38:11.531403 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.531664 kubelet[2558]: E1112 17:38:11.531547 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.531787 kubelet[2558]: E1112 17:38:11.531775 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.531855 kubelet[2558]: W1112 17:38:11.531817 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.531958 kubelet[2558]: E1112 17:38:11.531903 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.532332 kubelet[2558]: E1112 17:38:11.532195 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.532332 kubelet[2558]: W1112 17:38:11.532213 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.532332 kubelet[2558]: E1112 17:38:11.532232 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.535458 kubelet[2558]: E1112 17:38:11.535438 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.535570 kubelet[2558]: W1112 17:38:11.535541 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.535669 kubelet[2558]: E1112 17:38:11.535616 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.535928 kubelet[2558]: E1112 17:38:11.535915 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.536087 kubelet[2558]: W1112 17:38:11.536006 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.536087 kubelet[2558]: E1112 17:38:11.536034 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:11.536524 kubelet[2558]: E1112 17:38:11.536492 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.536524 kubelet[2558]: W1112 17:38:11.536506 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.536701 kubelet[2558]: E1112 17:38:11.536621 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.536915 kubelet[2558]: E1112 17:38:11.536901 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.536993 kubelet[2558]: W1112 17:38:11.536961 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.537051 kubelet[2558]: E1112 17:38:11.537042 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.537293 kubelet[2558]: E1112 17:38:11.537280 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.537494 kubelet[2558]: W1112 17:38:11.537355 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.537494 kubelet[2558]: E1112 17:38:11.537376 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.537622 kubelet[2558]: E1112 17:38:11.537610 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.537779 kubelet[2558]: W1112 17:38:11.537767 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.537875 kubelet[2558]: E1112 17:38:11.537866 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.538282 kubelet[2558]: E1112 17:38:11.538229 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.538282 kubelet[2558]: W1112 17:38:11.538242 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.538402 kubelet[2558]: E1112 17:38:11.538370 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:11.538657 kubelet[2558]: E1112 17:38:11.538606 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.538657 kubelet[2558]: W1112 17:38:11.538618 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.538773 kubelet[2558]: E1112 17:38:11.538762 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.538992 kubelet[2558]: E1112 17:38:11.538950 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.538992 kubelet[2558]: W1112 17:38:11.538959 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.539211 kubelet[2558]: E1112 17:38:11.539161 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.539301 kubelet[2558]: E1112 17:38:11.539280 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.539301 kubelet[2558]: W1112 17:38:11.539290 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.539416 kubelet[2558]: E1112 17:38:11.539361 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.539701 kubelet[2558]: E1112 17:38:11.539688 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.539811 kubelet[2558]: W1112 17:38:11.539752 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.539885 kubelet[2558]: E1112 17:38:11.539874 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:11.539950 kubelet[2558]: I1112 17:38:11.539940 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9458c\" (UniqueName: \"kubernetes.io/projected/307248dd-d398-4f72-8974-33e136137cb7-kube-api-access-9458c\") pod \"csi-node-driver-26lzj\" (UID: \"307248dd-d398-4f72-8974-33e136137cb7\") " pod="calico-system/csi-node-driver-26lzj" Nov 12 17:38:11.540220 kubelet[2558]: E1112 17:38:11.540172 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.540220 kubelet[2558]: W1112 17:38:11.540183 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.540415 kubelet[2558]: E1112 17:38:11.540324 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.540528 kubelet[2558]: E1112 17:38:11.540517 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.540578 kubelet[2558]: W1112 17:38:11.540568 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.540702 kubelet[2558]: E1112 17:38:11.540691 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.540936 kubelet[2558]: E1112 17:38:11.540864 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.540936 kubelet[2558]: W1112 17:38:11.540875 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.541249 kubelet[2558]: E1112 17:38:11.541137 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.541751 kubelet[2558]: E1112 17:38:11.541607 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.541751 kubelet[2558]: W1112 17:38:11.541622 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.541886 kubelet[2558]: E1112 17:38:11.541872 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:11.541971 kubelet[2558]: E1112 17:38:11.541962 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.542150 kubelet[2558]: W1112 17:38:11.542084 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.543616 kubelet[2558]: E1112 17:38:11.542267 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.543757 kubelet[2558]: E1112 17:38:11.543726 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.543757 kubelet[2558]: W1112 17:38:11.543745 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.543832 kubelet[2558]: E1112 17:38:11.543793 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.544645 kubelet[2558]: E1112 17:38:11.544587 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:11.544698 kubelet[2558]: E1112 17:38:11.544663 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.544698 kubelet[2558]: W1112 17:38:11.544671 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.544757 kubelet[2558]: E1112 17:38:11.544724 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.544870 kubelet[2558]: E1112 17:38:11.544840 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.544870 kubelet[2558]: W1112 17:38:11.544852 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.544940 kubelet[2558]: E1112 17:38:11.544895 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:11.546059 kubelet[2558]: E1112 17:38:11.546036 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.546059 kubelet[2558]: W1112 17:38:11.546053 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.546168 kubelet[2558]: E1112 17:38:11.546153 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.546295 kubelet[2558]: E1112 17:38:11.546279 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.546295 kubelet[2558]: W1112 17:38:11.546290 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.546451 kubelet[2558]: E1112 17:38:11.546352 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.546482 kubelet[2558]: E1112 17:38:11.546452 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.546482 kubelet[2558]: W1112 17:38:11.546459 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.546557 kubelet[2558]: E1112 17:38:11.546542 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.546617 kubelet[2558]: E1112 17:38:11.546607 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.546617 kubelet[2558]: W1112 17:38:11.546616 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.546995 kubelet[2558]: E1112 17:38:11.546690 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.546995 kubelet[2558]: E1112 17:38:11.546898 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.546995 kubelet[2558]: W1112 17:38:11.546908 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.546995 kubelet[2558]: E1112 17:38:11.546970 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:11.547367 containerd[1442]: time="2024-11-12T17:38:11.547333118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d6dd66b9b-qhjxc,Uid:7ca2265e-fca6-429b-a189-dd929cb91b47,Namespace:calico-system,Attempt:0,}" Nov 12 17:38:11.547860 kubelet[2558]: E1112 17:38:11.547525 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.547860 kubelet[2558]: W1112 17:38:11.547536 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.547860 kubelet[2558]: E1112 17:38:11.547648 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.547860 kubelet[2558]: E1112 17:38:11.547690 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.547860 kubelet[2558]: W1112 17:38:11.547695 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.547860 kubelet[2558]: E1112 17:38:11.547782 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.547860 kubelet[2558]: E1112 17:38:11.547830 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.547860 kubelet[2558]: W1112 17:38:11.547836 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.548084 kubelet[2558]: E1112 17:38:11.547940 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.548084 kubelet[2558]: E1112 17:38:11.548006 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.548084 kubelet[2558]: W1112 17:38:11.548013 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.548084 kubelet[2558]: E1112 17:38:11.548082 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:11.548173 kubelet[2558]: E1112 17:38:11.548141 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.548173 kubelet[2558]: W1112 17:38:11.548147 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.548349 kubelet[2558]: E1112 17:38:11.548228 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.548349 kubelet[2558]: E1112 17:38:11.548287 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.548349 kubelet[2558]: W1112 17:38:11.548293 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.548349 kubelet[2558]: E1112 17:38:11.548303 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.548686 kubelet[2558]: E1112 17:38:11.548463 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.548686 kubelet[2558]: W1112 17:38:11.548471 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.548686 kubelet[2558]: E1112 17:38:11.548487 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.550259 kubelet[2558]: E1112 17:38:11.549914 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.550259 kubelet[2558]: W1112 17:38:11.549930 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.550259 kubelet[2558]: E1112 17:38:11.549947 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.551956 kubelet[2558]: E1112 17:38:11.551829 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.552176 kubelet[2558]: W1112 17:38:11.552100 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.552303 kubelet[2558]: E1112 17:38:11.552267 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:11.565522 kubelet[2558]: E1112 17:38:11.565475 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:11.565522 kubelet[2558]: W1112 17:38:11.565496 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:11.565522 kubelet[2558]: E1112 17:38:11.565518 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:11.575664 containerd[1442]: time="2024-11-12T17:38:11.575361139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:38:11.575664 containerd[1442]: time="2024-11-12T17:38:11.575432139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:38:11.575664 containerd[1442]: time="2024-11-12T17:38:11.575444499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:11.575664 containerd[1442]: time="2024-11-12T17:38:11.575616739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:11.584599 kubelet[2558]: E1112 17:38:11.584547 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:11.586460 containerd[1442]: time="2024-11-12T17:38:11.586091427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-djgw6,Uid:21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a,Namespace:calico-system,Attempt:0,}" Nov 12 17:38:11.599184 systemd[1]: Started cri-containerd-27557b82cd940853338da5f71a42e23186744388530ac7657ea819ee771ec3f6.scope - libcontainer container 27557b82cd940853338da5f71a42e23186744388530ac7657ea819ee771ec3f6. 
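The repeated driver-call failures come from the kubelet's FlexVolume prober: it rescans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory that Calico pre-creates (note the flexvol-driver-host host-path volume mounted into calico-node above), and invokes the uds driver with init before calico-node's flexvol-driver init container has installed the binary. A minimal Python sketch of that probe sequence, illustrative only and not kubelet's actual Go code, with the driver path taken from the log:

```python
import json
import subprocess

# Path reported in the driver-call warnings above.
driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

try:
    # The prober calls `<driver> init` and expects a JSON status on stdout.
    out = subprocess.run([driver, "init"], capture_output=True, text=True).stdout
except FileNotFoundError:
    # Mirrors: executable file not found in $PATH, output: ""
    out = ""

try:
    json.loads(out)
except json.JSONDecodeError as e:
    # Empty output cannot be parsed, hence "unexpected end of JSON input".
    print(f"Failed to unmarshal output: {e}")
```

The failures are transient: once the pod2daemon-flexvol init container (pulled later in this log) copies the uds binary into place, a subsequent probe succeeds.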
Nov 12 17:38:11.633625 containerd[1442]: time="2024-11-12T17:38:11.633568262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d6dd66b9b-qhjxc,Uid:7ca2265e-fca6-429b-a189-dd929cb91b47,Namespace:calico-system,Attempt:0,} returns sandbox id \"27557b82cd940853338da5f71a42e23186744388530ac7657ea819ee771ec3f6\""
Nov 12 17:38:11.653443 containerd[1442]: time="2024-11-12T17:38:11.638624226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\""
Nov 12 17:38:11.653525 kubelet[2558]: E1112 17:38:11.634289 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:38:11.653525 kubelet[2558]: E1112 17:38:11.648148 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:38:11.653525 kubelet[2558]: W1112 17:38:11.648168 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:38:11.653525 kubelet[2558]: E1112 17:38:11.648188 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:38:11.695260 containerd[1442]: time="2024-11-12T17:38:11.695139108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:38:11.695260 containerd[1442]: time="2024-11-12T17:38:11.695206188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:38:11.695260 containerd[1442]: time="2024-11-12T17:38:11.695217908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:38:11.695653 containerd[1442]: time="2024-11-12T17:38:11.695307028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:38:11.713205 systemd[1]: Started cri-containerd-a46445e29c2b4af6844aa44fb69f440322c4af9cbbdadd4e71ff8b77ce9c1518.scope - libcontainer container a46445e29c2b4af6844aa44fb69f440322c4af9cbbdadd4e71ff8b77ce9c1518.
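The recurring dns.go warnings are a separate, benign issue: resolver libraries such as glibc honor at most three nameservers, so the kubelet trims the host resolv.conf to the first three entries when building pod DNS configuration. A rough sketch of that trimming; the fourth nameserver below is hypothetical, since the log only shows the three entries that survived:

```python
# glibc-style limit the kubelet enforces when composing pod resolv.conf.
MAX_NS = 3

resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""  # hypothetical host /etc/resolv.conf; the real file must list >3 servers

servers = [line.split()[1] for line in resolv_conf.splitlines()
           if line.startswith("nameserver")]
if len(servers) > MAX_NS:
    print(f"omitting {servers[MAX_NS:]}; "
          f"applied nameserver line: {' '.join(servers[:MAX_NS])}")
```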
Nov 12 17:38:11.737889 containerd[1442]: time="2024-11-12T17:38:11.737604619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-djgw6,Uid:21f08d34-9ce2-4b9b-97f4-e0eaeab05d7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a46445e29c2b4af6844aa44fb69f440322c4af9cbbdadd4e71ff8b77ce9c1518\""
Nov 12 17:38:11.743678 kubelet[2558]: E1112 17:38:11.740198 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:38:13.469019 kubelet[2558]: E1112 17:38:13.468963 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26lzj" podUID="307248dd-d398-4f72-8974-33e136137cb7"
Nov 12 17:38:13.683938 containerd[1442]: time="2024-11-12T17:38:13.683892637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:38:13.685918 containerd[1442]: time="2024-11-12T17:38:13.685873039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=27849584"
Nov 12 17:38:13.686733 containerd[1442]: time="2024-11-12T17:38:13.686696999Z" level=info msg="ImageCreate event name:\"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:38:13.688939 containerd[1442]: time="2024-11-12T17:38:13.688540560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:38:13.689311 containerd[1442]: time="2024-11-12T17:38:13.689268641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"29219212\" in 2.050586375s"
Nov 12 17:38:13.689311 containerd[1442]: time="2024-11-12T17:38:13.689306361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\""
Nov 12 17:38:13.689929 containerd[1442]: time="2024-11-12T17:38:13.689887321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\""
Nov 12 17:38:13.696371 containerd[1442]: time="2024-11-12T17:38:13.696203485Z" level=info msg="CreateContainer within sandbox \"27557b82cd940853338da5f71a42e23186744388530ac7657ea819ee771ec3f6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 12 17:38:13.708841 containerd[1442]: time="2024-11-12T17:38:13.708792374Z" level=info msg="CreateContainer within sandbox \"27557b82cd940853338da5f71a42e23186744388530ac7657ea819ee771ec3f6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"28c4b0a9294546a6675bdc1e0a8ce8aa743dd7f898b5069abff45bc559624b14\""
Nov 12 17:38:13.709771 containerd[1442]: time="2024-11-12T17:38:13.709738574Z" level=info msg="StartContainer for \"28c4b0a9294546a6675bdc1e0a8ce8aa743dd7f898b5069abff45bc559624b14\""
Nov 12 17:38:13.740181 systemd[1]: Started cri-containerd-28c4b0a9294546a6675bdc1e0a8ce8aa743dd7f898b5069abff45bc559624b14.scope - libcontainer container 28c4b0a9294546a6675bdc1e0a8ce8aa743dd7f898b5069abff45bc559624b14.
systemd[1]: Started cri-containerd-28c4b0a9294546a6675bdc1e0a8ce8aa743dd7f898b5069abff45bc559624b14.scope - libcontainer container 28c4b0a9294546a6675bdc1e0a8ce8aa743dd7f898b5069abff45bc559624b14. Nov 12 17:38:13.782004 containerd[1442]: time="2024-11-12T17:38:13.779567100Z" level=info msg="StartContainer for \"28c4b0a9294546a6675bdc1e0a8ce8aa743dd7f898b5069abff45bc559624b14\" returns successfully" Nov 12 17:38:14.566453 kubelet[2558]: E1112 17:38:14.566348 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:14.578848 kubelet[2558]: I1112 17:38:14.578789 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-d6dd66b9b-qhjxc" podStartSLOduration=1.5271360619999998 podStartE2EDuration="3.578748158s" podCreationTimestamp="2024-11-12 17:38:11 +0000 UTC" firstStartedPulling="2024-11-12 17:38:11.638009625 +0000 UTC m=+23.245085154" lastFinishedPulling="2024-11-12 17:38:13.689621721 +0000 UTC m=+25.296697250" observedRunningTime="2024-11-12 17:38:14.578053758 +0000 UTC m=+26.185129287" watchObservedRunningTime="2024-11-12 17:38:14.578748158 +0000 UTC m=+26.185823687" Nov 12 17:38:14.652026 kubelet[2558]: E1112 17:38:14.651571 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.652026 kubelet[2558]: W1112 17:38:14.651750 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.652026 kubelet[2558]: E1112 17:38:14.651781 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.652213 kubelet[2558]: E1112 17:38:14.652029 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.652213 kubelet[2558]: W1112 17:38:14.652105 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.652213 kubelet[2558]: E1112 17:38:14.652122 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.652571 kubelet[2558]: E1112 17:38:14.652552 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.652571 kubelet[2558]: W1112 17:38:14.652569 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.652694 kubelet[2558]: E1112 17:38:14.652604 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:14.652871 kubelet[2558]: E1112 17:38:14.652848 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.652871 kubelet[2558]: W1112 17:38:14.652861 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.652871 kubelet[2558]: E1112 17:38:14.652873 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.653245 kubelet[2558]: E1112 17:38:14.653226 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.653245 kubelet[2558]: W1112 17:38:14.653243 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.653315 kubelet[2558]: E1112 17:38:14.653261 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.653444 kubelet[2558]: E1112 17:38:14.653431 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.653444 kubelet[2558]: W1112 17:38:14.653441 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.653492 kubelet[2558]: E1112 17:38:14.653455 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.653598 kubelet[2558]: E1112 17:38:14.653588 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.653622 kubelet[2558]: W1112 17:38:14.653598 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.653622 kubelet[2558]: E1112 17:38:14.653608 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.653745 kubelet[2558]: E1112 17:38:14.653735 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.653745 kubelet[2558]: W1112 17:38:14.653744 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.653790 kubelet[2558]: E1112 17:38:14.653753 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:14.653973 kubelet[2558]: E1112 17:38:14.653961 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.654007 kubelet[2558]: W1112 17:38:14.653973 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.654007 kubelet[2558]: E1112 17:38:14.653998 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.654156 kubelet[2558]: E1112 17:38:14.654146 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.654156 kubelet[2558]: W1112 17:38:14.654155 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.654202 kubelet[2558]: E1112 17:38:14.654165 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.654312 kubelet[2558]: E1112 17:38:14.654302 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.654336 kubelet[2558]: W1112 17:38:14.654314 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.654336 kubelet[2558]: E1112 17:38:14.654324 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.654479 kubelet[2558]: E1112 17:38:14.654469 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.654506 kubelet[2558]: W1112 17:38:14.654481 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.654506 kubelet[2558]: E1112 17:38:14.654491 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.654687 kubelet[2558]: E1112 17:38:14.654674 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.654711 kubelet[2558]: W1112 17:38:14.654688 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.654711 kubelet[2558]: E1112 17:38:14.654699 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:14.654864 kubelet[2558]: E1112 17:38:14.654854 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.654884 kubelet[2558]: W1112 17:38:14.654863 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.654884 kubelet[2558]: E1112 17:38:14.654873 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.655033 kubelet[2558]: E1112 17:38:14.655023 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.655033 kubelet[2558]: W1112 17:38:14.655033 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.655082 kubelet[2558]: E1112 17:38:14.655042 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.673639 kubelet[2558]: E1112 17:38:14.673605 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.673639 kubelet[2558]: W1112 17:38:14.673625 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.673729 kubelet[2558]: E1112 17:38:14.673647 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.673913 kubelet[2558]: E1112 17:38:14.673888 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.673913 kubelet[2558]: W1112 17:38:14.673900 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.673972 kubelet[2558]: E1112 17:38:14.673920 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.674370 kubelet[2558]: E1112 17:38:14.674347 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.674370 kubelet[2558]: W1112 17:38:14.674364 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.674432 kubelet[2558]: E1112 17:38:14.674403 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:14.674652 kubelet[2558]: E1112 17:38:14.674636 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.674652 kubelet[2558]: W1112 17:38:14.674649 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.674718 kubelet[2558]: E1112 17:38:14.674665 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.674965 kubelet[2558]: E1112 17:38:14.674952 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.675014 kubelet[2558]: W1112 17:38:14.675001 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.675042 kubelet[2558]: E1112 17:38:14.675027 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.675248 kubelet[2558]: E1112 17:38:14.675237 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.675248 kubelet[2558]: W1112 17:38:14.675247 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.675314 kubelet[2558]: E1112 17:38:14.675282 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.675435 kubelet[2558]: E1112 17:38:14.675424 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.675435 kubelet[2558]: W1112 17:38:14.675434 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.675506 kubelet[2558]: E1112 17:38:14.675461 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.675807 kubelet[2558]: E1112 17:38:14.675793 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.675807 kubelet[2558]: W1112 17:38:14.675806 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.675862 kubelet[2558]: E1112 17:38:14.675840 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:14.676062 kubelet[2558]: E1112 17:38:14.676049 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.676062 kubelet[2558]: W1112 17:38:14.676060 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.676130 kubelet[2558]: E1112 17:38:14.676076 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.676346 kubelet[2558]: E1112 17:38:14.676333 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.676397 kubelet[2558]: W1112 17:38:14.676346 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.676397 kubelet[2558]: E1112 17:38:14.676389 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.676582 kubelet[2558]: E1112 17:38:14.676571 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.676582 kubelet[2558]: W1112 17:38:14.676581 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.676640 kubelet[2558]: E1112 17:38:14.676593 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.676781 kubelet[2558]: E1112 17:38:14.676770 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.676781 kubelet[2558]: W1112 17:38:14.676781 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.676837 kubelet[2558]: E1112 17:38:14.676792 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.677265 kubelet[2558]: E1112 17:38:14.677251 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.677265 kubelet[2558]: W1112 17:38:14.677263 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.677416 kubelet[2558]: E1112 17:38:14.677365 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:14.677416 kubelet[2558]: E1112 17:38:14.677412 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.677487 kubelet[2558]: W1112 17:38:14.677420 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.677487 kubelet[2558]: E1112 17:38:14.677431 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.677570 kubelet[2558]: E1112 17:38:14.677561 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.677570 kubelet[2558]: W1112 17:38:14.677571 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.677642 kubelet[2558]: E1112 17:38:14.677580 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.677743 kubelet[2558]: E1112 17:38:14.677731 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.677743 kubelet[2558]: W1112 17:38:14.677741 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.677802 kubelet[2558]: E1112 17:38:14.677751 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.678059 kubelet[2558]: E1112 17:38:14.678048 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.678059 kubelet[2558]: W1112 17:38:14.678059 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.678105 kubelet[2558]: E1112 17:38:14.678071 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:38:14.678393 kubelet[2558]: E1112 17:38:14.678370 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:38:14.678393 kubelet[2558]: W1112 17:38:14.678393 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:38:14.678453 kubelet[2558]: E1112 17:38:14.678407 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:38:14.825373 containerd[1442]: time="2024-11-12T17:38:14.825182109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:14.825927 containerd[1442]: time="2024-11-12T17:38:14.825861749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5117816" Nov 12 17:38:14.826825 containerd[1442]: time="2024-11-12T17:38:14.826791310Z" level=info msg="ImageCreate event name:\"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:14.829294 containerd[1442]: time="2024-11-12T17:38:14.829251071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:14.830530 containerd[1442]: time="2024-11-12T17:38:14.830378192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6487412\" in 1.140453551s" Nov 12 17:38:14.830530 containerd[1442]: time="2024-11-12T17:38:14.830429312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\"" Nov 12 17:38:14.832668 containerd[1442]: time="2024-11-12T17:38:14.832507673Z" level=info msg="CreateContainer within sandbox \"a46445e29c2b4af6844aa44fb69f440322c4af9cbbdadd4e71ff8b77ce9c1518\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 17:38:14.845788 containerd[1442]: time="2024-11-12T17:38:14.845662321Z" level=info msg="CreateContainer within sandbox \"a46445e29c2b4af6844aa44fb69f440322c4af9cbbdadd4e71ff8b77ce9c1518\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"78b2c502c9f5fb8faf8d8c05a32dd3b9e7ecc588f1e0955399949943347ced82\"" Nov 12 17:38:14.846319 containerd[1442]: time="2024-11-12T17:38:14.846295002Z" level=info msg="StartContainer for \"78b2c502c9f5fb8faf8d8c05a32dd3b9e7ecc588f1e0955399949943347ced82\"" Nov 12 17:38:14.873176 systemd[1]: Started cri-containerd-78b2c502c9f5fb8faf8d8c05a32dd3b9e7ecc588f1e0955399949943347ced82.scope - libcontainer container 78b2c502c9f5fb8faf8d8c05a32dd3b9e7ecc588f1e0955399949943347ced82. Nov 12 17:38:14.895269 containerd[1442]: time="2024-11-12T17:38:14.895222152Z" level=info msg="StartContainer for \"78b2c502c9f5fb8faf8d8c05a32dd3b9e7ecc588f1e0955399949943347ced82\" returns successfully" Nov 12 17:38:14.917669 systemd[1]: cri-containerd-78b2c502c9f5fb8faf8d8c05a32dd3b9e7ecc588f1e0955399949943347ced82.scope: Deactivated successfully. 
Nov 12 17:38:15.007048 containerd[1442]: time="2024-11-12T17:38:14.994671533Z" level=info msg="shim disconnected" id=78b2c502c9f5fb8faf8d8c05a32dd3b9e7ecc588f1e0955399949943347ced82 namespace=k8s.io Nov 12 17:38:15.007048 containerd[1442]: time="2024-11-12T17:38:15.006860660Z" level=warning msg="cleaning up after shim disconnected" id=78b2c502c9f5fb8faf8d8c05a32dd3b9e7ecc588f1e0955399949943347ced82 namespace=k8s.io Nov 12 17:38:15.007048 containerd[1442]: time="2024-11-12T17:38:15.006877780Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:38:15.469444 kubelet[2558]: E1112 17:38:15.469073 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26lzj" podUID="307248dd-d398-4f72-8974-33e136137cb7" Nov 12 17:38:15.568835 kubelet[2558]: E1112 17:38:15.568391 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:15.569782 containerd[1442]: time="2024-11-12T17:38:15.569270583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 17:38:15.569860 kubelet[2558]: I1112 17:38:15.569408 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:38:15.570161 kubelet[2558]: E1112 17:38:15.570111 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:15.694063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78b2c502c9f5fb8faf8d8c05a32dd3b9e7ecc588f1e0955399949943347ced82-rootfs.mount: Deactivated successfully. 
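The recurring dns.go "Nameserver limits exceeded" warnings come from kubelet capping resolv.conf at three nameservers: the host's /etc/resolv.conf lists more than three, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied and the rest are omitted when pod DNS config is composed. A small Go sketch of that truncation, assuming a hypothetical host nameserver list; the function name is illustrative, not kubelet's:

package main

import "fmt"

// maxNameservers mirrors the Kubernetes limit of 3 resolv.conf nameservers.
const maxNameservers = 3

// applyNameserverLimit keeps the first three nameservers and reports
// whether any were omitted, as the dns.go warning above describes.
func applyNameserverLimit(ns []string) (applied []string, omitted bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Hypothetical host resolv.conf contents with one server too many.
	hostNS := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	applied, omitted := applyNameserverLimit(hostNS)
	if omitted {
		fmt.Printf("Nameserver limits were exceeded, applied: %v\n", applied)
	}
}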
Nov 12 17:38:17.468521 kubelet[2558]: E1112 17:38:17.468407 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26lzj" podUID="307248dd-d398-4f72-8974-33e136137cb7" Nov 12 17:38:17.982013 kubelet[2558]: I1112 17:38:17.981929 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:38:17.982658 kubelet[2558]: E1112 17:38:17.982641 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:18.577491 kubelet[2558]: E1112 17:38:18.577455 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:18.662892 containerd[1442]: time="2024-11-12T17:38:18.662832586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:18.665465 containerd[1442]: time="2024-11-12T17:38:18.665431147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=89700517" Nov 12 17:38:18.667994 containerd[1442]: time="2024-11-12T17:38:18.667950908Z" level=info msg="ImageCreate event name:\"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:18.672475 containerd[1442]: time="2024-11-12T17:38:18.672435431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:18.673122 containerd[1442]: time="2024-11-12T17:38:18.673084351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"91070153\" in 3.103775168s" Nov 12 17:38:18.673122 containerd[1442]: time="2024-11-12T17:38:18.673118711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\"" Nov 12 17:38:18.685750 containerd[1442]: time="2024-11-12T17:38:18.685688757Z" level=info msg="CreateContainer within sandbox \"a46445e29c2b4af6844aa44fb69f440322c4af9cbbdadd4e71ff8b77ce9c1518\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 17:38:18.709171 containerd[1442]: time="2024-11-12T17:38:18.709120288Z" level=info msg="CreateContainer within sandbox \"a46445e29c2b4af6844aa44fb69f440322c4af9cbbdadd4e71ff8b77ce9c1518\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"75bed27803e0816e8ddd55f3bd3cb6a7aceba629faeb7052bd9cf7617e5ab8fb\"" Nov 12 17:38:18.710040 containerd[1442]: time="2024-11-12T17:38:18.710010528Z" level=info msg="StartContainer for \"75bed27803e0816e8ddd55f3bd3cb6a7aceba629faeb7052bd9cf7617e5ab8fb\"" Nov 12 17:38:18.743205 systemd[1]: Started cri-containerd-75bed27803e0816e8ddd55f3bd3cb6a7aceba629faeb7052bd9cf7617e5ab8fb.scope - libcontainer container 
75bed27803e0816e8ddd55f3bd3cb6a7aceba629faeb7052bd9cf7617e5ab8fb. Nov 12 17:38:18.778415 containerd[1442]: time="2024-11-12T17:38:18.778360081Z" level=info msg="StartContainer for \"75bed27803e0816e8ddd55f3bd3cb6a7aceba629faeb7052bd9cf7617e5ab8fb\" returns successfully" Nov 12 17:38:19.437893 systemd[1]: cri-containerd-75bed27803e0816e8ddd55f3bd3cb6a7aceba629faeb7052bd9cf7617e5ab8fb.scope: Deactivated successfully. Nov 12 17:38:19.459651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75bed27803e0816e8ddd55f3bd3cb6a7aceba629faeb7052bd9cf7617e5ab8fb-rootfs.mount: Deactivated successfully. Nov 12 17:38:19.469002 kubelet[2558]: E1112 17:38:19.468943 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26lzj" podUID="307248dd-d398-4f72-8974-33e136137cb7" Nov 12 17:38:19.487048 kubelet[2558]: I1112 17:38:19.486349 2558 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 17:38:19.540322 kubelet[2558]: I1112 17:38:19.540270 2558 topology_manager.go:215] "Topology Admit Handler" podUID="19027f50-17e2-49b5-8e0e-161e0d0f74db" podNamespace="calico-apiserver" podName="calico-apiserver-7f4f97b7c-bg95b" Nov 12 17:38:19.541697 kubelet[2558]: I1112 17:38:19.541642 2558 topology_manager.go:215] "Topology Admit Handler" podUID="54c761a6-cb36-44cf-b192-819c3beafff3" podNamespace="calico-system" podName="calico-kube-controllers-bc9b56c48-rq9pz" Nov 12 17:38:19.543673 kubelet[2558]: I1112 17:38:19.543622 2558 topology_manager.go:215] "Topology Admit Handler" podUID="2ba7e0bf-58bb-4e6e-91ed-49866ce4c112" podNamespace="kube-system" podName="coredns-76f75df574-6lmj5" Nov 12 17:38:19.543845 kubelet[2558]: I1112 17:38:19.543778 2558 topology_manager.go:215] "Topology Admit Handler" podUID="8e960c16-fd3a-4a1e-b33c-95978141e8c2" podNamespace="kube-system" podName="coredns-76f75df574-t796q" Nov 12 17:38:19.544258 kubelet[2558]: I1112 17:38:19.543875 2558 topology_manager.go:215] "Topology Admit Handler" podUID="fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb" podNamespace="calico-apiserver" podName="calico-apiserver-7f4f97b7c-bdtjn" Nov 12 17:38:19.555299 systemd[1]: Created slice kubepods-besteffort-pod19027f50_17e2_49b5_8e0e_161e0d0f74db.slice - libcontainer container kubepods-besteffort-pod19027f50_17e2_49b5_8e0e_161e0d0f74db.slice. Nov 12 17:38:19.562145 systemd[1]: Created slice kubepods-besteffort-pod54c761a6_cb36_44cf_b192_819c3beafff3.slice - libcontainer container kubepods-besteffort-pod54c761a6_cb36_44cf_b192_819c3beafff3.slice. Nov 12 17:38:19.570822 systemd[1]: Created slice kubepods-besteffort-podfc25e9ca_d598_4ffd_89c6_df55cbfb8cfb.slice - libcontainer container kubepods-besteffort-podfc25e9ca_d598_4ffd_89c6_df55cbfb8cfb.slice. 
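The "cni plugin not initialized" sync errors above and the sandbox failures that follow share one root cause: calico-node is still starting (its install-cni container only just ran), so /var/lib/calico/nodename has not been written yet, and per the error text in the log the Calico CNI plugin stats that file on every sandbox add/delete to learn its node identity. A Go sketch of that readiness check, hedged as an illustration of the failure mode rather than Calico's actual source:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// nodenameFile is the path named in the sandbox errors below.
const nodenameFile = "/var/lib/calico/nodename"

// calicoNodename (illustrative) reproduces the failure mode: until
// calico/node runs and writes this file, the read fails and the plugin
// can only tell the runtime to check that calico/node is up.
func calicoNodename() (string, error) {
	b, err := os.ReadFile(nodenameFile)
	if errors.Is(err, fs.ErrNotExist) {
		return "", fmt.Errorf("stat %s: no such file or directory: "+
			"check that the calico/node container is running and has mounted /var/lib/calico/",
			nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := calicoNodename()
	if err != nil {
		fmt.Println(err) // what the CNI plugin surfaces to containerd/kubelet
		return
	}
	fmt.Println("calico node:", name)
}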
Nov 12 17:38:19.572543 containerd[1442]: time="2024-11-12T17:38:19.572413199Z" level=info msg="shim disconnected" id=75bed27803e0816e8ddd55f3bd3cb6a7aceba629faeb7052bd9cf7617e5ab8fb namespace=k8s.io Nov 12 17:38:19.572543 containerd[1442]: time="2024-11-12T17:38:19.572488999Z" level=warning msg="cleaning up after shim disconnected" id=75bed27803e0816e8ddd55f3bd3cb6a7aceba629faeb7052bd9cf7617e5ab8fb namespace=k8s.io Nov 12 17:38:19.572543 containerd[1442]: time="2024-11-12T17:38:19.572498559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:38:19.583888 kubelet[2558]: E1112 17:38:19.581638 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:19.584191 systemd[1]: Created slice kubepods-burstable-pod8e960c16_fd3a_4a1e_b33c_95978141e8c2.slice - libcontainer container kubepods-burstable-pod8e960c16_fd3a_4a1e_b33c_95978141e8c2.slice. Nov 12 17:38:19.593895 systemd[1]: Created slice kubepods-burstable-pod2ba7e0bf_58bb_4e6e_91ed_49866ce4c112.slice - libcontainer container kubepods-burstable-pod2ba7e0bf_58bb_4e6e_91ed_49866ce4c112.slice. Nov 12 17:38:19.615409 kubelet[2558]: I1112 17:38:19.615359 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb-calico-apiserver-certs\") pod \"calico-apiserver-7f4f97b7c-bdtjn\" (UID: \"fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb\") " pod="calico-apiserver/calico-apiserver-7f4f97b7c-bdtjn" Nov 12 17:38:19.615409 kubelet[2558]: I1112 17:38:19.615416 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vzf5\" (UniqueName: \"kubernetes.io/projected/fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb-kube-api-access-2vzf5\") pod \"calico-apiserver-7f4f97b7c-bdtjn\" (UID: \"fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb\") " pod="calico-apiserver/calico-apiserver-7f4f97b7c-bdtjn" Nov 12 17:38:19.615577 kubelet[2558]: I1112 17:38:19.615439 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54c761a6-cb36-44cf-b192-819c3beafff3-tigera-ca-bundle\") pod \"calico-kube-controllers-bc9b56c48-rq9pz\" (UID: \"54c761a6-cb36-44cf-b192-819c3beafff3\") " pod="calico-system/calico-kube-controllers-bc9b56c48-rq9pz" Nov 12 17:38:19.615577 kubelet[2558]: I1112 17:38:19.615481 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x7d6\" (UniqueName: \"kubernetes.io/projected/19027f50-17e2-49b5-8e0e-161e0d0f74db-kube-api-access-8x7d6\") pod \"calico-apiserver-7f4f97b7c-bg95b\" (UID: \"19027f50-17e2-49b5-8e0e-161e0d0f74db\") " pod="calico-apiserver/calico-apiserver-7f4f97b7c-bg95b" Nov 12 17:38:19.623575 kubelet[2558]: I1112 17:38:19.623533 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e960c16-fd3a-4a1e-b33c-95978141e8c2-config-volume\") pod \"coredns-76f75df574-t796q\" (UID: \"8e960c16-fd3a-4a1e-b33c-95978141e8c2\") " pod="kube-system/coredns-76f75df574-t796q" Nov 12 17:38:19.623575 kubelet[2558]: I1112 17:38:19.623581 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2ba7e0bf-58bb-4e6e-91ed-49866ce4c112-config-volume\") pod \"coredns-76f75df574-6lmj5\" (UID: \"2ba7e0bf-58bb-4e6e-91ed-49866ce4c112\") " pod="kube-system/coredns-76f75df574-6lmj5" Nov 12 17:38:19.623747 kubelet[2558]: I1112 17:38:19.623603 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58fz2\" (UniqueName: \"kubernetes.io/projected/8e960c16-fd3a-4a1e-b33c-95978141e8c2-kube-api-access-58fz2\") pod \"coredns-76f75df574-t796q\" (UID: \"8e960c16-fd3a-4a1e-b33c-95978141e8c2\") " pod="kube-system/coredns-76f75df574-t796q" Nov 12 17:38:19.623747 kubelet[2558]: I1112 17:38:19.623650 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpwb9\" (UniqueName: \"kubernetes.io/projected/2ba7e0bf-58bb-4e6e-91ed-49866ce4c112-kube-api-access-wpwb9\") pod \"coredns-76f75df574-6lmj5\" (UID: \"2ba7e0bf-58bb-4e6e-91ed-49866ce4c112\") " pod="kube-system/coredns-76f75df574-6lmj5" Nov 12 17:38:19.623747 kubelet[2558]: I1112 17:38:19.623697 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/19027f50-17e2-49b5-8e0e-161e0d0f74db-calico-apiserver-certs\") pod \"calico-apiserver-7f4f97b7c-bg95b\" (UID: \"19027f50-17e2-49b5-8e0e-161e0d0f74db\") " pod="calico-apiserver/calico-apiserver-7f4f97b7c-bg95b" Nov 12 17:38:19.623747 kubelet[2558]: I1112 17:38:19.623723 2558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7qth\" (UniqueName: \"kubernetes.io/projected/54c761a6-cb36-44cf-b192-819c3beafff3-kube-api-access-z7qth\") pod \"calico-kube-controllers-bc9b56c48-rq9pz\" (UID: \"54c761a6-cb36-44cf-b192-819c3beafff3\") " pod="calico-system/calico-kube-controllers-bc9b56c48-rq9pz" Nov 12 17:38:19.860628 containerd[1442]: time="2024-11-12T17:38:19.860512007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f97b7c-bg95b,Uid:19027f50-17e2-49b5-8e0e-161e0d0f74db,Namespace:calico-apiserver,Attempt:0,}" Nov 12 17:38:19.865816 containerd[1442]: time="2024-11-12T17:38:19.865774049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc9b56c48-rq9pz,Uid:54c761a6-cb36-44cf-b192-819c3beafff3,Namespace:calico-system,Attempt:0,}" Nov 12 17:38:19.878849 containerd[1442]: time="2024-11-12T17:38:19.878799175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f97b7c-bdtjn,Uid:fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb,Namespace:calico-apiserver,Attempt:0,}" Nov 12 17:38:19.891432 kubelet[2558]: E1112 17:38:19.891396 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:19.891921 containerd[1442]: time="2024-11-12T17:38:19.891876621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t796q,Uid:8e960c16-fd3a-4a1e-b33c-95978141e8c2,Namespace:kube-system,Attempt:0,}" Nov 12 17:38:19.901069 kubelet[2558]: E1112 17:38:19.900949 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:19.903286 containerd[1442]: time="2024-11-12T17:38:19.902967946Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-6lmj5,Uid:2ba7e0bf-58bb-4e6e-91ed-49866ce4c112,Namespace:kube-system,Attempt:0,}" Nov 12 17:38:20.319070 containerd[1442]: time="2024-11-12T17:38:20.319009961Z" level=error msg="Failed to destroy network for sandbox \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.320357 containerd[1442]: time="2024-11-12T17:38:20.320312122Z" level=error msg="encountered an error cleaning up failed sandbox \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.320650 containerd[1442]: time="2024-11-12T17:38:20.320536962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f97b7c-bg95b,Uid:19027f50-17e2-49b5-8e0e-161e0d0f74db,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.327758 kubelet[2558]: E1112 17:38:20.327690 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.327926 kubelet[2558]: E1112 17:38:20.327800 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f4f97b7c-bg95b" Nov 12 17:38:20.327926 kubelet[2558]: E1112 17:38:20.327828 2558 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f4f97b7c-bg95b" Nov 12 17:38:20.327926 kubelet[2558]: E1112 17:38:20.327889 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f4f97b7c-bg95b_calico-apiserver(19027f50-17e2-49b5-8e0e-161e0d0f74db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f4f97b7c-bg95b_calico-apiserver(19027f50-17e2-49b5-8e0e-161e0d0f74db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f4f97b7c-bg95b" podUID="19027f50-17e2-49b5-8e0e-161e0d0f74db" Nov 12 17:38:20.332536 containerd[1442]: time="2024-11-12T17:38:20.332198887Z" level=error msg="Failed to destroy network for sandbox \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.332665 containerd[1442]: time="2024-11-12T17:38:20.332550087Z" level=error msg="encountered an error cleaning up failed sandbox \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.332665 containerd[1442]: time="2024-11-12T17:38:20.332603287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f97b7c-bdtjn,Uid:fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.332887 kubelet[2558]: E1112 17:38:20.332851 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.332936 kubelet[2558]: E1112 17:38:20.332923 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f4f97b7c-bdtjn" Nov 12 17:38:20.332973 kubelet[2558]: E1112 17:38:20.332946 2558 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f4f97b7c-bdtjn" Nov 12 17:38:20.333083 kubelet[2558]: E1112 17:38:20.333065 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f4f97b7c-bdtjn_calico-apiserver(fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f4f97b7c-bdtjn_calico-apiserver(fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f4f97b7c-bdtjn" podUID="fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb" Nov 12 17:38:20.340734 containerd[1442]: time="2024-11-12T17:38:20.340680410Z" level=error msg="Failed to destroy network for sandbox \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.341650 containerd[1442]: time="2024-11-12T17:38:20.341610291Z" level=error msg="encountered an error cleaning up failed sandbox \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.341763 containerd[1442]: time="2024-11-12T17:38:20.341674091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t796q,Uid:8e960c16-fd3a-4a1e-b33c-95978141e8c2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.342721 kubelet[2558]: E1112 17:38:20.342644 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.343184 kubelet[2558]: E1112 17:38:20.343160 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-t796q" Nov 12 17:38:20.343297 kubelet[2558]: E1112 17:38:20.343225 2558 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-t796q" Nov 12 17:38:20.343331 kubelet[2558]: E1112 17:38:20.343318 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-t796q_kube-system(8e960c16-fd3a-4a1e-b33c-95978141e8c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-t796q_kube-system(8e960c16-fd3a-4a1e-b33c-95978141e8c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-t796q" podUID="8e960c16-fd3a-4a1e-b33c-95978141e8c2" Nov 12 17:38:20.345994 containerd[1442]: time="2024-11-12T17:38:20.345880293Z" level=error msg="Failed to destroy network for sandbox \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.346427 containerd[1442]: time="2024-11-12T17:38:20.346396013Z" level=error msg="encountered an error cleaning up failed sandbox \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.346615 containerd[1442]: time="2024-11-12T17:38:20.346529933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc9b56c48-rq9pz,Uid:54c761a6-cb36-44cf-b192-819c3beafff3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.346869 kubelet[2558]: E1112 17:38:20.346839 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.346924 kubelet[2558]: E1112 17:38:20.346895 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc9b56c48-rq9pz" Nov 12 17:38:20.346924 kubelet[2558]: E1112 17:38:20.346916 2558 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc9b56c48-rq9pz" Nov 12 17:38:20.347677 kubelet[2558]: E1112 17:38:20.346972 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-bc9b56c48-rq9pz_calico-system(54c761a6-cb36-44cf-b192-819c3beafff3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bc9b56c48-rq9pz_calico-system(54c761a6-cb36-44cf-b192-819c3beafff3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bc9b56c48-rq9pz" podUID="54c761a6-cb36-44cf-b192-819c3beafff3" Nov 12 17:38:20.355962 containerd[1442]: time="2024-11-12T17:38:20.355892497Z" level=error msg="Failed to destroy network for sandbox \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.356295 containerd[1442]: time="2024-11-12T17:38:20.356262137Z" level=error msg="encountered an error cleaning up failed sandbox \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.356345 containerd[1442]: time="2024-11-12T17:38:20.356324057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6lmj5,Uid:2ba7e0bf-58bb-4e6e-91ed-49866ce4c112,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.356657 kubelet[2558]: E1112 17:38:20.356626 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.356704 kubelet[2558]: E1112 17:38:20.356689 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-6lmj5" Nov 12 17:38:20.356727 kubelet[2558]: E1112 17:38:20.356711 2558 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-6lmj5" Nov 12 17:38:20.356787 kubelet[2558]: E1112 
17:38:20.356772 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-6lmj5_kube-system(2ba7e0bf-58bb-4e6e-91ed-49866ce4c112)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-6lmj5_kube-system(2ba7e0bf-58bb-4e6e-91ed-49866ce4c112)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-6lmj5" podUID="2ba7e0bf-58bb-4e6e-91ed-49866ce4c112" Nov 12 17:38:20.584934 kubelet[2558]: I1112 17:38:20.584210 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:20.585274 containerd[1442]: time="2024-11-12T17:38:20.585010272Z" level=info msg="StopPodSandbox for \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\"" Nov 12 17:38:20.585274 containerd[1442]: time="2024-11-12T17:38:20.585194112Z" level=info msg="Ensure that sandbox fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41 in task-service has been cleanup successfully" Nov 12 17:38:20.587132 kubelet[2558]: I1112 17:38:20.587089 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:20.588207 containerd[1442]: time="2024-11-12T17:38:20.588171153Z" level=info msg="StopPodSandbox for \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\"" Nov 12 17:38:20.588541 containerd[1442]: time="2024-11-12T17:38:20.588345553Z" level=info msg="Ensure that sandbox 30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1 in task-service has been cleanup successfully" Nov 12 17:38:20.589784 containerd[1442]: time="2024-11-12T17:38:20.589744754Z" level=info msg="StopPodSandbox for \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\"" Nov 12 17:38:20.589967 containerd[1442]: time="2024-11-12T17:38:20.589944194Z" level=info msg="Ensure that sandbox b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef in task-service has been cleanup successfully" Nov 12 17:38:20.590386 kubelet[2558]: I1112 17:38:20.589206 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:20.594036 kubelet[2558]: I1112 17:38:20.593630 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:20.596769 containerd[1442]: time="2024-11-12T17:38:20.595674116Z" level=info msg="StopPodSandbox for \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\"" Nov 12 17:38:20.596769 containerd[1442]: time="2024-11-12T17:38:20.595833037Z" level=info msg="Ensure that sandbox 800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f in task-service has been cleanup successfully" Nov 12 17:38:20.596909 kubelet[2558]: I1112 17:38:20.596687 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:20.597631 containerd[1442]: time="2024-11-12T17:38:20.597594357Z" 
level=info msg="StopPodSandbox for \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\"" Nov 12 17:38:20.600060 containerd[1442]: time="2024-11-12T17:38:20.598915038Z" level=info msg="Ensure that sandbox 0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88 in task-service has been cleanup successfully" Nov 12 17:38:20.601356 kubelet[2558]: E1112 17:38:20.600688 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:20.602049 containerd[1442]: time="2024-11-12T17:38:20.601771039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 17:38:20.639546 containerd[1442]: time="2024-11-12T17:38:20.639495535Z" level=error msg="StopPodSandbox for \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\" failed" error="failed to destroy network for sandbox \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.642647 containerd[1442]: time="2024-11-12T17:38:20.642588256Z" level=error msg="StopPodSandbox for \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\" failed" error="failed to destroy network for sandbox \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.643632 kubelet[2558]: E1112 17:38:20.643596 2558 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:20.643722 kubelet[2558]: E1112 17:38:20.643689 2558 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef"} Nov 12 17:38:20.643754 kubelet[2558]: E1112 17:38:20.643738 2558 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:38:20.643817 kubelet[2558]: E1112 17:38:20.643767 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f4f97b7c-bdtjn" podUID="fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb" Nov 12 17:38:20.644653 kubelet[2558]: E1112 17:38:20.644616 2558 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:20.644754 kubelet[2558]: E1112 17:38:20.644668 2558 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1"} Nov 12 17:38:20.644754 kubelet[2558]: E1112 17:38:20.644745 2558 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e960c16-fd3a-4a1e-b33c-95978141e8c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:38:20.644833 kubelet[2558]: E1112 17:38:20.644770 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e960c16-fd3a-4a1e-b33c-95978141e8c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-t796q" podUID="8e960c16-fd3a-4a1e-b33c-95978141e8c2" Nov 12 17:38:20.646606 containerd[1442]: time="2024-11-12T17:38:20.646482698Z" level=error msg="StopPodSandbox for \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\" failed" error="failed to destroy network for sandbox \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.646969 kubelet[2558]: E1112 17:38:20.646937 2558 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:20.647852 kubelet[2558]: E1112 17:38:20.647835 2558 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f"} Nov 12 17:38:20.647913 kubelet[2558]: E1112 17:38:20.647902 2558 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54c761a6-cb36-44cf-b192-819c3beafff3\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:38:20.648002 kubelet[2558]: E1112 17:38:20.647934 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54c761a6-cb36-44cf-b192-819c3beafff3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bc9b56c48-rq9pz" podUID="54c761a6-cb36-44cf-b192-819c3beafff3" Nov 12 17:38:20.651783 containerd[1442]: time="2024-11-12T17:38:20.651731980Z" level=error msg="StopPodSandbox for \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\" failed" error="failed to destroy network for sandbox \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.652031 kubelet[2558]: E1112 17:38:20.651991 2558 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:20.652031 kubelet[2558]: E1112 17:38:20.652033 2558 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41"} Nov 12 17:38:20.652108 kubelet[2558]: E1112 17:38:20.652075 2558 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ba7e0bf-58bb-4e6e-91ed-49866ce4c112\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:38:20.652108 kubelet[2558]: E1112 17:38:20.652105 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ba7e0bf-58bb-4e6e-91ed-49866ce4c112\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-6lmj5" podUID="2ba7e0bf-58bb-4e6e-91ed-49866ce4c112" Nov 12 17:38:20.658168 containerd[1442]: time="2024-11-12T17:38:20.658122462Z" 
level=error msg="StopPodSandbox for \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\" failed" error="failed to destroy network for sandbox \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:20.658552 kubelet[2558]: E1112 17:38:20.658520 2558 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:20.658622 kubelet[2558]: E1112 17:38:20.658571 2558 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88"} Nov 12 17:38:20.658622 kubelet[2558]: E1112 17:38:20.658616 2558 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19027f50-17e2-49b5-8e0e-161e0d0f74db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:38:20.658709 kubelet[2558]: E1112 17:38:20.658645 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19027f50-17e2-49b5-8e0e-161e0d0f74db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f4f97b7c-bg95b" podUID="19027f50-17e2-49b5-8e0e-161e0d0f74db" Nov 12 17:38:20.730234 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f-shm.mount: Deactivated successfully. Nov 12 17:38:21.472920 systemd[1]: Created slice kubepods-besteffort-pod307248dd_d398_4f72_8974_33e136137cb7.slice - libcontainer container kubepods-besteffort-pod307248dd_d398_4f72_8974_33e136137cb7.slice. 
Nov 12 17:38:21.475369 containerd[1442]: time="2024-11-12T17:38:21.475317270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26lzj,Uid:307248dd-d398-4f72-8974-33e136137cb7,Namespace:calico-system,Attempt:0,}" Nov 12 17:38:21.533594 containerd[1442]: time="2024-11-12T17:38:21.533444252Z" level=error msg="Failed to destroy network for sandbox \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:21.535799 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c-shm.mount: Deactivated successfully. Nov 12 17:38:21.535943 containerd[1442]: time="2024-11-12T17:38:21.535907413Z" level=error msg="encountered an error cleaning up failed sandbox \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:21.537456 containerd[1442]: time="2024-11-12T17:38:21.536037813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26lzj,Uid:307248dd-d398-4f72-8974-33e136137cb7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:21.537816 kubelet[2558]: E1112 17:38:21.537671 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:21.537816 kubelet[2558]: E1112 17:38:21.537735 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-26lzj" Nov 12 17:38:21.537816 kubelet[2558]: E1112 17:38:21.537756 2558 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-26lzj" Nov 12 17:38:21.538533 kubelet[2558]: E1112 17:38:21.537807 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-26lzj_calico-system(307248dd-d398-4f72-8974-33e136137cb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-26lzj_calico-system(307248dd-d398-4f72-8974-33e136137cb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-26lzj" podUID="307248dd-d398-4f72-8974-33e136137cb7" Nov 12 17:38:21.606115 kubelet[2558]: I1112 17:38:21.606069 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:21.607425 containerd[1442]: time="2024-11-12T17:38:21.607383321Z" level=info msg="StopPodSandbox for \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\"" Nov 12 17:38:21.607624 containerd[1442]: time="2024-11-12T17:38:21.607600601Z" level=info msg="Ensure that sandbox a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c in task-service has been cleanup successfully" Nov 12 17:38:21.637761 containerd[1442]: time="2024-11-12T17:38:21.637519093Z" level=error msg="StopPodSandbox for \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\" failed" error="failed to destroy network for sandbox \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:38:21.640227 kubelet[2558]: E1112 17:38:21.640147 2558 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:21.640227 kubelet[2558]: E1112 17:38:21.640198 2558 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c"} Nov 12 17:38:21.640227 kubelet[2558]: E1112 17:38:21.640240 2558 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"307248dd-d398-4f72-8974-33e136137cb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:38:21.640492 kubelet[2558]: E1112 17:38:21.640275 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"307248dd-d398-4f72-8974-33e136137cb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-26lzj" 
podUID="307248dd-d398-4f72-8974-33e136137cb7" Nov 12 17:38:21.708269 systemd[1]: Started sshd@7-10.0.0.11:22-10.0.0.1:39162.service - OpenSSH per-connection server daemon (10.0.0.1:39162). Nov 12 17:38:21.758313 sshd[3708]: Accepted publickey for core from 10.0.0.1 port 39162 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:38:21.762126 sshd[3708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:38:21.768046 systemd-logind[1426]: New session 8 of user core. Nov 12 17:38:21.785284 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 17:38:21.938319 sshd[3708]: pam_unix(sshd:session): session closed for user core Nov 12 17:38:21.942074 systemd[1]: sshd@7-10.0.0.11:22-10.0.0.1:39162.service: Deactivated successfully. Nov 12 17:38:21.944388 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 17:38:21.945159 systemd-logind[1426]: Session 8 logged out. Waiting for processes to exit. Nov 12 17:38:21.946007 systemd-logind[1426]: Removed session 8. Nov 12 17:38:24.401167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3597919297.mount: Deactivated successfully. Nov 12 17:38:24.490945 containerd[1442]: time="2024-11-12T17:38:24.490804460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:24.491741 containerd[1442]: time="2024-11-12T17:38:24.491372660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=135495328" Nov 12 17:38:24.494396 containerd[1442]: time="2024-11-12T17:38:24.494361301Z" level=info msg="ImageCreate event name:\"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:24.517490 containerd[1442]: time="2024-11-12T17:38:24.517448108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:24.518281 containerd[1442]: time="2024-11-12T17:38:24.518146988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"135495190\" in 3.916099589s" Nov 12 17:38:24.518281 containerd[1442]: time="2024-11-12T17:38:24.518182708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\"" Nov 12 17:38:24.526317 containerd[1442]: time="2024-11-12T17:38:24.526215991Z" level=info msg="CreateContainer within sandbox \"a46445e29c2b4af6844aa44fb69f440322c4af9cbbdadd4e71ff8b77ce9c1518\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 17:38:24.538894 containerd[1442]: time="2024-11-12T17:38:24.538775395Z" level=info msg="CreateContainer within sandbox \"a46445e29c2b4af6844aa44fb69f440322c4af9cbbdadd4e71ff8b77ce9c1518\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f5927dcb827ab16e2382b613fb0a44f1423adf2d21fb726d5e955c4fdc5397c6\"" Nov 12 17:38:24.541017 containerd[1442]: time="2024-11-12T17:38:24.539593315Z" level=info msg="StartContainer for 
\"f5927dcb827ab16e2382b613fb0a44f1423adf2d21fb726d5e955c4fdc5397c6\"" Nov 12 17:38:24.601239 systemd[1]: Started cri-containerd-f5927dcb827ab16e2382b613fb0a44f1423adf2d21fb726d5e955c4fdc5397c6.scope - libcontainer container f5927dcb827ab16e2382b613fb0a44f1423adf2d21fb726d5e955c4fdc5397c6. Nov 12 17:38:24.693350 containerd[1442]: time="2024-11-12T17:38:24.693231045Z" level=info msg="StartContainer for \"f5927dcb827ab16e2382b613fb0a44f1423adf2d21fb726d5e955c4fdc5397c6\" returns successfully" Nov 12 17:38:24.894022 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 17:38:24.894143 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 12 17:38:25.659824 kubelet[2558]: E1112 17:38:25.659782 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:25.673214 kubelet[2558]: I1112 17:38:25.673178 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-djgw6" podStartSLOduration=1.900147822 podStartE2EDuration="14.673137266s" podCreationTimestamp="2024-11-12 17:38:11 +0000 UTC" firstStartedPulling="2024-11-12 17:38:11.745494145 +0000 UTC m=+23.352569674" lastFinishedPulling="2024-11-12 17:38:24.518483589 +0000 UTC m=+36.125559118" observedRunningTime="2024-11-12 17:38:25.672598146 +0000 UTC m=+37.279673635" watchObservedRunningTime="2024-11-12 17:38:25.673137266 +0000 UTC m=+37.280212795" Nov 12 17:38:26.321073 kernel: bpftool[3902]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 17:38:26.496709 systemd-networkd[1384]: vxlan.calico: Link UP Nov 12 17:38:26.496718 systemd-networkd[1384]: vxlan.calico: Gained carrier Nov 12 17:38:26.660724 kubelet[2558]: I1112 17:38:26.660679 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:38:26.661536 kubelet[2558]: E1112 17:38:26.661497 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:26.958140 systemd[1]: Started sshd@8-10.0.0.11:22-10.0.0.1:56122.service - OpenSSH per-connection server daemon (10.0.0.1:56122). Nov 12 17:38:27.001154 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 56122 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:38:27.003163 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:38:27.008078 systemd-logind[1426]: New session 9 of user core. Nov 12 17:38:27.016167 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 17:38:27.132213 sshd[3993]: pam_unix(sshd:session): session closed for user core Nov 12 17:38:27.136025 systemd[1]: sshd@8-10.0.0.11:22-10.0.0.1:56122.service: Deactivated successfully. Nov 12 17:38:27.139281 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 17:38:27.142596 systemd-logind[1426]: Session 9 logged out. Waiting for processes to exit. Nov 12 17:38:27.143484 systemd-logind[1426]: Removed session 9. 
Nov 12 17:38:28.472612 systemd-networkd[1384]: vxlan.calico: Gained IPv6LL Nov 12 17:38:31.470188 containerd[1442]: time="2024-11-12T17:38:31.469596786Z" level=info msg="StopPodSandbox for \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\"" Nov 12 17:38:31.742859 containerd[1442]: 2024-11-12 17:38:31.586 [INFO][4027] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:31.742859 containerd[1442]: 2024-11-12 17:38:31.586 [INFO][4027] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" iface="eth0" netns="/var/run/netns/cni-533099e5-aef2-f57b-f707-59e0b8e43380" Nov 12 17:38:31.742859 containerd[1442]: 2024-11-12 17:38:31.587 [INFO][4027] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" iface="eth0" netns="/var/run/netns/cni-533099e5-aef2-f57b-f707-59e0b8e43380" Nov 12 17:38:31.742859 containerd[1442]: 2024-11-12 17:38:31.588 [INFO][4027] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" iface="eth0" netns="/var/run/netns/cni-533099e5-aef2-f57b-f707-59e0b8e43380" Nov 12 17:38:31.742859 containerd[1442]: 2024-11-12 17:38:31.588 [INFO][4027] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:31.742859 containerd[1442]: 2024-11-12 17:38:31.588 [INFO][4027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:31.742859 containerd[1442]: 2024-11-12 17:38:31.728 [INFO][4034] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" HandleID="k8s-pod-network.b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:31.742859 containerd[1442]: 2024-11-12 17:38:31.728 [INFO][4034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:31.742859 containerd[1442]: 2024-11-12 17:38:31.728 [INFO][4034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:31.742859 containerd[1442]: 2024-11-12 17:38:31.737 [WARNING][4034] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" HandleID="k8s-pod-network.b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:31.742859 containerd[1442]: 2024-11-12 17:38:31.737 [INFO][4034] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" HandleID="k8s-pod-network.b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:31.742859 containerd[1442]: 2024-11-12 17:38:31.739 [INFO][4034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:31.742859 containerd[1442]: 2024-11-12 17:38:31.741 [INFO][4027] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:31.743318 containerd[1442]: time="2024-11-12T17:38:31.743038401Z" level=info msg="TearDown network for sandbox \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\" successfully" Nov 12 17:38:31.743318 containerd[1442]: time="2024-11-12T17:38:31.743076081Z" level=info msg="StopPodSandbox for \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\" returns successfully" Nov 12 17:38:31.748008 containerd[1442]: time="2024-11-12T17:38:31.744774322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f97b7c-bdtjn,Uid:fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb,Namespace:calico-apiserver,Attempt:1,}" Nov 12 17:38:31.749197 systemd[1]: run-netns-cni\x2d533099e5\x2daef2\x2df57b\x2df707\x2d59e0b8e43380.mount: Deactivated successfully. Nov 12 17:38:31.862205 systemd-networkd[1384]: calie4a62ec6621: Link UP Nov 12 17:38:31.862403 systemd-networkd[1384]: calie4a62ec6621: Gained carrier Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.793 [INFO][4052] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0 calico-apiserver-7f4f97b7c- calico-apiserver fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb 840 0 2024-11-12 17:38:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f4f97b7c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f4f97b7c-bdtjn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie4a62ec6621 [] []}} ContainerID="25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bdtjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-" Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.793 [INFO][4052] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bdtjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.817 [INFO][4066] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" HandleID="k8s-pod-network.25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.828 [INFO][4066] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" HandleID="k8s-pod-network.25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027c650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f4f97b7c-bdtjn", "timestamp":"2024-11-12 17:38:31.817861457 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.828 [INFO][4066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.828 [INFO][4066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.828 [INFO][4066] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.830 [INFO][4066] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" host="localhost" Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.838 [INFO][4066] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.842 [INFO][4066] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.844 [INFO][4066] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.846 [INFO][4066] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.846 [INFO][4066] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" host="localhost" Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.848 [INFO][4066] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089 Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.851 [INFO][4066] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" host="localhost" Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.857 [INFO][4066] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" host="localhost" Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.857 [INFO][4066] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" host="localhost" Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.857 [INFO][4066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 17:38:31.877604 containerd[1442]: 2024-11-12 17:38:31.857 [INFO][4066] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" HandleID="k8s-pod-network.25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:31.878145 containerd[1442]: 2024-11-12 17:38:31.859 [INFO][4052] cni-plugin/k8s.go 386: Populated endpoint ContainerID="25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bdtjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0", GenerateName:"calico-apiserver-7f4f97b7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f97b7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f4f97b7c-bdtjn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4a62ec6621", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:31.878145 containerd[1442]: 2024-11-12 17:38:31.859 [INFO][4052] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bdtjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:31.878145 containerd[1442]: 2024-11-12 17:38:31.859 [INFO][4052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4a62ec6621 ContainerID="25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bdtjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:31.878145 containerd[1442]: 2024-11-12 17:38:31.863 [INFO][4052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bdtjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:31.878145 containerd[1442]: 2024-11-12 17:38:31.863 [INFO][4052] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" 
Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bdtjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0", GenerateName:"calico-apiserver-7f4f97b7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f97b7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089", Pod:"calico-apiserver-7f4f97b7c-bdtjn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4a62ec6621", MAC:"c2:35:0a:73:54:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:31.878145 containerd[1442]: 2024-11-12 17:38:31.874 [INFO][4052] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bdtjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:31.897223 containerd[1442]: time="2024-11-12T17:38:31.897106353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:38:31.897223 containerd[1442]: time="2024-11-12T17:38:31.897186593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:38:31.897223 containerd[1442]: time="2024-11-12T17:38:31.897203233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:31.897396 containerd[1442]: time="2024-11-12T17:38:31.897274353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:31.918192 systemd[1]: Started cri-containerd-25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089.scope - libcontainer container 25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089. 
Nov 12 17:38:31.929020 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:38:31.945236 containerd[1442]: time="2024-11-12T17:38:31.945195563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f97b7c-bdtjn,Uid:fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089\"" Nov 12 17:38:31.946930 containerd[1442]: time="2024-11-12T17:38:31.946643803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 17:38:32.148566 systemd[1]: Started sshd@9-10.0.0.11:22-10.0.0.1:56124.service - OpenSSH per-connection server daemon (10.0.0.1:56124). Nov 12 17:38:32.192487 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 56124 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:38:32.193944 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:38:32.198185 systemd-logind[1426]: New session 10 of user core. Nov 12 17:38:32.204141 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 17:38:32.315600 sshd[4130]: pam_unix(sshd:session): session closed for user core Nov 12 17:38:32.325620 systemd[1]: sshd@9-10.0.0.11:22-10.0.0.1:56124.service: Deactivated successfully. Nov 12 17:38:32.327544 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 17:38:32.328918 systemd-logind[1426]: Session 10 logged out. Waiting for processes to exit. Nov 12 17:38:32.337318 systemd[1]: Started sshd@10-10.0.0.11:22-10.0.0.1:56136.service - OpenSSH per-connection server daemon (10.0.0.1:56136). Nov 12 17:38:32.338598 systemd-logind[1426]: Removed session 10. Nov 12 17:38:32.370303 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 56136 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:38:32.371549 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:38:32.375679 systemd-logind[1426]: New session 11 of user core. Nov 12 17:38:32.382126 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 17:38:32.470659 containerd[1442]: time="2024-11-12T17:38:32.469596304Z" level=info msg="StopPodSandbox for \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\"" Nov 12 17:38:32.544902 sshd[4149]: pam_unix(sshd:session): session closed for user core Nov 12 17:38:32.554565 systemd[1]: sshd@10-10.0.0.11:22-10.0.0.1:56136.service: Deactivated successfully. Nov 12 17:38:32.556975 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 17:38:32.559659 systemd-logind[1426]: Session 11 logged out. Waiting for processes to exit. Nov 12 17:38:32.567741 systemd[1]: Started sshd@11-10.0.0.11:22-10.0.0.1:32982.service - OpenSSH per-connection server daemon (10.0.0.1:32982). Nov 12 17:38:32.570665 systemd-logind[1426]: Removed session 11. Nov 12 17:38:32.590799 containerd[1442]: 2024-11-12 17:38:32.522 [INFO][4174] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:32.590799 containerd[1442]: 2024-11-12 17:38:32.522 [INFO][4174] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" iface="eth0" netns="/var/run/netns/cni-f2c2592b-58a9-7675-2809-afcaf67dc8f4" Nov 12 17:38:32.590799 containerd[1442]: 2024-11-12 17:38:32.522 [INFO][4174] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" iface="eth0" netns="/var/run/netns/cni-f2c2592b-58a9-7675-2809-afcaf67dc8f4" Nov 12 17:38:32.590799 containerd[1442]: 2024-11-12 17:38:32.523 [INFO][4174] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" iface="eth0" netns="/var/run/netns/cni-f2c2592b-58a9-7675-2809-afcaf67dc8f4" Nov 12 17:38:32.590799 containerd[1442]: 2024-11-12 17:38:32.523 [INFO][4174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:32.590799 containerd[1442]: 2024-11-12 17:38:32.523 [INFO][4174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:32.590799 containerd[1442]: 2024-11-12 17:38:32.566 [INFO][4182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" HandleID="k8s-pod-network.a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Workload="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:32.590799 containerd[1442]: 2024-11-12 17:38:32.566 [INFO][4182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:32.590799 containerd[1442]: 2024-11-12 17:38:32.566 [INFO][4182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:32.590799 containerd[1442]: 2024-11-12 17:38:32.584 [WARNING][4182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" HandleID="k8s-pod-network.a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Workload="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:32.590799 containerd[1442]: 2024-11-12 17:38:32.584 [INFO][4182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" HandleID="k8s-pod-network.a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Workload="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:32.590799 containerd[1442]: 2024-11-12 17:38:32.585 [INFO][4182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:32.590799 containerd[1442]: 2024-11-12 17:38:32.589 [INFO][4174] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:32.591296 containerd[1442]: time="2024-11-12T17:38:32.590940407Z" level=info msg="TearDown network for sandbox \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\" successfully" Nov 12 17:38:32.591296 containerd[1442]: time="2024-11-12T17:38:32.590967287Z" level=info msg="StopPodSandbox for \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\" returns successfully" Nov 12 17:38:32.591677 containerd[1442]: time="2024-11-12T17:38:32.591646127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26lzj,Uid:307248dd-d398-4f72-8974-33e136137cb7,Namespace:calico-system,Attempt:1,}" Nov 12 17:38:32.608564 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 32982 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:38:32.610086 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:38:32.614121 systemd-logind[1426]: New session 12 of user core. Nov 12 17:38:32.623247 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 17:38:32.710652 systemd-networkd[1384]: calif0064864d5d: Link UP Nov 12 17:38:32.710799 systemd-networkd[1384]: calif0064864d5d: Gained carrier Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.635 [INFO][4195] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--26lzj-eth0 csi-node-driver- calico-system 307248dd-d398-4f72-8974-33e136137cb7 849 0 2024-11-12 17:38:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-26lzj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif0064864d5d [] []}} ContainerID="047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" Namespace="calico-system" Pod="csi-node-driver-26lzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--26lzj-" Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.635 [INFO][4195] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" Namespace="calico-system" Pod="csi-node-driver-26lzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.663 [INFO][4209] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" HandleID="k8s-pod-network.047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" Workload="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.677 [INFO][4209] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" HandleID="k8s-pod-network.047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" Workload="localhost-k8s-csi--node--driver--26lzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c2130), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-26lzj", "timestamp":"2024-11-12 17:38:32.663817341 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.677 [INFO][4209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.678 [INFO][4209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.678 [INFO][4209] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.679 [INFO][4209] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" host="localhost" Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.683 [INFO][4209] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.688 [INFO][4209] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.690 [INFO][4209] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.693 [INFO][4209] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.694 [INFO][4209] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" host="localhost" Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.695 [INFO][4209] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.699 [INFO][4209] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" host="localhost" Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.705 [INFO][4209] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" host="localhost" Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.705 [INFO][4209] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" host="localhost" Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.705 [INFO][4209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 17:38:32.726601 containerd[1442]: 2024-11-12 17:38:32.705 [INFO][4209] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" HandleID="k8s-pod-network.047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" Workload="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:32.727118 containerd[1442]: 2024-11-12 17:38:32.707 [INFO][4195] cni-plugin/k8s.go 386: Populated endpoint ContainerID="047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" Namespace="calico-system" Pod="csi-node-driver-26lzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--26lzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--26lzj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"307248dd-d398-4f72-8974-33e136137cb7", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-26lzj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0064864d5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:32.727118 containerd[1442]: 2024-11-12 17:38:32.708 [INFO][4195] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" Namespace="calico-system" Pod="csi-node-driver-26lzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:32.727118 containerd[1442]: 2024-11-12 17:38:32.708 [INFO][4195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0064864d5d ContainerID="047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" Namespace="calico-system" Pod="csi-node-driver-26lzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:32.727118 containerd[1442]: 2024-11-12 17:38:32.709 [INFO][4195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" Namespace="calico-system" Pod="csi-node-driver-26lzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:32.727118 containerd[1442]: 2024-11-12 17:38:32.713 [INFO][4195] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" Namespace="calico-system" Pod="csi-node-driver-26lzj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--26lzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--26lzj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"307248dd-d398-4f72-8974-33e136137cb7", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f", Pod:"csi-node-driver-26lzj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0064864d5d", MAC:"5e:e0:b9:a5:9e:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:32.727118 containerd[1442]: 2024-11-12 17:38:32.721 [INFO][4195] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f" Namespace="calico-system" Pod="csi-node-driver-26lzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:32.747469 systemd[1]: run-netns-cni\x2df2c2592b\x2d58a9\x2d7675\x2d2809\x2dafcaf67dc8f4.mount: Deactivated successfully. Nov 12 17:38:32.767020 sshd[4190]: pam_unix(sshd:session): session closed for user core Nov 12 17:38:32.770356 systemd[1]: sshd@11-10.0.0.11:22-10.0.0.1:32982.service: Deactivated successfully. Nov 12 17:38:32.774311 containerd[1442]: time="2024-11-12T17:38:32.772505562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:38:32.774311 containerd[1442]: time="2024-11-12T17:38:32.772592442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:38:32.774311 containerd[1442]: time="2024-11-12T17:38:32.772611922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:32.774311 containerd[1442]: time="2024-11-12T17:38:32.772735282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:32.773915 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 17:38:32.776019 systemd-logind[1426]: Session 12 logged out. Waiting for processes to exit. Nov 12 17:38:32.782675 systemd-logind[1426]: Removed session 12. Nov 12 17:38:32.814187 systemd[1]: Started cri-containerd-047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f.scope - libcontainer container 047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f. 
Nov 12 17:38:32.828588 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:38:32.837862 containerd[1442]: time="2024-11-12T17:38:32.837826415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26lzj,Uid:307248dd-d398-4f72-8974-33e136137cb7,Namespace:calico-system,Attempt:1,} returns sandbox id \"047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f\"" Nov 12 17:38:32.952389 systemd-networkd[1384]: calie4a62ec6621: Gained IPv6LL Nov 12 17:38:33.522910 containerd[1442]: time="2024-11-12T17:38:33.522861940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:33.523500 containerd[1442]: time="2024-11-12T17:38:33.523467620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=39277239" Nov 12 17:38:33.524137 containerd[1442]: time="2024-11-12T17:38:33.524099220Z" level=info msg="ImageCreate event name:\"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:33.526497 containerd[1442]: time="2024-11-12T17:38:33.526463940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:33.527242 containerd[1442]: time="2024-11-12T17:38:33.527145700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"40646891\" in 1.580460257s" Nov 12 17:38:33.527242 containerd[1442]: time="2024-11-12T17:38:33.527179580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\"" Nov 12 17:38:33.529072 containerd[1442]: time="2024-11-12T17:38:33.528417221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 17:38:33.546790 containerd[1442]: time="2024-11-12T17:38:33.546759304Z" level=info msg="CreateContainer within sandbox \"25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 17:38:33.556506 containerd[1442]: time="2024-11-12T17:38:33.556448786Z" level=info msg="CreateContainer within sandbox \"25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"af964da78982f42bd6dfb5ba6663d4d297d59f3108e70fa0a269feb711a4b240\"" Nov 12 17:38:33.557050 containerd[1442]: time="2024-11-12T17:38:33.557021346Z" level=info msg="StartContainer for \"af964da78982f42bd6dfb5ba6663d4d297d59f3108e70fa0a269feb711a4b240\"" Nov 12 17:38:33.581140 systemd[1]: Started cri-containerd-af964da78982f42bd6dfb5ba6663d4d297d59f3108e70fa0a269feb711a4b240.scope - libcontainer container af964da78982f42bd6dfb5ba6663d4d297d59f3108e70fa0a269feb711a4b240. 
Nov 12 17:38:33.613411 containerd[1442]: time="2024-11-12T17:38:33.613362196Z" level=info msg="StartContainer for \"af964da78982f42bd6dfb5ba6663d4d297d59f3108e70fa0a269feb711a4b240\" returns successfully" Nov 12 17:38:33.690847 kubelet[2558]: I1112 17:38:33.689777 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f4f97b7c-bdtjn" podStartSLOduration=21.108151912 podStartE2EDuration="22.689738929s" podCreationTimestamp="2024-11-12 17:38:11 +0000 UTC" firstStartedPulling="2024-11-12 17:38:31.946340563 +0000 UTC m=+43.553416092" lastFinishedPulling="2024-11-12 17:38:33.52792758 +0000 UTC m=+45.135003109" observedRunningTime="2024-11-12 17:38:33.689520889 +0000 UTC m=+45.296596418" watchObservedRunningTime="2024-11-12 17:38:33.689738929 +0000 UTC m=+45.296814418" Nov 12 17:38:34.472997 containerd[1442]: time="2024-11-12T17:38:34.472608665Z" level=info msg="StopPodSandbox for \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\"" Nov 12 17:38:34.473147 containerd[1442]: time="2024-11-12T17:38:34.473007385Z" level=info msg="StopPodSandbox for \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\"" Nov 12 17:38:34.506499 containerd[1442]: time="2024-11-12T17:38:34.506439470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7464731" Nov 12 17:38:34.506499 containerd[1442]: time="2024-11-12T17:38:34.506494351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:34.507429 containerd[1442]: time="2024-11-12T17:38:34.507376111Z" level=info msg="ImageCreate event name:\"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:34.510480 containerd[1442]: time="2024-11-12T17:38:34.510445191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:34.511042 containerd[1442]: time="2024-11-12T17:38:34.510764271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"8834367\" in 982.30101ms" Nov 12 17:38:34.511042 containerd[1442]: time="2024-11-12T17:38:34.510792151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\"" Nov 12 17:38:34.514006 containerd[1442]: time="2024-11-12T17:38:34.513739632Z" level=info msg="CreateContainer within sandbox \"047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 17:38:34.541260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1587851294.mount: Deactivated successfully. 
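The pod_startup_latency_tracker record above encodes a small calculation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). Re-deriving both values from the timestamps in that record (a sketch of the arithmetic only, not kubelet's code):

```go
// Re-derives the two durations printed by kubelet's
// pod_startup_latency_tracker above from the same record's timestamps.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Go's time.Parse accepts an optional fractional-second field
	// after the seconds even though the layout omits it.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-11-12 17:38:11 +0000 UTC")
	firstPull := mustParse("2024-11-12 17:38:31.946340563 +0000 UTC")
	lastPull := mustParse("2024-11-12 17:38:33.52792758 +0000 UTC")
	observed := mustParse("2024-11-12 17:38:33.689738929 +0000 UTC")

	e2e := observed.Sub(created)         // creation to observed running
	slo := e2e - lastPull.Sub(firstPull) // minus the image-pull window
	fmt.Println(e2e, slo)                // 22.689738929s 21.108151912s
}
```

Running this reproduces 22.689738929s and 21.108151912s exactly, which is why the SLO figure trails the end-to-end figure by the 1.581587017s spent pulling the apiserver image.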
Nov 12 17:38:34.564800 containerd[1442]: time="2024-11-12T17:38:34.564743880Z" level=info msg="CreateContainer within sandbox \"047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5d29eb6aafdca1ce405d080b2d6873c46d9fc68c3994fee371bb791dfa13e5a7\"" Nov 12 17:38:34.565762 containerd[1442]: time="2024-11-12T17:38:34.565505680Z" level=info msg="StartContainer for \"5d29eb6aafdca1ce405d080b2d6873c46d9fc68c3994fee371bb791dfa13e5a7\"" Nov 12 17:38:34.630690 containerd[1442]: 2024-11-12 17:38:34.563 [INFO][4373] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:34.630690 containerd[1442]: 2024-11-12 17:38:34.563 [INFO][4373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" iface="eth0" netns="/var/run/netns/cni-3e293b7a-d075-27e2-97b1-4e0d7614a09a" Nov 12 17:38:34.630690 containerd[1442]: 2024-11-12 17:38:34.563 [INFO][4373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" iface="eth0" netns="/var/run/netns/cni-3e293b7a-d075-27e2-97b1-4e0d7614a09a" Nov 12 17:38:34.630690 containerd[1442]: 2024-11-12 17:38:34.567 [INFO][4373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" iface="eth0" netns="/var/run/netns/cni-3e293b7a-d075-27e2-97b1-4e0d7614a09a" Nov 12 17:38:34.630690 containerd[1442]: 2024-11-12 17:38:34.567 [INFO][4373] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:34.630690 containerd[1442]: 2024-11-12 17:38:34.567 [INFO][4373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:34.630690 containerd[1442]: 2024-11-12 17:38:34.610 [INFO][4387] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" HandleID="k8s-pod-network.30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Workload="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:34.630690 containerd[1442]: 2024-11-12 17:38:34.610 [INFO][4387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:34.630690 containerd[1442]: 2024-11-12 17:38:34.610 [INFO][4387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:34.630690 containerd[1442]: 2024-11-12 17:38:34.621 [WARNING][4387] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" HandleID="k8s-pod-network.30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Workload="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:34.630690 containerd[1442]: 2024-11-12 17:38:34.621 [INFO][4387] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" HandleID="k8s-pod-network.30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Workload="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:34.630690 containerd[1442]: 2024-11-12 17:38:34.622 [INFO][4387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:34.630690 containerd[1442]: 2024-11-12 17:38:34.624 [INFO][4373] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:34.631836 containerd[1442]: time="2024-11-12T17:38:34.631803732Z" level=info msg="TearDown network for sandbox \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\" successfully" Nov 12 17:38:34.634031 containerd[1442]: time="2024-11-12T17:38:34.634006852Z" level=info msg="StopPodSandbox for \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\" returns successfully" Nov 12 17:38:34.634185 systemd[1]: run-netns-cni\x2d3e293b7a\x2dd075\x2d27e2\x2d97b1\x2d4e0d7614a09a.mount: Deactivated successfully. Nov 12 17:38:34.634537 kubelet[2558]: E1112 17:38:34.634513 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:34.635069 containerd[1442]: time="2024-11-12T17:38:34.635043492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t796q,Uid:8e960c16-fd3a-4a1e-b33c-95978141e8c2,Namespace:kube-system,Attempt:1,}" Nov 12 17:38:34.638312 containerd[1442]: 2024-11-12 17:38:34.573 [INFO][4372] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:34.638312 containerd[1442]: 2024-11-12 17:38:34.574 [INFO][4372] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" iface="eth0" netns="/var/run/netns/cni-8bd67058-fc4e-d81b-7f7a-17598cf94578" Nov 12 17:38:34.638312 containerd[1442]: 2024-11-12 17:38:34.574 [INFO][4372] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" iface="eth0" netns="/var/run/netns/cni-8bd67058-fc4e-d81b-7f7a-17598cf94578" Nov 12 17:38:34.638312 containerd[1442]: 2024-11-12 17:38:34.574 [INFO][4372] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" iface="eth0" netns="/var/run/netns/cni-8bd67058-fc4e-d81b-7f7a-17598cf94578" Nov 12 17:38:34.638312 containerd[1442]: 2024-11-12 17:38:34.574 [INFO][4372] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:34.638312 containerd[1442]: 2024-11-12 17:38:34.574 [INFO][4372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:34.638312 containerd[1442]: 2024-11-12 17:38:34.619 [INFO][4393] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" HandleID="k8s-pod-network.800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Workload="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:34.638312 containerd[1442]: 2024-11-12 17:38:34.619 [INFO][4393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:34.638312 containerd[1442]: 2024-11-12 17:38:34.622 [INFO][4393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:34.638312 containerd[1442]: 2024-11-12 17:38:34.632 [WARNING][4393] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" HandleID="k8s-pod-network.800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Workload="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:34.638312 containerd[1442]: 2024-11-12 17:38:34.632 [INFO][4393] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" HandleID="k8s-pod-network.800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Workload="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:34.638312 containerd[1442]: 2024-11-12 17:38:34.633 [INFO][4393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:34.638312 containerd[1442]: 2024-11-12 17:38:34.635 [INFO][4372] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:34.638700 containerd[1442]: time="2024-11-12T17:38:34.638675693Z" level=info msg="TearDown network for sandbox \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\" successfully" Nov 12 17:38:34.638746 containerd[1442]: time="2024-11-12T17:38:34.638700813Z" level=info msg="StopPodSandbox for \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\" returns successfully" Nov 12 17:38:34.639611 containerd[1442]: time="2024-11-12T17:38:34.639221573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc9b56c48-rq9pz,Uid:54c761a6-cb36-44cf-b192-819c3beafff3,Namespace:calico-system,Attempt:1,}" Nov 12 17:38:34.652149 systemd[1]: Started cri-containerd-5d29eb6aafdca1ce405d080b2d6873c46d9fc68c3994fee371bb791dfa13e5a7.scope - libcontainer container 5d29eb6aafdca1ce405d080b2d6873c46d9fc68c3994fee371bb791dfa13e5a7. 
Nov 12 17:38:34.680243 systemd-networkd[1384]: calif0064864d5d: Gained IPv6LL Nov 12 17:38:34.686153 kubelet[2558]: I1112 17:38:34.686123 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:38:34.741093 containerd[1442]: time="2024-11-12T17:38:34.740639750Z" level=info msg="StartContainer for \"5d29eb6aafdca1ce405d080b2d6873c46d9fc68c3994fee371bb791dfa13e5a7\" returns successfully" Nov 12 17:38:34.745407 containerd[1442]: time="2024-11-12T17:38:34.745372191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 17:38:34.748696 systemd[1]: run-netns-cni\x2d8bd67058\x2dfc4e\x2dd81b\x2d7f7a\x2d17598cf94578.mount: Deactivated successfully. Nov 12 17:38:34.780498 systemd-networkd[1384]: calie322e528534: Link UP Nov 12 17:38:34.780686 systemd-networkd[1384]: calie322e528534: Gained carrier Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.681 [INFO][4422] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--t796q-eth0 coredns-76f75df574- kube-system 8e960c16-fd3a-4a1e-b33c-95978141e8c2 897 0 2024-11-12 17:38:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-t796q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie322e528534 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" Namespace="kube-system" Pod="coredns-76f75df574-t796q" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t796q-" Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.682 [INFO][4422] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" Namespace="kube-system" Pod="coredns-76f75df574-t796q" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.714 [INFO][4454] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" HandleID="k8s-pod-network.a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" Workload="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.734 [INFO][4454] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" HandleID="k8s-pod-network.a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" Workload="localhost-k8s-coredns--76f75df574--t796q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ce10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-t796q", "timestamp":"2024-11-12 17:38:34.714007305 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.734 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.734 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.734 [INFO][4454] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.736 [INFO][4454] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" host="localhost" Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.748 [INFO][4454] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.758 [INFO][4454] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.760 [INFO][4454] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.762 [INFO][4454] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.762 [INFO][4454] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" host="localhost" Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.763 [INFO][4454] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.768 [INFO][4454] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" host="localhost" Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.773 [INFO][4454] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" host="localhost" Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.773 [INFO][4454] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" host="localhost" Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.773 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 17:38:34.790806 containerd[1442]: 2024-11-12 17:38:34.773 [INFO][4454] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" HandleID="k8s-pod-network.a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" Workload="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:34.791844 containerd[1442]: 2024-11-12 17:38:34.775 [INFO][4422] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" Namespace="kube-system" Pod="coredns-76f75df574-t796q" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t796q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--t796q-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8e960c16-fd3a-4a1e-b33c-95978141e8c2", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-t796q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie322e528534", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:34.791844 containerd[1442]: 2024-11-12 17:38:34.777 [INFO][4422] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" Namespace="kube-system" Pod="coredns-76f75df574-t796q" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:34.791844 containerd[1442]: 2024-11-12 17:38:34.777 [INFO][4422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie322e528534 ContainerID="a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" Namespace="kube-system" Pod="coredns-76f75df574-t796q" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:34.791844 containerd[1442]: 2024-11-12 17:38:34.779 [INFO][4422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" Namespace="kube-system" Pod="coredns-76f75df574-t796q" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:34.791844 containerd[1442]: 2024-11-12 17:38:34.779 
[INFO][4422] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" Namespace="kube-system" Pod="coredns-76f75df574-t796q" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t796q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--t796q-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8e960c16-fd3a-4a1e-b33c-95978141e8c2", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f", Pod:"coredns-76f75df574-t796q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie322e528534", MAC:"26:8e:fb:4b:81:83", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:34.791844 containerd[1442]: 2024-11-12 17:38:34.788 [INFO][4422] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f" Namespace="kube-system" Pod="coredns-76f75df574-t796q" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:34.813412 systemd-networkd[1384]: cali240bbeb3066: Link UP Nov 12 17:38:34.814608 systemd-networkd[1384]: cali240bbeb3066: Gained carrier Nov 12 17:38:34.816765 containerd[1442]: time="2024-11-12T17:38:34.816662723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:38:34.816765 containerd[1442]: time="2024-11-12T17:38:34.816712843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:38:34.817035 containerd[1442]: time="2024-11-12T17:38:34.816767603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:34.817035 containerd[1442]: time="2024-11-12T17:38:34.816858843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.699 [INFO][4444] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0 calico-kube-controllers-bc9b56c48- calico-system 54c761a6-cb36-44cf-b192-819c3beafff3 899 0 2024-11-12 17:38:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bc9b56c48 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-bc9b56c48-rq9pz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali240bbeb3066 [] []}} ContainerID="191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" Namespace="calico-system" Pod="calico-kube-controllers-bc9b56c48-rq9pz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-" Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.699 [INFO][4444] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" Namespace="calico-system" Pod="calico-kube-controllers-bc9b56c48-rq9pz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.743 [INFO][4461] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" HandleID="k8s-pod-network.191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" Workload="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.760 [INFO][4461] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" HandleID="k8s-pod-network.191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" Workload="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d970), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-bc9b56c48-rq9pz", "timestamp":"2024-11-12 17:38:34.74317067 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.760 [INFO][4461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.773 [INFO][4461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.773 [INFO][4461] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.775 [INFO][4461] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" host="localhost" Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.785 [INFO][4461] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.791 [INFO][4461] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.793 [INFO][4461] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.796 [INFO][4461] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.796 [INFO][4461] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" host="localhost" Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.798 [INFO][4461] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884 Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.801 [INFO][4461] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" host="localhost" Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.806 [INFO][4461] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" host="localhost" Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.806 [INFO][4461] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" host="localhost" Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.806 [INFO][4461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 17:38:34.835038 containerd[1442]: 2024-11-12 17:38:34.806 [INFO][4461] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" HandleID="k8s-pod-network.191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" Workload="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:34.835812 containerd[1442]: 2024-11-12 17:38:34.809 [INFO][4444] cni-plugin/k8s.go 386: Populated endpoint ContainerID="191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" Namespace="calico-system" Pod="calico-kube-controllers-bc9b56c48-rq9pz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0", GenerateName:"calico-kube-controllers-bc9b56c48-", Namespace:"calico-system", SelfLink:"", UID:"54c761a6-cb36-44cf-b192-819c3beafff3", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc9b56c48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-bc9b56c48-rq9pz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali240bbeb3066", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:34.835812 containerd[1442]: 2024-11-12 17:38:34.809 [INFO][4444] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" Namespace="calico-system" Pod="calico-kube-controllers-bc9b56c48-rq9pz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:34.835812 containerd[1442]: 2024-11-12 17:38:34.809 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali240bbeb3066 ContainerID="191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" Namespace="calico-system" Pod="calico-kube-controllers-bc9b56c48-rq9pz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:34.835812 containerd[1442]: 2024-11-12 17:38:34.815 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" Namespace="calico-system" Pod="calico-kube-controllers-bc9b56c48-rq9pz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:34.835812 containerd[1442]: 2024-11-12 17:38:34.815 [INFO][4444] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" Namespace="calico-system" Pod="calico-kube-controllers-bc9b56c48-rq9pz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0", GenerateName:"calico-kube-controllers-bc9b56c48-", Namespace:"calico-system", SelfLink:"", UID:"54c761a6-cb36-44cf-b192-819c3beafff3", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc9b56c48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884", Pod:"calico-kube-controllers-bc9b56c48-rq9pz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali240bbeb3066", MAC:"a6:fb:6e:ae:0c:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:34.835812 containerd[1442]: 2024-11-12 17:38:34.831 [INFO][4444] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884" Namespace="calico-system" Pod="calico-kube-controllers-bc9b56c48-rq9pz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:34.842147 systemd[1]: Started cri-containerd-a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f.scope - libcontainer container a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f. Nov 12 17:38:34.855548 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:38:34.868049 containerd[1442]: time="2024-11-12T17:38:34.867579171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:38:34.868049 containerd[1442]: time="2024-11-12T17:38:34.867630971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:38:34.874001 containerd[1442]: time="2024-11-12T17:38:34.867646091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:34.874130 containerd[1442]: time="2024-11-12T17:38:34.873507132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:34.880956 containerd[1442]: time="2024-11-12T17:38:34.880921494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t796q,Uid:8e960c16-fd3a-4a1e-b33c-95978141e8c2,Namespace:kube-system,Attempt:1,} returns sandbox id \"a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f\"" Nov 12 17:38:34.881867 kubelet[2558]: E1112 17:38:34.881846 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:34.885671 containerd[1442]: time="2024-11-12T17:38:34.885629814Z" level=info msg="CreateContainer within sandbox \"a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 17:38:34.899362 systemd[1]: Started cri-containerd-191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884.scope - libcontainer container 191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884. Nov 12 17:38:34.902149 containerd[1442]: time="2024-11-12T17:38:34.901714817Z" level=info msg="CreateContainer within sandbox \"a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa9e08c47300903653e0ced10a6c8bd5a006dd11139cbfba020b5f25f0165951\"" Nov 12 17:38:34.902149 containerd[1442]: time="2024-11-12T17:38:34.902134777Z" level=info msg="StartContainer for \"fa9e08c47300903653e0ced10a6c8bd5a006dd11139cbfba020b5f25f0165951\"" Nov 12 17:38:34.918132 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:38:34.940124 systemd[1]: Started cri-containerd-fa9e08c47300903653e0ced10a6c8bd5a006dd11139cbfba020b5f25f0165951.scope - libcontainer container fa9e08c47300903653e0ced10a6c8bd5a006dd11139cbfba020b5f25f0165951. Nov 12 17:38:34.940925 containerd[1442]: time="2024-11-12T17:38:34.940771504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc9b56c48-rq9pz,Uid:54c761a6-cb36-44cf-b192-819c3beafff3,Namespace:calico-system,Attempt:1,} returns sandbox id \"191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884\"" Nov 12 17:38:34.968837 containerd[1442]: time="2024-11-12T17:38:34.968743268Z" level=info msg="StartContainer for \"fa9e08c47300903653e0ced10a6c8bd5a006dd11139cbfba020b5f25f0165951\" returns successfully" Nov 12 17:38:35.469091 containerd[1442]: time="2024-11-12T17:38:35.468811388Z" level=info msg="StopPodSandbox for \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\"" Nov 12 17:38:35.469460 containerd[1442]: time="2024-11-12T17:38:35.469365828Z" level=info msg="StopPodSandbox for \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\"" Nov 12 17:38:35.560786 containerd[1442]: 2024-11-12 17:38:35.518 [INFO][4653] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:35.560786 containerd[1442]: 2024-11-12 17:38:35.518 [INFO][4653] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" iface="eth0" netns="/var/run/netns/cni-a58afa63-d350-5167-3494-fd8430c1ebd3" Nov 12 17:38:35.560786 containerd[1442]: 2024-11-12 17:38:35.519 [INFO][4653] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" iface="eth0" netns="/var/run/netns/cni-a58afa63-d350-5167-3494-fd8430c1ebd3" Nov 12 17:38:35.560786 containerd[1442]: 2024-11-12 17:38:35.519 [INFO][4653] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" iface="eth0" netns="/var/run/netns/cni-a58afa63-d350-5167-3494-fd8430c1ebd3" Nov 12 17:38:35.560786 containerd[1442]: 2024-11-12 17:38:35.519 [INFO][4653] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:35.560786 containerd[1442]: 2024-11-12 17:38:35.519 [INFO][4653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:35.560786 containerd[1442]: 2024-11-12 17:38:35.545 [INFO][4684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" HandleID="k8s-pod-network.fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Workload="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:35.560786 containerd[1442]: 2024-11-12 17:38:35.545 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:35.560786 containerd[1442]: 2024-11-12 17:38:35.545 [INFO][4684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:35.560786 containerd[1442]: 2024-11-12 17:38:35.555 [WARNING][4684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" HandleID="k8s-pod-network.fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Workload="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:35.560786 containerd[1442]: 2024-11-12 17:38:35.555 [INFO][4684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" HandleID="k8s-pod-network.fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Workload="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:35.560786 containerd[1442]: 2024-11-12 17:38:35.557 [INFO][4684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:35.560786 containerd[1442]: 2024-11-12 17:38:35.558 [INFO][4653] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:35.561713 containerd[1442]: time="2024-11-12T17:38:35.561583802Z" level=info msg="TearDown network for sandbox \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\" successfully" Nov 12 17:38:35.561713 containerd[1442]: time="2024-11-12T17:38:35.561619162Z" level=info msg="StopPodSandbox for \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\" returns successfully" Nov 12 17:38:35.562206 kubelet[2558]: E1112 17:38:35.562150 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:35.562817 containerd[1442]: time="2024-11-12T17:38:35.562782602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6lmj5,Uid:2ba7e0bf-58bb-4e6e-91ed-49866ce4c112,Namespace:kube-system,Attempt:1,}" Nov 12 17:38:35.572742 containerd[1442]: 2024-11-12 17:38:35.531 [INFO][4674] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:35.572742 containerd[1442]: 2024-11-12 17:38:35.531 [INFO][4674] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" iface="eth0" netns="/var/run/netns/cni-4c3a4d38-a401-09b3-87a9-c5541e3cd66f" Nov 12 17:38:35.572742 containerd[1442]: 2024-11-12 17:38:35.531 [INFO][4674] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" iface="eth0" netns="/var/run/netns/cni-4c3a4d38-a401-09b3-87a9-c5541e3cd66f" Nov 12 17:38:35.572742 containerd[1442]: 2024-11-12 17:38:35.531 [INFO][4674] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" iface="eth0" netns="/var/run/netns/cni-4c3a4d38-a401-09b3-87a9-c5541e3cd66f" Nov 12 17:38:35.572742 containerd[1442]: 2024-11-12 17:38:35.531 [INFO][4674] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:35.572742 containerd[1442]: 2024-11-12 17:38:35.531 [INFO][4674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:35.572742 containerd[1442]: 2024-11-12 17:38:35.557 [INFO][4690] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" HandleID="k8s-pod-network.0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:35.572742 containerd[1442]: 2024-11-12 17:38:35.557 [INFO][4690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:35.572742 containerd[1442]: 2024-11-12 17:38:35.558 [INFO][4690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:35.572742 containerd[1442]: 2024-11-12 17:38:35.566 [WARNING][4690] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" HandleID="k8s-pod-network.0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:35.572742 containerd[1442]: 2024-11-12 17:38:35.566 [INFO][4690] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" HandleID="k8s-pod-network.0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:35.572742 containerd[1442]: 2024-11-12 17:38:35.568 [INFO][4690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:35.572742 containerd[1442]: 2024-11-12 17:38:35.570 [INFO][4674] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:35.573709 containerd[1442]: time="2024-11-12T17:38:35.573194564Z" level=info msg="TearDown network for sandbox \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\" successfully" Nov 12 17:38:35.573709 containerd[1442]: time="2024-11-12T17:38:35.573232764Z" level=info msg="StopPodSandbox for \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\" returns successfully" Nov 12 17:38:35.582055 containerd[1442]: time="2024-11-12T17:38:35.581719525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f97b7c-bg95b,Uid:19027f50-17e2-49b5-8e0e-161e0d0f74db,Namespace:calico-apiserver,Attempt:1,}" Nov 12 17:38:35.691865 kubelet[2558]: E1112 17:38:35.691837 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:35.705772 kubelet[2558]: I1112 17:38:35.705734 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-t796q" podStartSLOduration=31.705696065 podStartE2EDuration="31.705696065s" podCreationTimestamp="2024-11-12 17:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:38:35.704746985 +0000 UTC m=+47.311822474" watchObservedRunningTime="2024-11-12 17:38:35.705696065 +0000 UTC m=+47.312771554" Nov 12 17:38:35.729121 systemd-networkd[1384]: cali75b185f7688: Link UP Nov 12 17:38:35.732607 systemd-networkd[1384]: cali75b185f7688: Gained carrier Nov 12 17:38:35.751738 systemd[1]: run-netns-cni\x2da58afa63\x2dd350\x2d5167\x2d3494\x2dfd8430c1ebd3.mount: Deactivated successfully. Nov 12 17:38:35.752092 systemd[1]: run-netns-cni\x2d4c3a4d38\x2da401\x2d09b3\x2d87a9\x2dc5541e3cd66f.mount: Deactivated successfully. 
Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.626 [INFO][4712] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0 calico-apiserver-7f4f97b7c- calico-apiserver 19027f50-17e2-49b5-8e0e-161e0d0f74db 918 0 2024-11-12 17:38:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f4f97b7c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f4f97b7c-bg95b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali75b185f7688 [] []}} ContainerID="972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bg95b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-" Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.626 [INFO][4712] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bg95b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.653 [INFO][4731] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" HandleID="k8s-pod-network.972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.671 [INFO][4731] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" HandleID="k8s-pod-network.972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f4f97b7c-bg95b", "timestamp":"2024-11-12 17:38:35.653468097 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.671 [INFO][4731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.671 [INFO][4731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.671 [INFO][4731] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.673 [INFO][4731] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" host="localhost" Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.678 [INFO][4731] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.685 [INFO][4731] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.688 [INFO][4731] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.691 [INFO][4731] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.691 [INFO][4731] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" host="localhost" Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.694 [INFO][4731] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70 Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.701 [INFO][4731] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" host="localhost" Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.714 [INFO][4731] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" host="localhost" Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.714 [INFO][4731] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" host="localhost" Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.714 [INFO][4731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
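The ADD side mirrors that DEL: the request carries AutoAssignArgs asking for exactly one IPv4 (Num4:1, Num6:0) under a HandleID of the form "k8s-pod-network." plus the container ID, which is exactly the key a later DEL uses to find what this ADD claimed. The walk that follows (look up host affinities, try the affinity for 192.168.88.128/26, load the block, claim one address) is Calico's block-affinity IPAM: the node owns a /26 and satisfies requests locally without touching other nodes' blocks. A hedged sketch of the request shape (autoAssignArgs and handleFor are simplified illustrations; the real types live in Calico's ipam package):

```go
package main

import (
	"fmt"
	"net/netip"
)

// autoAssignArgs loosely mirrors the AutoAssignArgs printed in the log:
// one IPv4, no IPv6, keyed by a handle derived from the container ID.
// Assumption: the field set is trimmed to what the log shows.
type autoAssignArgs struct {
	Num4, Num6 int
	HandleID   string
	Attrs      map[string]string
}

// handleFor follows the convention visible in the log:
// "k8s-pod-network." + container ID.
func handleFor(containerID string) string {
	return "k8s-pod-network." + containerID
}

func main() {
	args := autoAssignArgs{
		Num4:     1,
		HandleID: handleFor("972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70"),
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"node":      "localhost",
			"pod":       "calico-apiserver-7f4f97b7c-bg95b",
		},
	}
	block := netip.MustParsePrefix("192.168.88.128/26") // this node's affine block
	fmt.Printf("assign %d IPv4 from %s under handle %s\n", args.Num4, block, args.HandleID)
}
```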
Nov 12 17:38:35.756178 containerd[1442]: 2024-11-12 17:38:35.714 [INFO][4731] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" HandleID="k8s-pod-network.972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:35.757268 containerd[1442]: 2024-11-12 17:38:35.720 [INFO][4712] cni-plugin/k8s.go 386: Populated endpoint ContainerID="972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bg95b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0", GenerateName:"calico-apiserver-7f4f97b7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"19027f50-17e2-49b5-8e0e-161e0d0f74db", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f97b7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f4f97b7c-bg95b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75b185f7688", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:35.757268 containerd[1442]: 2024-11-12 17:38:35.720 [INFO][4712] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bg95b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:35.757268 containerd[1442]: 2024-11-12 17:38:35.720 [INFO][4712] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75b185f7688 ContainerID="972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bg95b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:35.757268 containerd[1442]: 2024-11-12 17:38:35.728 [INFO][4712] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bg95b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:35.757268 containerd[1442]: 2024-11-12 17:38:35.729 [INFO][4712] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" 
Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bg95b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0", GenerateName:"calico-apiserver-7f4f97b7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"19027f50-17e2-49b5-8e0e-161e0d0f74db", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f97b7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70", Pod:"calico-apiserver-7f4f97b7c-bg95b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75b185f7688", MAC:"06:23:ee:ab:0c:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:35.757268 containerd[1442]: 2024-11-12 17:38:35.743 [INFO][4712] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f97b7c-bg95b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:35.785311 containerd[1442]: time="2024-11-12T17:38:35.785223998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:38:35.785458 containerd[1442]: time="2024-11-12T17:38:35.785286518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:38:35.785458 containerd[1442]: time="2024-11-12T17:38:35.785302438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:35.785458 containerd[1442]: time="2024-11-12T17:38:35.785403518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:35.788053 systemd-networkd[1384]: cali0dfbc56a8a1: Link UP Nov 12 17:38:35.788340 systemd-networkd[1384]: cali0dfbc56a8a1: Gained carrier Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.610 [INFO][4701] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--6lmj5-eth0 coredns-76f75df574- kube-system 2ba7e0bf-58bb-4e6e-91ed-49866ce4c112 917 0 2024-11-12 17:38:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-6lmj5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0dfbc56a8a1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" Namespace="kube-system" Pod="coredns-76f75df574-6lmj5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6lmj5-" Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.610 [INFO][4701] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" Namespace="kube-system" Pod="coredns-76f75df574-6lmj5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.655 [INFO][4726] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" HandleID="k8s-pod-network.6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" Workload="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.680 [INFO][4726] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" HandleID="k8s-pod-network.6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" Workload="localhost-k8s-coredns--76f75df574--6lmj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000372230), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-6lmj5", "timestamp":"2024-11-12 17:38:35.655894057 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.680 [INFO][4726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.716 [INFO][4726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
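Note the interleaving across the two concurrent sandbox ADDs: worker [4731] acquires the host-wide IPAM lock at 17:38:35.671 and releases it at .714, while [4726] logs "About to acquire" at .680 but "Acquired" only at .716, two milliseconds after the release. Per-node address assignment is fully serialized on that lock, which is what makes the back-to-back .133/.134 claims safe. A toy simulation of the same serialization, assuming a sync.Mutex in place of Calico's cross-process lock:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// assign simulates one CNI ADD contending for the host-wide IPAM lock.
func assign(mu *sync.Mutex, worker string, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Printf("[%s] about to acquire host-wide IPAM lock\n", worker)
	mu.Lock()
	fmt.Printf("[%s] acquired host-wide IPAM lock\n", worker)
	time.Sleep(40 * time.Millisecond) // stand-in for reading and writing the block
	mu.Unlock()
	fmt.Printf("[%s] released host-wide IPAM lock\n", worker)
}

func main() {
	var mu sync.Mutex
	var wg sync.WaitGroup
	wg.Add(2)
	go assign(&mu, "4731", &wg)
	go assign(&mu, "4726", &wg)
	// Whichever worker wins the lock, the other's "acquired" line can
	// only print after the winner's release, as in the journal above.
	wg.Wait()
}
```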
Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.716 [INFO][4726] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.719 [INFO][4726] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" host="localhost" Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.735 [INFO][4726] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.750 [INFO][4726] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.757 [INFO][4726] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.761 [INFO][4726] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.761 [INFO][4726] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" host="localhost" Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.765 [INFO][4726] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67 Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.771 [INFO][4726] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" host="localhost" Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.779 [INFO][4726] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" host="localhost" Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.779 [INFO][4726] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" host="localhost" Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.780 [INFO][4726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
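The second walk claims 192.168.88.134/26, the next ordinal after the .133 handed out moments earlier: within an affine block the allocator simply takes the first free slot. A sketch of that first-free scan over a /26 treated as 64 ordinals (the block type here keeps only a used bit per address; Calico's real blocks also record handles and attributes, and which lower ordinals were already taken on this node is an assumption):

```go
package main

import (
	"fmt"
	"net/netip"
)

// block models an affine /26: a base prefix plus a used bit per ordinal.
type block struct {
	cidr netip.Prefix
	used [64]bool
}

// next claims the first free ordinal and returns its address, which is
// why consecutive pods on one node get consecutive addresses.
func (b *block) next() (netip.Addr, bool) {
	for i := 0; i < 64; i++ {
		if !b.used[i] {
			b.used[i] = true
			addr := b.cidr.Addr()
			for j := 0; j < i; j++ {
				addr = addr.Next()
			}
			return addr, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26")}
	for i := 0; i < 5; i++ {
		b.used[i] = true // assumption: .128-.132 already claimed on this node
	}
	a1, _ := b.next()
	a2, _ := b.next()
	fmt.Println(a1, a2) // 192.168.88.133 192.168.88.134, matching the log
}
```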
Nov 12 17:38:35.811004 containerd[1442]: 2024-11-12 17:38:35.780 [INFO][4726] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" HandleID="k8s-pod-network.6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" Workload="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:35.811593 containerd[1442]: 2024-11-12 17:38:35.783 [INFO][4701] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" Namespace="kube-system" Pod="coredns-76f75df574-6lmj5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6lmj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--6lmj5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2ba7e0bf-58bb-4e6e-91ed-49866ce4c112", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-6lmj5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dfbc56a8a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:35.811593 containerd[1442]: 2024-11-12 17:38:35.784 [INFO][4701] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" Namespace="kube-system" Pod="coredns-76f75df574-6lmj5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:35.811593 containerd[1442]: 2024-11-12 17:38:35.784 [INFO][4701] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dfbc56a8a1 ContainerID="6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" Namespace="kube-system" Pod="coredns-76f75df574-6lmj5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:35.811593 containerd[1442]: 2024-11-12 17:38:35.788 [INFO][4701] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" Namespace="kube-system" Pod="coredns-76f75df574-6lmj5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:35.811593 containerd[1442]: 2024-11-12 17:38:35.792 
[INFO][4701] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" Namespace="kube-system" Pod="coredns-76f75df574-6lmj5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6lmj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--6lmj5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2ba7e0bf-58bb-4e6e-91ed-49866ce4c112", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67", Pod:"coredns-76f75df574-6lmj5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dfbc56a8a1", MAC:"1e:28:1d:16:fe:bd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:35.811593 containerd[1442]: 2024-11-12 17:38:35.806 [INFO][4701] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67" Namespace="kube-system" Pod="coredns-76f75df574-6lmj5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:35.834187 systemd[1]: Started cri-containerd-972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70.scope - libcontainer container 972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70. Nov 12 17:38:35.850794 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:38:35.862340 containerd[1442]: time="2024-11-12T17:38:35.861746090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:38:35.862340 containerd[1442]: time="2024-11-12T17:38:35.861811210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:38:35.862340 containerd[1442]: time="2024-11-12T17:38:35.861825930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:35.862340 containerd[1442]: time="2024-11-12T17:38:35.861904610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:38:35.882147 systemd[1]: Started cri-containerd-6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67.scope - libcontainer container 6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67. Nov 12 17:38:35.883969 containerd[1442]: time="2024-11-12T17:38:35.883936253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f97b7c-bg95b,Uid:19027f50-17e2-49b5-8e0e-161e0d0f74db,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70\"" Nov 12 17:38:35.886894 containerd[1442]: time="2024-11-12T17:38:35.886777334Z" level=info msg="CreateContainer within sandbox \"972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 17:38:35.897697 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:38:35.898810 containerd[1442]: time="2024-11-12T17:38:35.898668375Z" level=info msg="CreateContainer within sandbox \"972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f676069f2302b3f6bd132c388480d101d4463c29247479c10d0677cf714ef00e\"" Nov 12 17:38:35.899865 containerd[1442]: time="2024-11-12T17:38:35.899434856Z" level=info msg="StartContainer for \"f676069f2302b3f6bd132c388480d101d4463c29247479c10d0677cf714ef00e\"" Nov 12 17:38:35.918726 containerd[1442]: time="2024-11-12T17:38:35.918683459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6lmj5,Uid:2ba7e0bf-58bb-4e6e-91ed-49866ce4c112,Namespace:kube-system,Attempt:1,} returns sandbox id \"6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67\"" Nov 12 17:38:35.919900 kubelet[2558]: E1112 17:38:35.919718 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:35.927825 containerd[1442]: time="2024-11-12T17:38:35.927620700Z" level=info msg="CreateContainer within sandbox \"6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 17:38:35.947303 systemd[1]: Started cri-containerd-f676069f2302b3f6bd132c388480d101d4463c29247479c10d0677cf714ef00e.scope - libcontainer container f676069f2302b3f6bd132c388480d101d4463c29247479c10d0677cf714ef00e. 
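The WorkloadEndpoint dumps above print Go struct literals, so the coredns ports appear in hex: Port:0x35 is 53 (dns and dns-tcp) and Port:0x23c1 is 9153 (the coredns metrics port), while Protocol{Type:1, NumVal:0x0, StrVal:"UDP"} is a number-or-string union carrying its string arm. A quick decoding check (the protocol type below is a simplified stand-in for Calico's numorstring.Protocol, and Type 1 meaning "string form" is an inference from StrVal being set while NumVal is zero):

```go
package main

import "fmt"

// protocol loosely mirrors numorstring.Protocol as it prints in the log.
// Assumption: Type selects the arm, with 1 meaning the string form.
type protocol struct {
	Type   int
	NumVal uint8
	StrVal string
}

func (p protocol) String() string {
	if p.Type == 1 {
		return p.StrVal
	}
	return fmt.Sprint(p.NumVal)
}

func main() {
	fmt.Println(0x35, 0x23c1)                     // 53 9153: dns and metrics ports
	fmt.Println(protocol{Type: 1, StrVal: "UDP"}) // UDP
}
```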
Nov 12 17:38:35.979241 containerd[1442]: time="2024-11-12T17:38:35.979200948Z" level=info msg="StartContainer for \"f676069f2302b3f6bd132c388480d101d4463c29247479c10d0677cf714ef00e\" returns successfully" Nov 12 17:38:35.984997 containerd[1442]: time="2024-11-12T17:38:35.984875509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:35.985578 containerd[1442]: time="2024-11-12T17:38:35.985542389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=9883360" Nov 12 17:38:35.988685 containerd[1442]: time="2024-11-12T17:38:35.988520470Z" level=info msg="ImageCreate event name:\"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:35.989456 containerd[1442]: time="2024-11-12T17:38:35.989420750Z" level=info msg="CreateContainer within sandbox \"6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"586570eb5a9526ff515c84c4070a80faa1da6aa95844e8b2255b5245f45b4097\"" Nov 12 17:38:35.990930 containerd[1442]: time="2024-11-12T17:38:35.990690270Z" level=info msg="StartContainer for \"586570eb5a9526ff515c84c4070a80faa1da6aa95844e8b2255b5245f45b4097\"" Nov 12 17:38:35.993349 containerd[1442]: time="2024-11-12T17:38:35.993266590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:35.993956 containerd[1442]: time="2024-11-12T17:38:35.993795311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11252948\" in 1.24822376s" Nov 12 17:38:35.993956 containerd[1442]: time="2024-11-12T17:38:35.993831751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\"" Nov 12 17:38:35.996309 containerd[1442]: time="2024-11-12T17:38:35.995671311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 17:38:35.997861 containerd[1442]: time="2024-11-12T17:38:35.997412071Z" level=info msg="CreateContainer within sandbox \"047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 17:38:36.020457 containerd[1442]: time="2024-11-12T17:38:36.020409755Z" level=info msg="CreateContainer within sandbox \"047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"46e91576a7b178f36b8f083b0829e3dd9001d04c558197fbabf61880bf2d94d4\"" Nov 12 17:38:36.021240 containerd[1442]: time="2024-11-12T17:38:36.021112315Z" level=info msg="StartContainer for \"46e91576a7b178f36b8f083b0829e3dd9001d04c558197fbabf61880bf2d94d4\"" Nov 12 17:38:36.027354 systemd[1]: Started 
cri-containerd-586570eb5a9526ff515c84c4070a80faa1da6aa95844e8b2255b5245f45b4097.scope - libcontainer container 586570eb5a9526ff515c84c4070a80faa1da6aa95844e8b2255b5245f45b4097. Nov 12 17:38:36.056161 systemd[1]: Started cri-containerd-46e91576a7b178f36b8f083b0829e3dd9001d04c558197fbabf61880bf2d94d4.scope - libcontainer container 46e91576a7b178f36b8f083b0829e3dd9001d04c558197fbabf61880bf2d94d4. Nov 12 17:38:36.059609 containerd[1442]: time="2024-11-12T17:38:36.059565080Z" level=info msg="StartContainer for \"586570eb5a9526ff515c84c4070a80faa1da6aa95844e8b2255b5245f45b4097\" returns successfully" Nov 12 17:38:36.119627 containerd[1442]: time="2024-11-12T17:38:36.119583369Z" level=info msg="StartContainer for \"46e91576a7b178f36b8f083b0829e3dd9001d04c558197fbabf61880bf2d94d4\" returns successfully" Nov 12 17:38:36.188827 kubelet[2558]: I1112 17:38:36.188311 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:38:36.193171 kubelet[2558]: E1112 17:38:36.193130 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:36.571112 kubelet[2558]: I1112 17:38:36.571078 2558 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 17:38:36.578691 kubelet[2558]: I1112 17:38:36.578657 2558 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 17:38:36.601073 systemd-networkd[1384]: cali240bbeb3066: Gained IPv6LL Nov 12 17:38:36.711045 kubelet[2558]: E1112 17:38:36.711013 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:36.711450 kubelet[2558]: E1112 17:38:36.711432 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:36.719961 kubelet[2558]: I1112 17:38:36.719914 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f4f97b7c-bg95b" podStartSLOduration=25.719875418 podStartE2EDuration="25.719875418s" podCreationTimestamp="2024-11-12 17:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:38:36.719695818 +0000 UTC m=+48.326771347" watchObservedRunningTime="2024-11-12 17:38:36.719875418 +0000 UTC m=+48.326950947" Nov 12 17:38:36.734492 kubelet[2558]: I1112 17:38:36.734444 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-26lzj" podStartSLOduration=22.579501004 podStartE2EDuration="25.73440538s" podCreationTimestamp="2024-11-12 17:38:11 +0000 UTC" firstStartedPulling="2024-11-12 17:38:32.839288495 +0000 UTC m=+44.446364024" lastFinishedPulling="2024-11-12 17:38:35.994192871 +0000 UTC m=+47.601268400" observedRunningTime="2024-11-12 17:38:36.73413942 +0000 UTC m=+48.341214949" watchObservedRunningTime="2024-11-12 17:38:36.73440538 +0000 UTC m=+48.341480909" Nov 12 17:38:36.792766 systemd-networkd[1384]: calie322e528534: Gained IPv6LL Nov 12 17:38:36.920192 systemd-networkd[1384]: cali75b185f7688: Gained IPv6LL Nov 12 17:38:36.984201 
systemd-networkd[1384]: cali0dfbc56a8a1: Gained IPv6LL Nov 12 17:38:37.538580 containerd[1442]: time="2024-11-12T17:38:37.538067014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:37.540321 containerd[1442]: time="2024-11-12T17:38:37.540265654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=31961371" Nov 12 17:38:37.541028 containerd[1442]: time="2024-11-12T17:38:37.540975975Z" level=info msg="ImageCreate event name:\"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:37.543965 containerd[1442]: time="2024-11-12T17:38:37.543915935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:38:37.545129 containerd[1442]: time="2024-11-12T17:38:37.544669215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"33330975\" in 1.548957304s" Nov 12 17:38:37.545129 containerd[1442]: time="2024-11-12T17:38:37.544698615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\"" Nov 12 17:38:37.553433 containerd[1442]: time="2024-11-12T17:38:37.553400136Z" level=info msg="CreateContainer within sandbox \"191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 17:38:37.573853 containerd[1442]: time="2024-11-12T17:38:37.573723419Z" level=info msg="CreateContainer within sandbox \"191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"20f61e0111fb9ddeb5b2e2e4a0783e48471c947bb52d14340ecd874de63e7110\"" Nov 12 17:38:37.574375 containerd[1442]: time="2024-11-12T17:38:37.574332139Z" level=info msg="StartContainer for \"20f61e0111fb9ddeb5b2e2e4a0783e48471c947bb52d14340ecd874de63e7110\"" Nov 12 17:38:37.604646 systemd[1]: Started cri-containerd-20f61e0111fb9ddeb5b2e2e4a0783e48471c947bb52d14340ecd874de63e7110.scope - libcontainer container 20f61e0111fb9ddeb5b2e2e4a0783e48471c947bb52d14340ecd874de63e7110. 
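The pod_startup_latency_tracker lines are worth decoding: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp, and podStartSLOduration is that same span with image-pull time (firstStartedPulling to lastFinishedPulling) subtracted; pods whose pull timestamps print as the zero time 0001-01-01 never pulled, so the two figures coincide, as for coredns-76f75df574-t796q earlier. For csi-node-driver-26lzj the arithmetic checks out: 25.734s end to end minus a 3.155s pull gives the reported 22.580s. A verification using the timestamps copied from that line (monotonic m=+ suffixes dropped):

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps from the csi-node-driver-26lzj latency line above.
	created := mustParse("2024-11-12 17:38:11 +0000 UTC")
	pullStart := mustParse("2024-11-12 17:38:32.839288495 +0000 UTC")
	pullEnd := mustParse("2024-11-12 17:38:35.994192871 +0000 UTC")
	running := mustParse("2024-11-12 17:38:36.73440538 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // pull time excluded
	fmt.Println(e2e, slo)               // 25.73440538s 22.579501004s
}
```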
Nov 12 17:38:37.645779 containerd[1442]: time="2024-11-12T17:38:37.645740189Z" level=info msg="StartContainer for \"20f61e0111fb9ddeb5b2e2e4a0783e48471c947bb52d14340ecd874de63e7110\" returns successfully" Nov 12 17:38:37.716993 kubelet[2558]: I1112 17:38:37.716503 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:38:37.719215 kubelet[2558]: E1112 17:38:37.717181 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:37.720141 kubelet[2558]: E1112 17:38:37.720063 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:37.732828 kubelet[2558]: I1112 17:38:37.731231 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-6lmj5" podStartSLOduration=33.731194601 podStartE2EDuration="33.731194601s" podCreationTimestamp="2024-11-12 17:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:38:36.750480823 +0000 UTC m=+48.357556352" watchObservedRunningTime="2024-11-12 17:38:37.731194601 +0000 UTC m=+49.338270130" Nov 12 17:38:37.734552 kubelet[2558]: I1112 17:38:37.734413 2558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-bc9b56c48-rq9pz" podStartSLOduration=24.13263605 podStartE2EDuration="26.734373641s" podCreationTimestamp="2024-11-12 17:38:11 +0000 UTC" firstStartedPulling="2024-11-12 17:38:34.943542824 +0000 UTC m=+46.550618353" lastFinishedPulling="2024-11-12 17:38:37.545280415 +0000 UTC m=+49.152355944" observedRunningTime="2024-11-12 17:38:37.730126921 +0000 UTC m=+49.337202450" watchObservedRunningTime="2024-11-12 17:38:37.734373641 +0000 UTC m=+49.341449210" Nov 12 17:38:37.782418 systemd[1]: Started sshd@12-10.0.0.11:22-10.0.0.1:32998.service - OpenSSH per-connection server daemon (10.0.0.1:32998). Nov 12 17:38:37.852130 sshd[5097]: Accepted publickey for core from 10.0.0.1 port 32998 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:38:37.854110 sshd[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:38:37.858564 systemd-logind[1426]: New session 13 of user core. Nov 12 17:38:37.867644 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 17:38:38.041569 sshd[5097]: pam_unix(sshd:session): session closed for user core Nov 12 17:38:38.044926 systemd[1]: sshd@12-10.0.0.11:22-10.0.0.1:32998.service: Deactivated successfully. Nov 12 17:38:38.046463 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 17:38:38.047930 systemd-logind[1426]: Session 13 logged out. Waiting for processes to exit. Nov 12 17:38:38.049101 systemd-logind[1426]: Removed session 13. Nov 12 17:38:38.718411 kubelet[2558]: E1112 17:38:38.718292 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:38:43.053771 systemd[1]: Started sshd@13-10.0.0.11:22-10.0.0.1:53136.service - OpenSSH per-connection server daemon (10.0.0.1:53136). 
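The recurring dns.go:153 "Nameserver limits exceeded" errors throughout this boot are warnings, not failures: the node's resolv.conf lists more nameservers than kubelet will propagate, so it applies only the first three, the "applied nameserver line" 1.1.1.1 1.0.0.1 8.8.8.8, and reports the rest as omitted. The cap of three matches glibc's MAXNS. A sketch of that trimming (applyNameserverLimit is an illustrative name; kubelet's real resolv.conf handling also enforces search-domain and option limits):

```go
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet applies the same cap

// applyNameserverLimit keeps the first maxNameservers entries and
// reports whether any were dropped, mirroring the dns.go warning.
func applyNameserverLimit(ns []string) (applied []string, exceeded bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Assumption: the node listed at least one server beyond the three
	// that appear in the applied line; 9.9.9.9 is a placeholder.
	ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	if applied, exceeded := applyNameserverLimit(ns); exceeded {
		fmt.Printf("Nameserver limits exceeded; applied: %s\n", strings.Join(applied, " "))
	}
}
```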
Nov 12 17:38:43.099330 sshd[5123]: Accepted publickey for core from 10.0.0.1 port 53136 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:38:43.100868 sshd[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:38:43.108460 systemd-logind[1426]: New session 14 of user core. Nov 12 17:38:43.117165 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 17:38:43.260397 sshd[5123]: pam_unix(sshd:session): session closed for user core Nov 12 17:38:43.264552 systemd[1]: sshd@13-10.0.0.11:22-10.0.0.1:53136.service: Deactivated successfully. Nov 12 17:38:43.266802 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 17:38:43.267754 systemd-logind[1426]: Session 14 logged out. Waiting for processes to exit. Nov 12 17:38:43.268651 systemd-logind[1426]: Removed session 14. Nov 12 17:38:48.275960 systemd[1]: Started sshd@14-10.0.0.11:22-10.0.0.1:53140.service - OpenSSH per-connection server daemon (10.0.0.1:53140). Nov 12 17:38:48.323245 sshd[5147]: Accepted publickey for core from 10.0.0.1 port 53140 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:38:48.325397 sshd[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:38:48.330284 systemd-logind[1426]: New session 15 of user core. Nov 12 17:38:48.339334 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 17:38:48.493966 containerd[1442]: time="2024-11-12T17:38:48.493916782Z" level=info msg="StopPodSandbox for \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\"" Nov 12 17:38:48.509839 sshd[5147]: pam_unix(sshd:session): session closed for user core Nov 12 17:38:48.514393 systemd[1]: sshd@14-10.0.0.11:22-10.0.0.1:53140.service: Deactivated successfully. Nov 12 17:38:48.518312 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 17:38:48.520507 systemd-logind[1426]: Session 15 logged out. Waiting for processes to exit. Nov 12 17:38:48.521723 systemd-logind[1426]: Removed session 15. Nov 12 17:38:48.579663 containerd[1442]: 2024-11-12 17:38:48.543 [WARNING][5177] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--26lzj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"307248dd-d398-4f72-8974-33e136137cb7", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f", Pod:"csi-node-driver-26lzj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0064864d5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:48.579663 containerd[1442]: 2024-11-12 17:38:48.544 [INFO][5177] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:48.579663 containerd[1442]: 2024-11-12 17:38:48.544 [INFO][5177] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" iface="eth0" netns="" Nov 12 17:38:48.579663 containerd[1442]: 2024-11-12 17:38:48.544 [INFO][5177] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:48.579663 containerd[1442]: 2024-11-12 17:38:48.544 [INFO][5177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:48.579663 containerd[1442]: 2024-11-12 17:38:48.565 [INFO][5187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" HandleID="k8s-pod-network.a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Workload="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:48.579663 containerd[1442]: 2024-11-12 17:38:48.565 [INFO][5187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:48.579663 containerd[1442]: 2024-11-12 17:38:48.565 [INFO][5187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:48.579663 containerd[1442]: 2024-11-12 17:38:48.574 [WARNING][5187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" HandleID="k8s-pod-network.a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Workload="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:48.579663 containerd[1442]: 2024-11-12 17:38:48.574 [INFO][5187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" HandleID="k8s-pod-network.a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Workload="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:48.579663 containerd[1442]: 2024-11-12 17:38:48.576 [INFO][5187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:48.579663 containerd[1442]: 2024-11-12 17:38:48.577 [INFO][5177] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:48.579663 containerd[1442]: time="2024-11-12T17:38:48.579623468Z" level=info msg="TearDown network for sandbox \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\" successfully" Nov 12 17:38:48.580311 containerd[1442]: time="2024-11-12T17:38:48.580277508Z" level=info msg="StopPodSandbox for \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\" returns successfully" Nov 12 17:38:48.581182 containerd[1442]: time="2024-11-12T17:38:48.581130268Z" level=info msg="RemovePodSandbox for \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\"" Nov 12 17:38:48.581522 containerd[1442]: time="2024-11-12T17:38:48.581304908Z" level=info msg="Forcibly stopping sandbox \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\"" Nov 12 17:38:48.648965 containerd[1442]: 2024-11-12 17:38:48.616 [WARNING][5210] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--26lzj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"307248dd-d398-4f72-8974-33e136137cb7", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"047f3c90d6cb290c5462399c50df6d4b8ff9befdaf6c97a2401a3b40d85cee8f", Pod:"csi-node-driver-26lzj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0064864d5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:48.648965 containerd[1442]: 2024-11-12 17:38:48.616 [INFO][5210] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:48.648965 containerd[1442]: 2024-11-12 17:38:48.616 [INFO][5210] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" iface="eth0" netns="" Nov 12 17:38:48.648965 containerd[1442]: 2024-11-12 17:38:48.616 [INFO][5210] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:48.648965 containerd[1442]: 2024-11-12 17:38:48.616 [INFO][5210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:48.648965 containerd[1442]: 2024-11-12 17:38:48.636 [INFO][5217] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" HandleID="k8s-pod-network.a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Workload="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:48.648965 containerd[1442]: 2024-11-12 17:38:48.636 [INFO][5217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:48.648965 containerd[1442]: 2024-11-12 17:38:48.636 [INFO][5217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:48.648965 containerd[1442]: 2024-11-12 17:38:48.644 [WARNING][5217] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" HandleID="k8s-pod-network.a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Workload="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:48.648965 containerd[1442]: 2024-11-12 17:38:48.644 [INFO][5217] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" HandleID="k8s-pod-network.a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Workload="localhost-k8s-csi--node--driver--26lzj-eth0" Nov 12 17:38:48.648965 containerd[1442]: 2024-11-12 17:38:48.646 [INFO][5217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:48.648965 containerd[1442]: 2024-11-12 17:38:48.647 [INFO][5210] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c" Nov 12 17:38:48.649382 containerd[1442]: time="2024-11-12T17:38:48.649023392Z" level=info msg="TearDown network for sandbox \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\" successfully" Nov 12 17:38:48.660157 containerd[1442]: time="2024-11-12T17:38:48.660112393Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:38:48.660266 containerd[1442]: time="2024-11-12T17:38:48.660188833Z" level=info msg="RemovePodSandbox \"a230cd04361bd4ac052cda8d4a23c9a04a540967b842f8035b32d1a615bb1e0c\" returns successfully" Nov 12 17:38:48.660639 containerd[1442]: time="2024-11-12T17:38:48.660613473Z" level=info msg="StopPodSandbox for \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\"" Nov 12 17:38:48.732374 containerd[1442]: 2024-11-12 17:38:48.697 [WARNING][5239] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0", GenerateName:"calico-apiserver-7f4f97b7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f97b7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089", Pod:"calico-apiserver-7f4f97b7c-bdtjn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4a62ec6621", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:48.732374 containerd[1442]: 2024-11-12 17:38:48.698 [INFO][5239] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:48.732374 containerd[1442]: 2024-11-12 17:38:48.698 [INFO][5239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" iface="eth0" netns="" Nov 12 17:38:48.732374 containerd[1442]: 2024-11-12 17:38:48.698 [INFO][5239] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:48.732374 containerd[1442]: 2024-11-12 17:38:48.698 [INFO][5239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:48.732374 containerd[1442]: 2024-11-12 17:38:48.715 [INFO][5246] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" HandleID="k8s-pod-network.b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:48.732374 containerd[1442]: 2024-11-12 17:38:48.715 [INFO][5246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:48.732374 containerd[1442]: 2024-11-12 17:38:48.716 [INFO][5246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:48.732374 containerd[1442]: 2024-11-12 17:38:48.728 [WARNING][5246] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" HandleID="k8s-pod-network.b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:48.732374 containerd[1442]: 2024-11-12 17:38:48.728 [INFO][5246] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" HandleID="k8s-pod-network.b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:48.732374 containerd[1442]: 2024-11-12 17:38:48.729 [INFO][5246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:48.732374 containerd[1442]: 2024-11-12 17:38:48.730 [INFO][5239] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:48.733410 containerd[1442]: time="2024-11-12T17:38:48.732402198Z" level=info msg="TearDown network for sandbox \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\" successfully" Nov 12 17:38:48.733410 containerd[1442]: time="2024-11-12T17:38:48.732426718Z" level=info msg="StopPodSandbox for \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\" returns successfully" Nov 12 17:38:48.733410 containerd[1442]: time="2024-11-12T17:38:48.732928718Z" level=info msg="RemovePodSandbox for \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\"" Nov 12 17:38:48.733410 containerd[1442]: time="2024-11-12T17:38:48.732960438Z" level=info msg="Forcibly stopping sandbox \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\"" Nov 12 17:38:48.797232 containerd[1442]: 2024-11-12 17:38:48.767 [WARNING][5268] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0", GenerateName:"calico-apiserver-7f4f97b7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc25e9ca-d598-4ffd-89c6-df55cbfb8cfb", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f97b7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25e19d4b7d91b0107da083f4d12d8e230c0afe99de04f581f6b9c60bc7831089", Pod:"calico-apiserver-7f4f97b7c-bdtjn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4a62ec6621", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:48.797232 containerd[1442]: 2024-11-12 17:38:48.767 [INFO][5268] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:48.797232 containerd[1442]: 2024-11-12 17:38:48.767 [INFO][5268] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" iface="eth0" netns="" Nov 12 17:38:48.797232 containerd[1442]: 2024-11-12 17:38:48.767 [INFO][5268] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:48.797232 containerd[1442]: 2024-11-12 17:38:48.767 [INFO][5268] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:48.797232 containerd[1442]: 2024-11-12 17:38:48.785 [INFO][5275] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" HandleID="k8s-pod-network.b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:48.797232 containerd[1442]: 2024-11-12 17:38:48.785 [INFO][5275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:48.797232 containerd[1442]: 2024-11-12 17:38:48.785 [INFO][5275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:48.797232 containerd[1442]: 2024-11-12 17:38:48.793 [WARNING][5275] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" HandleID="k8s-pod-network.b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:48.797232 containerd[1442]: 2024-11-12 17:38:48.793 [INFO][5275] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" HandleID="k8s-pod-network.b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bdtjn-eth0" Nov 12 17:38:48.797232 containerd[1442]: 2024-11-12 17:38:48.794 [INFO][5275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:48.797232 containerd[1442]: 2024-11-12 17:38:48.795 [INFO][5268] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef" Nov 12 17:38:48.797686 containerd[1442]: time="2024-11-12T17:38:48.797266882Z" level=info msg="TearDown network for sandbox \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\" successfully" Nov 12 17:38:48.802041 containerd[1442]: time="2024-11-12T17:38:48.802002283Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:38:48.802120 containerd[1442]: time="2024-11-12T17:38:48.802066403Z" level=info msg="RemovePodSandbox \"b27097ca34b692376a7fda182bc8fd4c070af9733afb2f47d1f172f6482ac6ef\" returns successfully" Nov 12 17:38:48.802648 containerd[1442]: time="2024-11-12T17:38:48.802609123Z" level=info msg="StopPodSandbox for \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\"" Nov 12 17:38:48.870498 containerd[1442]: 2024-11-12 17:38:48.835 [WARNING][5297] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--t796q-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8e960c16-fd3a-4a1e-b33c-95978141e8c2", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f", Pod:"coredns-76f75df574-t796q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie322e528534", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:48.870498 containerd[1442]: 2024-11-12 17:38:48.836 [INFO][5297] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:48.870498 containerd[1442]: 2024-11-12 17:38:48.836 [INFO][5297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" iface="eth0" netns="" Nov 12 17:38:48.870498 containerd[1442]: 2024-11-12 17:38:48.836 [INFO][5297] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:48.870498 containerd[1442]: 2024-11-12 17:38:48.836 [INFO][5297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:48.870498 containerd[1442]: 2024-11-12 17:38:48.857 [INFO][5304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" HandleID="k8s-pod-network.30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Workload="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:48.870498 containerd[1442]: 2024-11-12 17:38:48.858 [INFO][5304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:48.870498 containerd[1442]: 2024-11-12 17:38:48.858 [INFO][5304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:38:48.870498 containerd[1442]: 2024-11-12 17:38:48.866 [WARNING][5304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" HandleID="k8s-pod-network.30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Workload="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:48.870498 containerd[1442]: 2024-11-12 17:38:48.866 [INFO][5304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" HandleID="k8s-pod-network.30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Workload="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:48.870498 containerd[1442]: 2024-11-12 17:38:48.867 [INFO][5304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:48.870498 containerd[1442]: 2024-11-12 17:38:48.868 [INFO][5297] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:48.870498 containerd[1442]: time="2024-11-12T17:38:48.870460407Z" level=info msg="TearDown network for sandbox \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\" successfully" Nov 12 17:38:48.870498 containerd[1442]: time="2024-11-12T17:38:48.870484527Z" level=info msg="StopPodSandbox for \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\" returns successfully" Nov 12 17:38:48.872550 containerd[1442]: time="2024-11-12T17:38:48.872509608Z" level=info msg="RemovePodSandbox for \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\"" Nov 12 17:38:48.872678 containerd[1442]: time="2024-11-12T17:38:48.872559088Z" level=info msg="Forcibly stopping sandbox \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\"" Nov 12 17:38:48.936420 containerd[1442]: 2024-11-12 17:38:48.906 [WARNING][5327] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--t796q-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8e960c16-fd3a-4a1e-b33c-95978141e8c2", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9557821e2a6a8cd41e97ea353151f5d048096f3c871176d32407c5ca150137f", Pod:"coredns-76f75df574-t796q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie322e528534", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:48.936420 containerd[1442]: 2024-11-12 17:38:48.906 [INFO][5327] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:48.936420 containerd[1442]: 2024-11-12 17:38:48.906 [INFO][5327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" iface="eth0" netns="" Nov 12 17:38:48.936420 containerd[1442]: 2024-11-12 17:38:48.906 [INFO][5327] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:48.936420 containerd[1442]: 2024-11-12 17:38:48.906 [INFO][5327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:48.936420 containerd[1442]: 2024-11-12 17:38:48.922 [INFO][5334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" HandleID="k8s-pod-network.30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Workload="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:48.936420 containerd[1442]: 2024-11-12 17:38:48.922 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:48.936420 containerd[1442]: 2024-11-12 17:38:48.922 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:38:48.936420 containerd[1442]: 2024-11-12 17:38:48.931 [WARNING][5334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" HandleID="k8s-pod-network.30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Workload="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:48.936420 containerd[1442]: 2024-11-12 17:38:48.931 [INFO][5334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" HandleID="k8s-pod-network.30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Workload="localhost-k8s-coredns--76f75df574--t796q-eth0" Nov 12 17:38:48.936420 containerd[1442]: 2024-11-12 17:38:48.933 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:48.936420 containerd[1442]: 2024-11-12 17:38:48.934 [INFO][5327] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1" Nov 12 17:38:48.936791 containerd[1442]: time="2024-11-12T17:38:48.936464732Z" level=info msg="TearDown network for sandbox \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\" successfully" Nov 12 17:38:48.939418 containerd[1442]: time="2024-11-12T17:38:48.939346852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:38:48.939471 containerd[1442]: time="2024-11-12T17:38:48.939440332Z" level=info msg="RemovePodSandbox \"30b9c6b732dbe93918856b8f4aa583dc3c430193f0bf6f1106e5ada4f9d31fe1\" returns successfully" Nov 12 17:38:48.939957 containerd[1442]: time="2024-11-12T17:38:48.939919812Z" level=info msg="StopPodSandbox for \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\"" Nov 12 17:38:49.006127 containerd[1442]: 2024-11-12 17:38:48.973 [WARNING][5357] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--6lmj5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2ba7e0bf-58bb-4e6e-91ed-49866ce4c112", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67", Pod:"coredns-76f75df574-6lmj5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dfbc56a8a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:49.006127 containerd[1442]: 2024-11-12 17:38:48.974 [INFO][5357] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:49.006127 containerd[1442]: 2024-11-12 17:38:48.974 [INFO][5357] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" iface="eth0" netns="" Nov 12 17:38:49.006127 containerd[1442]: 2024-11-12 17:38:48.974 [INFO][5357] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:49.006127 containerd[1442]: 2024-11-12 17:38:48.974 [INFO][5357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:49.006127 containerd[1442]: 2024-11-12 17:38:48.993 [INFO][5365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" HandleID="k8s-pod-network.fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Workload="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:49.006127 containerd[1442]: 2024-11-12 17:38:48.993 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:49.006127 containerd[1442]: 2024-11-12 17:38:48.993 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:38:49.006127 containerd[1442]: 2024-11-12 17:38:49.001 [WARNING][5365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" HandleID="k8s-pod-network.fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Workload="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:49.006127 containerd[1442]: 2024-11-12 17:38:49.001 [INFO][5365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" HandleID="k8s-pod-network.fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Workload="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:49.006127 containerd[1442]: 2024-11-12 17:38:49.003 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:49.006127 containerd[1442]: 2024-11-12 17:38:49.004 [INFO][5357] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:49.006646 containerd[1442]: time="2024-11-12T17:38:49.006184657Z" level=info msg="TearDown network for sandbox \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\" successfully" Nov 12 17:38:49.006646 containerd[1442]: time="2024-11-12T17:38:49.006228017Z" level=info msg="StopPodSandbox for \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\" returns successfully" Nov 12 17:38:49.006794 containerd[1442]: time="2024-11-12T17:38:49.006754017Z" level=info msg="RemovePodSandbox for \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\"" Nov 12 17:38:49.006823 containerd[1442]: time="2024-11-12T17:38:49.006807777Z" level=info msg="Forcibly stopping sandbox \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\"" Nov 12 17:38:49.072411 containerd[1442]: 2024-11-12 17:38:49.040 [WARNING][5387] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--6lmj5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2ba7e0bf-58bb-4e6e-91ed-49866ce4c112", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ad47782cf8a52fb5c97945d9c5107cf01c8815035f15c47ffd571841e9c6d67", Pod:"coredns-76f75df574-6lmj5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dfbc56a8a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:49.072411 containerd[1442]: 2024-11-12 17:38:49.040 [INFO][5387] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:49.072411 containerd[1442]: 2024-11-12 17:38:49.040 [INFO][5387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" iface="eth0" netns="" Nov 12 17:38:49.072411 containerd[1442]: 2024-11-12 17:38:49.040 [INFO][5387] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:49.072411 containerd[1442]: 2024-11-12 17:38:49.040 [INFO][5387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:49.072411 containerd[1442]: 2024-11-12 17:38:49.059 [INFO][5395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" HandleID="k8s-pod-network.fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Workload="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:49.072411 containerd[1442]: 2024-11-12 17:38:49.059 [INFO][5395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:49.072411 containerd[1442]: 2024-11-12 17:38:49.059 [INFO][5395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:38:49.072411 containerd[1442]: 2024-11-12 17:38:49.067 [WARNING][5395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" HandleID="k8s-pod-network.fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Workload="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:49.072411 containerd[1442]: 2024-11-12 17:38:49.067 [INFO][5395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" HandleID="k8s-pod-network.fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Workload="localhost-k8s-coredns--76f75df574--6lmj5-eth0" Nov 12 17:38:49.072411 containerd[1442]: 2024-11-12 17:38:49.069 [INFO][5395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:49.072411 containerd[1442]: 2024-11-12 17:38:49.070 [INFO][5387] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41" Nov 12 17:38:49.072797 containerd[1442]: time="2024-11-12T17:38:49.072454981Z" level=info msg="TearDown network for sandbox \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\" successfully" Nov 12 17:38:49.077404 containerd[1442]: time="2024-11-12T17:38:49.077359301Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:38:49.077485 containerd[1442]: time="2024-11-12T17:38:49.077444581Z" level=info msg="RemovePodSandbox \"fa36b82fbf68dd7e8eddec52122ecf1daf8145d64510aa16c25a00b8c6cd8c41\" returns successfully" Nov 12 17:38:49.078194 containerd[1442]: time="2024-11-12T17:38:49.077884781Z" level=info msg="StopPodSandbox for \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\"" Nov 12 17:38:49.152370 containerd[1442]: 2024-11-12 17:38:49.115 [WARNING][5418] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0", GenerateName:"calico-kube-controllers-bc9b56c48-", Namespace:"calico-system", SelfLink:"", UID:"54c761a6-cb36-44cf-b192-819c3beafff3", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc9b56c48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884", Pod:"calico-kube-controllers-bc9b56c48-rq9pz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali240bbeb3066", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:49.152370 containerd[1442]: 2024-11-12 17:38:49.116 [INFO][5418] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:49.152370 containerd[1442]: 2024-11-12 17:38:49.116 [INFO][5418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" iface="eth0" netns="" Nov 12 17:38:49.152370 containerd[1442]: 2024-11-12 17:38:49.116 [INFO][5418] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:49.152370 containerd[1442]: 2024-11-12 17:38:49.116 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:49.152370 containerd[1442]: 2024-11-12 17:38:49.134 [INFO][5425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" HandleID="k8s-pod-network.800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Workload="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:49.152370 containerd[1442]: 2024-11-12 17:38:49.134 [INFO][5425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:49.152370 containerd[1442]: 2024-11-12 17:38:49.134 [INFO][5425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:49.152370 containerd[1442]: 2024-11-12 17:38:49.143 [WARNING][5425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" HandleID="k8s-pod-network.800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Workload="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:49.152370 containerd[1442]: 2024-11-12 17:38:49.143 [INFO][5425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" HandleID="k8s-pod-network.800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Workload="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:49.152370 containerd[1442]: 2024-11-12 17:38:49.146 [INFO][5425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:49.152370 containerd[1442]: 2024-11-12 17:38:49.150 [INFO][5418] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:49.152964 containerd[1442]: time="2024-11-12T17:38:49.152841666Z" level=info msg="TearDown network for sandbox \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\" successfully" Nov 12 17:38:49.152964 containerd[1442]: time="2024-11-12T17:38:49.152872506Z" level=info msg="StopPodSandbox for \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\" returns successfully" Nov 12 17:38:49.153434 containerd[1442]: time="2024-11-12T17:38:49.153408386Z" level=info msg="RemovePodSandbox for \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\"" Nov 12 17:38:49.153489 containerd[1442]: time="2024-11-12T17:38:49.153442546Z" level=info msg="Forcibly stopping sandbox \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\"" Nov 12 17:38:49.226025 containerd[1442]: 2024-11-12 17:38:49.189 [WARNING][5449] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0", GenerateName:"calico-kube-controllers-bc9b56c48-", Namespace:"calico-system", SelfLink:"", UID:"54c761a6-cb36-44cf-b192-819c3beafff3", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc9b56c48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"191cd33cc4d85ead4139b3e862083c03ad517d39c891df522114c65db84d7884", Pod:"calico-kube-controllers-bc9b56c48-rq9pz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali240bbeb3066", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:49.226025 containerd[1442]: 2024-11-12 17:38:49.189 [INFO][5449] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:49.226025 containerd[1442]: 2024-11-12 17:38:49.189 [INFO][5449] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" iface="eth0" netns="" Nov 12 17:38:49.226025 containerd[1442]: 2024-11-12 17:38:49.189 [INFO][5449] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:49.226025 containerd[1442]: 2024-11-12 17:38:49.189 [INFO][5449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:49.226025 containerd[1442]: 2024-11-12 17:38:49.209 [INFO][5457] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" HandleID="k8s-pod-network.800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Workload="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:49.226025 containerd[1442]: 2024-11-12 17:38:49.209 [INFO][5457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:49.226025 containerd[1442]: 2024-11-12 17:38:49.210 [INFO][5457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:49.226025 containerd[1442]: 2024-11-12 17:38:49.219 [WARNING][5457] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" HandleID="k8s-pod-network.800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Workload="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:49.226025 containerd[1442]: 2024-11-12 17:38:49.219 [INFO][5457] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" HandleID="k8s-pod-network.800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Workload="localhost-k8s-calico--kube--controllers--bc9b56c48--rq9pz-eth0" Nov 12 17:38:49.226025 containerd[1442]: 2024-11-12 17:38:49.221 [INFO][5457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:49.226025 containerd[1442]: 2024-11-12 17:38:49.222 [INFO][5449] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f" Nov 12 17:38:49.226025 containerd[1442]: time="2024-11-12T17:38:49.225736871Z" level=info msg="TearDown network for sandbox \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\" successfully" Nov 12 17:38:49.228636 containerd[1442]: time="2024-11-12T17:38:49.228603631Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:38:49.228734 containerd[1442]: time="2024-11-12T17:38:49.228664911Z" level=info msg="RemovePodSandbox \"800acc185ad3e5c135763bd1bb2cf695fcb3323c4056826b4427aa9df71a5f7f\" returns successfully" Nov 12 17:38:49.229316 containerd[1442]: time="2024-11-12T17:38:49.229288511Z" level=info msg="StopPodSandbox for \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\"" Nov 12 17:38:49.301679 containerd[1442]: 2024-11-12 17:38:49.267 [WARNING][5479] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0", GenerateName:"calico-apiserver-7f4f97b7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"19027f50-17e2-49b5-8e0e-161e0d0f74db", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f97b7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70", Pod:"calico-apiserver-7f4f97b7c-bg95b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75b185f7688", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:49.301679 containerd[1442]: 2024-11-12 17:38:49.267 [INFO][5479] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:49.301679 containerd[1442]: 2024-11-12 17:38:49.267 [INFO][5479] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" iface="eth0" netns="" Nov 12 17:38:49.301679 containerd[1442]: 2024-11-12 17:38:49.267 [INFO][5479] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:49.301679 containerd[1442]: 2024-11-12 17:38:49.267 [INFO][5479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:49.301679 containerd[1442]: 2024-11-12 17:38:49.286 [INFO][5486] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" HandleID="k8s-pod-network.0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:49.301679 containerd[1442]: 2024-11-12 17:38:49.286 [INFO][5486] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:49.301679 containerd[1442]: 2024-11-12 17:38:49.287 [INFO][5486] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:49.301679 containerd[1442]: 2024-11-12 17:38:49.296 [WARNING][5486] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" HandleID="k8s-pod-network.0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:49.301679 containerd[1442]: 2024-11-12 17:38:49.296 [INFO][5486] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" HandleID="k8s-pod-network.0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:49.301679 containerd[1442]: 2024-11-12 17:38:49.298 [INFO][5486] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:49.301679 containerd[1442]: 2024-11-12 17:38:49.299 [INFO][5479] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:49.302317 containerd[1442]: time="2024-11-12T17:38:49.301712516Z" level=info msg="TearDown network for sandbox \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\" successfully" Nov 12 17:38:49.302317 containerd[1442]: time="2024-11-12T17:38:49.301737516Z" level=info msg="StopPodSandbox for \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\" returns successfully" Nov 12 17:38:49.302317 containerd[1442]: time="2024-11-12T17:38:49.302232676Z" level=info msg="RemovePodSandbox for \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\"" Nov 12 17:38:49.302317 containerd[1442]: time="2024-11-12T17:38:49.302263116Z" level=info msg="Forcibly stopping sandbox \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\"" Nov 12 17:38:49.377433 containerd[1442]: 2024-11-12 17:38:49.343 [WARNING][5509] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0", GenerateName:"calico-apiserver-7f4f97b7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"19027f50-17e2-49b5-8e0e-161e0d0f74db", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f97b7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"972bbf23df748c85afa9c793cd9b413537687d90f72cf7f093a740fbaa390c70", Pod:"calico-apiserver-7f4f97b7c-bg95b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75b185f7688", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:38:49.377433 containerd[1442]: 2024-11-12 17:38:49.343 [INFO][5509] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:49.377433 containerd[1442]: 2024-11-12 17:38:49.344 [INFO][5509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" iface="eth0" netns="" Nov 12 17:38:49.377433 containerd[1442]: 2024-11-12 17:38:49.344 [INFO][5509] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:49.377433 containerd[1442]: 2024-11-12 17:38:49.344 [INFO][5509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:49.377433 containerd[1442]: 2024-11-12 17:38:49.363 [INFO][5516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" HandleID="k8s-pod-network.0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:49.377433 containerd[1442]: 2024-11-12 17:38:49.363 [INFO][5516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:38:49.377433 containerd[1442]: 2024-11-12 17:38:49.363 [INFO][5516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:38:49.377433 containerd[1442]: 2024-11-12 17:38:49.371 [WARNING][5516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" HandleID="k8s-pod-network.0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:49.377433 containerd[1442]: 2024-11-12 17:38:49.371 [INFO][5516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" HandleID="k8s-pod-network.0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Workload="localhost-k8s-calico--apiserver--7f4f97b7c--bg95b-eth0" Nov 12 17:38:49.377433 containerd[1442]: 2024-11-12 17:38:49.373 [INFO][5516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:38:49.377433 containerd[1442]: 2024-11-12 17:38:49.375 [INFO][5509] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88" Nov 12 17:38:49.377816 containerd[1442]: time="2024-11-12T17:38:49.377455640Z" level=info msg="TearDown network for sandbox \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\" successfully" Nov 12 17:38:49.392804 containerd[1442]: time="2024-11-12T17:38:49.392737401Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:38:49.392936 containerd[1442]: time="2024-11-12T17:38:49.392824841Z" level=info msg="RemovePodSandbox \"0e1c16103c072e3246662a4b6d63d96998c019e264a5c62ada8b311fbc968f88\" returns successfully" Nov 12 17:38:50.595963 kubelet[2558]: I1112 17:38:50.595881 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:38:53.524879 systemd[1]: Started sshd@15-10.0.0.11:22-10.0.0.1:49346.service - OpenSSH per-connection server daemon (10.0.0.1:49346). Nov 12 17:38:53.571847 sshd[5546]: Accepted publickey for core from 10.0.0.1 port 49346 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:38:53.573403 sshd[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:38:53.579036 systemd-logind[1426]: New session 16 of user core. Nov 12 17:38:53.582187 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 17:38:53.735157 sshd[5546]: pam_unix(sshd:session): session closed for user core Nov 12 17:38:53.744650 systemd[1]: sshd@15-10.0.0.11:22-10.0.0.1:49346.service: Deactivated successfully. Nov 12 17:38:53.747197 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 17:38:53.748601 systemd-logind[1426]: Session 16 logged out. Waiting for processes to exit. Nov 12 17:38:53.761051 systemd[1]: Started sshd@16-10.0.0.11:22-10.0.0.1:49350.service - OpenSSH per-connection server daemon (10.0.0.1:49350). Nov 12 17:38:53.762326 systemd-logind[1426]: Removed session 16. Nov 12 17:38:53.797696 sshd[5560]: Accepted publickey for core from 10.0.0.1 port 49350 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:38:53.798089 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:38:53.802583 systemd-logind[1426]: New session 17 of user core. Nov 12 17:38:53.811181 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 12 17:38:54.122021 sshd[5560]: pam_unix(sshd:session): session closed for user core Nov 12 17:38:54.133880 systemd[1]: sshd@16-10.0.0.11:22-10.0.0.1:49350.service: Deactivated successfully. Nov 12 17:38:54.135764 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 17:38:54.137066 systemd-logind[1426]: Session 17 logged out. Waiting for processes to exit. Nov 12 17:38:54.143296 systemd[1]: Started sshd@17-10.0.0.11:22-10.0.0.1:49362.service - OpenSSH per-connection server daemon (10.0.0.1:49362). Nov 12 17:38:54.147426 systemd-logind[1426]: Removed session 17. Nov 12 17:38:54.180187 sshd[5573]: Accepted publickey for core from 10.0.0.1 port 49362 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:38:54.181653 sshd[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:38:54.188265 systemd-logind[1426]: New session 18 of user core. Nov 12 17:38:54.202147 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 17:38:54.982042 kubelet[2558]: I1112 17:38:54.981999 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:38:55.686585 sshd[5573]: pam_unix(sshd:session): session closed for user core Nov 12 17:38:55.695767 systemd[1]: sshd@17-10.0.0.11:22-10.0.0.1:49362.service: Deactivated successfully. Nov 12 17:38:55.698365 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 17:38:55.701126 systemd-logind[1426]: Session 18 logged out. Waiting for processes to exit. Nov 12 17:38:55.706298 systemd[1]: Started sshd@18-10.0.0.11:22-10.0.0.1:49374.service - OpenSSH per-connection server daemon (10.0.0.1:49374). Nov 12 17:38:55.707147 systemd-logind[1426]: Removed session 18. Nov 12 17:38:55.762108 sshd[5597]: Accepted publickey for core from 10.0.0.1 port 49374 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:38:55.764386 sshd[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:38:55.769444 systemd-logind[1426]: New session 19 of user core. Nov 12 17:38:55.778185 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 17:38:56.043884 sshd[5597]: pam_unix(sshd:session): session closed for user core Nov 12 17:38:56.055033 systemd[1]: sshd@18-10.0.0.11:22-10.0.0.1:49374.service: Deactivated successfully. Nov 12 17:38:56.057882 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 17:38:56.059283 systemd-logind[1426]: Session 19 logged out. Waiting for processes to exit. Nov 12 17:38:56.073293 systemd[1]: Started sshd@19-10.0.0.11:22-10.0.0.1:49382.service - OpenSSH per-connection server daemon (10.0.0.1:49382). Nov 12 17:38:56.075037 systemd-logind[1426]: Removed session 19. Nov 12 17:38:56.107105 sshd[5612]: Accepted publickey for core from 10.0.0.1 port 49382 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:38:56.108379 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:38:56.116591 systemd-logind[1426]: New session 20 of user core. Nov 12 17:38:56.122169 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 17:38:56.257061 sshd[5612]: pam_unix(sshd:session): session closed for user core Nov 12 17:38:56.260975 systemd[1]: sshd@19-10.0.0.11:22-10.0.0.1:49382.service: Deactivated successfully. Nov 12 17:38:56.263212 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 17:38:56.263788 systemd-logind[1426]: Session 20 logged out. Waiting for processes to exit. 
Nov 12 17:39:01.276080 systemd[1]: Started sshd@20-10.0.0.11:22-10.0.0.1:49398.service - OpenSSH per-connection server daemon (10.0.0.1:49398).
Nov 12 17:39:01.318432 sshd[5632]: Accepted publickey for core from 10.0.0.1 port 49398 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:39:01.319956 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:39:01.324730 systemd-logind[1426]: New session 21 of user core.
Nov 12 17:39:01.331167 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 17:39:01.477699 sshd[5632]: pam_unix(sshd:session): session closed for user core
Nov 12 17:39:01.483454 systemd[1]: sshd@20-10.0.0.11:22-10.0.0.1:49398.service: Deactivated successfully.
Nov 12 17:39:01.489430 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 17:39:01.491536 systemd-logind[1426]: Session 21 logged out. Waiting for processes to exit.
Nov 12 17:39:01.492711 systemd-logind[1426]: Removed session 21.
Nov 12 17:39:04.469341 kubelet[2558]: E1112 17:39:04.469240 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:39:06.248529 kubelet[2558]: E1112 17:39:06.248494 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:39:06.497383 systemd[1]: Started sshd@21-10.0.0.11:22-10.0.0.1:42682.service - OpenSSH per-connection server daemon (10.0.0.1:42682).
Nov 12 17:39:06.537341 sshd[5671]: Accepted publickey for core from 10.0.0.1 port 42682 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:39:06.538760 sshd[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:39:06.545231 systemd-logind[1426]: New session 22 of user core.
Nov 12 17:39:06.549145 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 17:39:06.768897 sshd[5671]: pam_unix(sshd:session): session closed for user core
Nov 12 17:39:06.773066 systemd[1]: sshd@21-10.0.0.11:22-10.0.0.1:42682.service: Deactivated successfully.
Nov 12 17:39:06.777188 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 17:39:06.778828 systemd-logind[1426]: Session 22 logged out. Waiting for processes to exit.
Nov 12 17:39:06.779895 systemd-logind[1426]: Removed session 22.
Nov 12 17:39:09.468714 kubelet[2558]: E1112 17:39:09.468653 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:39:11.779402 systemd[1]: Started sshd@22-10.0.0.11:22-10.0.0.1:42698.service - OpenSSH per-connection server daemon (10.0.0.1:42698).
Nov 12 17:39:11.827973 sshd[5691]: Accepted publickey for core from 10.0.0.1 port 42698 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:39:11.829333 sshd[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:39:11.834037 systemd-logind[1426]: New session 23 of user core.
Nov 12 17:39:11.839195 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 17:39:11.987940 sshd[5691]: pam_unix(sshd:session): session closed for user core
Nov 12 17:39:11.991930 systemd[1]: sshd@22-10.0.0.11:22-10.0.0.1:42698.service: Deactivated successfully.
Nov 12 17:39:11.991975 systemd-logind[1426]: Session 23 logged out. Waiting for processes to exit.
Nov 12 17:39:11.993777 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 17:39:11.994648 systemd-logind[1426]: Removed session 23.
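The repeated kubelet "Nameserver limits exceeded" errors at 17:39 reflect the glibc resolver's three-nameserver limit: when the node's /etc/resolv.conf lists more than three servers, kubelet keeps only the first three when building pod resolver config (here 1.1.1.1, 1.0.0.1, and 8.8.8.8) and logs the rest as omitted. A resolv.conf shaped like the sketch below would produce these entries; the fourth server is a hypothetical illustration, since the log does not show which entries were dropped.

    # /etc/resolv.conf -- more than three nameservers trips the limit
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4   # hypothetical fourth entry; omitted by kubelet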