Nov 12 18:05:43.914443 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 12 18:05:43.914463 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Nov 12 16:24:35 -00 2024
Nov 12 18:05:43.914472 kernel: KASLR enabled
Nov 12 18:05:43.914478 kernel: efi: EFI v2.7 by EDK II
Nov 12 18:05:43.914484 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Nov 12 18:05:43.914489 kernel: random: crng init done
Nov 12 18:05:43.914496 kernel: ACPI: Early table checksum verification disabled
Nov 12 18:05:43.914502 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Nov 12 18:05:43.914515 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 12 18:05:43.914523 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 18:05:43.914529 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 18:05:43.914535 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 18:05:43.914541 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 18:05:43.914547 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 18:05:43.914554 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 18:05:43.914562 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 18:05:43.914568 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 18:05:43.914575 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 18:05:43.914581 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 12 18:05:43.914587 kernel: NUMA: Failed to initialise from firmware
Nov 12 18:05:43.914594 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 18:05:43.914600 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Nov 12 18:05:43.914606 kernel: Zone ranges:
Nov 12 18:05:43.914612 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 18:05:43.914618 kernel: DMA32 empty
Nov 12 18:05:43.914626 kernel: Normal empty
Nov 12 18:05:43.914632 kernel: Movable zone start for each node
Nov 12 18:05:43.914638 kernel: Early memory node ranges
Nov 12 18:05:43.914645 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Nov 12 18:05:43.914651 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Nov 12 18:05:43.914663 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Nov 12 18:05:43.914669 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Nov 12 18:05:43.914676 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Nov 12 18:05:43.914682 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Nov 12 18:05:43.914688 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 12 18:05:43.914695 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 18:05:43.914701 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 12 18:05:43.914709 kernel: psci: probing for conduit method from ACPI.
Nov 12 18:05:43.914715 kernel: psci: PSCIv1.1 detected in firmware.
Nov 12 18:05:43.914721 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 12 18:05:43.914730 kernel: psci: Trusted OS migration not required
Nov 12 18:05:43.914737 kernel: psci: SMC Calling Convention v1.1
Nov 12 18:05:43.914744 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 12 18:05:43.914751 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Nov 12 18:05:43.914758 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Nov 12 18:05:43.914765 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 12 18:05:43.914772 kernel: Detected PIPT I-cache on CPU0
Nov 12 18:05:43.914779 kernel: CPU features: detected: GIC system register CPU interface
Nov 12 18:05:43.914803 kernel: CPU features: detected: Hardware dirty bit management
Nov 12 18:05:43.914810 kernel: CPU features: detected: Spectre-v4
Nov 12 18:05:43.914817 kernel: CPU features: detected: Spectre-BHB
Nov 12 18:05:43.914824 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 12 18:05:43.914831 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 12 18:05:43.914839 kernel: CPU features: detected: ARM erratum 1418040
Nov 12 18:05:43.914846 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 12 18:05:43.914853 kernel: alternatives: applying boot alternatives
Nov 12 18:05:43.914861 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e
Nov 12 18:05:43.914868 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 18:05:43.914875 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 18:05:43.914882 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 18:05:43.914888 kernel: Fallback order for Node 0: 0
Nov 12 18:05:43.914895 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Nov 12 18:05:43.914902 kernel: Policy zone: DMA
Nov 12 18:05:43.914908 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 18:05:43.914916 kernel: software IO TLB: area num 4.
Nov 12 18:05:43.914923 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Nov 12 18:05:43.914930 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Nov 12 18:05:43.914937 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 18:05:43.914944 kernel: trace event string verifier disabled
Nov 12 18:05:43.914950 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 18:05:43.914958 kernel: rcu: RCU event tracing is enabled.
Nov 12 18:05:43.914965 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 18:05:43.914972 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 18:05:43.914978 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 18:05:43.914985 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 18:05:43.914992 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 18:05:43.915000 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 12 18:05:43.915006 kernel: GICv3: 256 SPIs implemented
Nov 12 18:05:43.915013 kernel: GICv3: 0 Extended SPIs implemented
Nov 12 18:05:43.915020 kernel: Root IRQ handler: gic_handle_irq
Nov 12 18:05:43.915026 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 12 18:05:43.915033 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 12 18:05:43.915040 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 12 18:05:43.915047 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Nov 12 18:05:43.915054 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Nov 12 18:05:43.915060 kernel: GICv3: using LPI property table @0x00000000400f0000
Nov 12 18:05:43.915067 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Nov 12 18:05:43.915075 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 18:05:43.915082 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 18:05:43.915089 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 12 18:05:43.915096 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 12 18:05:43.915103 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 12 18:05:43.915110 kernel: arm-pv: using stolen time PV
Nov 12 18:05:43.915117 kernel: Console: colour dummy device 80x25
Nov 12 18:05:43.915124 kernel: ACPI: Core revision 20230628
Nov 12 18:05:43.915131 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 12 18:05:43.915138 kernel: pid_max: default: 32768 minimum: 301
Nov 12 18:05:43.915146 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 18:05:43.915153 kernel: landlock: Up and running.
Nov 12 18:05:43.915160 kernel: SELinux: Initializing.
Nov 12 18:05:43.915167 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 18:05:43.915174 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 18:05:43.915181 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 18:05:43.915188 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 18:05:43.915195 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 18:05:43.915202 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 18:05:43.915210 kernel: Platform MSI: ITS@0x8080000 domain created
Nov 12 18:05:43.915216 kernel: PCI/MSI: ITS@0x8080000 domain created
Nov 12 18:05:43.915223 kernel: Remapping and enabling EFI services.
Nov 12 18:05:43.915230 kernel: smp: Bringing up secondary CPUs ...
Nov 12 18:05:43.915237 kernel: Detected PIPT I-cache on CPU1
Nov 12 18:05:43.915244 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 12 18:05:43.915251 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Nov 12 18:05:43.915258 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 18:05:43.915265 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 12 18:05:43.915272 kernel: Detected PIPT I-cache on CPU2
Nov 12 18:05:43.915280 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 12 18:05:43.915287 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Nov 12 18:05:43.915298 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 18:05:43.915307 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 12 18:05:43.915314 kernel: Detected PIPT I-cache on CPU3
Nov 12 18:05:43.915321 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 12 18:05:43.915329 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Nov 12 18:05:43.915336 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 18:05:43.915343 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 12 18:05:43.915352 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 18:05:43.915359 kernel: SMP: Total of 4 processors activated.
Nov 12 18:05:43.915366 kernel: CPU features: detected: 32-bit EL0 Support
Nov 12 18:05:43.915374 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 12 18:05:43.915381 kernel: CPU features: detected: Common not Private translations
Nov 12 18:05:43.915388 kernel: CPU features: detected: CRC32 instructions
Nov 12 18:05:43.915396 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 12 18:05:43.915403 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 12 18:05:43.915411 kernel: CPU features: detected: LSE atomic instructions
Nov 12 18:05:43.915419 kernel: CPU features: detected: Privileged Access Never
Nov 12 18:05:43.915426 kernel: CPU features: detected: RAS Extension Support
Nov 12 18:05:43.915433 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 12 18:05:43.915440 kernel: CPU: All CPU(s) started at EL1
Nov 12 18:05:43.915448 kernel: alternatives: applying system-wide alternatives
Nov 12 18:05:43.915455 kernel: devtmpfs: initialized
Nov 12 18:05:43.915462 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 18:05:43.915470 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 18:05:43.915478 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 18:05:43.915485 kernel: SMBIOS 3.0.0 present.
Nov 12 18:05:43.915493 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Nov 12 18:05:43.915500 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 18:05:43.915507 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 12 18:05:43.915515 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 12 18:05:43.915522 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 12 18:05:43.915529 kernel: audit: initializing netlink subsys (disabled)
Nov 12 18:05:43.915537 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Nov 12 18:05:43.915545 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 18:05:43.915552 kernel: cpuidle: using governor menu
Nov 12 18:05:43.915560 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 12 18:05:43.915567 kernel: ASID allocator initialised with 32768 entries
Nov 12 18:05:43.915574 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 18:05:43.915582 kernel: Serial: AMBA PL011 UART driver
Nov 12 18:05:43.915589 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 12 18:05:43.915596 kernel: Modules: 0 pages in range for non-PLT usage
Nov 12 18:05:43.915603 kernel: Modules: 509040 pages in range for PLT usage
Nov 12 18:05:43.915612 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 18:05:43.915619 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 18:05:43.915627 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 12 18:05:43.915634 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 12 18:05:43.915641 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 18:05:43.915649 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 18:05:43.915660 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 12 18:05:43.915668 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 12 18:05:43.915675 kernel: ACPI: Added _OSI(Module Device)
Nov 12 18:05:43.915684 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 18:05:43.915691 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 18:05:43.915698 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 18:05:43.915705 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 18:05:43.915712 kernel: ACPI: Interpreter enabled
Nov 12 18:05:43.915720 kernel: ACPI: Using GIC for interrupt routing
Nov 12 18:05:43.915727 kernel: ACPI: MCFG table detected, 1 entries
Nov 12 18:05:43.915735 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 12 18:05:43.915742 kernel: printk: console [ttyAMA0] enabled
Nov 12 18:05:43.915750 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 18:05:43.915879 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 18:05:43.915954 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 12 18:05:43.916029 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 12 18:05:43.916095 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 12 18:05:43.916158 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 12 18:05:43.916187 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 12 18:05:43.916199 kernel: PCI host bridge to bus 0000:00
Nov 12 18:05:43.916275 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 12 18:05:43.916351 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 12 18:05:43.916411 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 12 18:05:43.916468 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 18:05:43.916551 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Nov 12 18:05:43.916631 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 18:05:43.916708 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Nov 12 18:05:43.916774 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Nov 12 18:05:43.916857 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 12 18:05:43.916925 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 12 18:05:43.916991 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Nov 12 18:05:43.917060 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Nov 12 18:05:43.917121 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 12 18:05:43.917184 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 12 18:05:43.917244 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 12 18:05:43.917254 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 12 18:05:43.917262 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 12 18:05:43.917269 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 12 18:05:43.917276 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 12 18:05:43.917284 kernel: iommu: Default domain type: Translated
Nov 12 18:05:43.917291 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 12 18:05:43.917300 kernel: efivars: Registered efivars operations
Nov 12 18:05:43.917307 kernel: vgaarb: loaded
Nov 12 18:05:43.917315 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 12 18:05:43.917322 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 18:05:43.917329 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 18:05:43.917336 kernel: pnp: PnP ACPI init
Nov 12 18:05:43.917405 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 12 18:05:43.917415 kernel: pnp: PnP ACPI: found 1 devices
Nov 12 18:05:43.917424 kernel: NET: Registered PF_INET protocol family
Nov 12 18:05:43.917432 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 18:05:43.917440 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 18:05:43.917447 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 18:05:43.917454 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 18:05:43.917462 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 18:05:43.917469 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 18:05:43.917477 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 18:05:43.917484 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 18:05:43.917513 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 18:05:43.917520 kernel: PCI: CLS 0 bytes, default 64
Nov 12 18:05:43.917528 kernel: kvm [1]: HYP mode not available
Nov 12 18:05:43.917535 kernel: Initialise system trusted keyrings
Nov 12 18:05:43.917542 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 18:05:43.917550 kernel: Key type asymmetric registered
Nov 12 18:05:43.917557 kernel: Asymmetric key parser 'x509' registered
Nov 12 18:05:43.917564 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 12 18:05:43.917571 kernel: io scheduler mq-deadline registered
Nov 12 18:05:43.917581 kernel: io scheduler kyber registered
Nov 12 18:05:43.917588 kernel: io scheduler bfq registered
Nov 12 18:05:43.917595 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 12 18:05:43.917602 kernel: ACPI: button: Power Button [PWRB]
Nov 12 18:05:43.917610 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 12 18:05:43.917683 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 12 18:05:43.917693 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 18:05:43.917701 kernel: thunder_xcv, ver 1.0
Nov 12 18:05:43.917708 kernel: thunder_bgx, ver 1.0
Nov 12 18:05:43.917717 kernel: nicpf, ver 1.0
Nov 12 18:05:43.917724 kernel: nicvf, ver 1.0
Nov 12 18:05:43.917808 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 12 18:05:43.917890 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-11-12T18:05:43 UTC (1731434743)
Nov 12 18:05:43.917900 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 12 18:05:43.917908 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Nov 12 18:05:43.917915 kernel: watchdog: Delayed init of the lockup detector failed: -19
Nov 12 18:05:43.917923 kernel: watchdog: Hard watchdog permanently disabled
Nov 12 18:05:43.917932 kernel: NET: Registered PF_INET6 protocol family
Nov 12 18:05:43.917940 kernel: Segment Routing with IPv6
Nov 12 18:05:43.917947 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 18:05:43.917954 kernel: NET: Registered PF_PACKET protocol family
Nov 12 18:05:43.917962 kernel: Key type dns_resolver registered
Nov 12 18:05:43.917969 kernel: registered taskstats version 1
Nov 12 18:05:43.917976 kernel: Loading compiled-in X.509 certificates
Nov 12 18:05:43.917983 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 277bea35d8d47c9841f307ab609d4271c3622dcb'
Nov 12 18:05:43.917991 kernel: Key type .fscrypt registered
Nov 12 18:05:43.917999 kernel: Key type fscrypt-provisioning registered
Nov 12 18:05:43.918006 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 18:05:43.918013 kernel: ima: Allocated hash algorithm: sha1
Nov 12 18:05:43.918021 kernel: ima: No architecture policies found
Nov 12 18:05:43.918028 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 12 18:05:43.918036 kernel: clk: Disabling unused clocks
Nov 12 18:05:43.918043 kernel: Freeing unused kernel memory: 39360K
Nov 12 18:05:43.918050 kernel: Run /init as init process
Nov 12 18:05:43.918057 kernel: with arguments:
Nov 12 18:05:43.918066 kernel: /init
Nov 12 18:05:43.918073 kernel: with environment:
Nov 12 18:05:43.918080 kernel: HOME=/
Nov 12 18:05:43.918087 kernel: TERM=linux
Nov 12 18:05:43.918094 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 18:05:43.918103 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 18:05:43.918112 systemd[1]: Detected virtualization kvm.
Nov 12 18:05:43.918120 systemd[1]: Detected architecture arm64.
Nov 12 18:05:43.918129 systemd[1]: Running in initrd.
Nov 12 18:05:43.918136 systemd[1]: No hostname configured, using default hostname.
Nov 12 18:05:43.918144 systemd[1]: Hostname set to .
Nov 12 18:05:43.918152 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 18:05:43.918159 systemd[1]: Queued start job for default target initrd.target.
Nov 12 18:05:43.918167 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 18:05:43.918175 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 18:05:43.918183 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 18:05:43.918192 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 18:05:43.918200 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 18:05:43.918208 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 18:05:43.918218 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 18:05:43.918226 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 18:05:43.918234 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 18:05:43.918241 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 18:05:43.918250 systemd[1]: Reached target paths.target - Path Units.
Nov 12 18:05:43.918258 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 18:05:43.918266 systemd[1]: Reached target swap.target - Swaps.
Nov 12 18:05:43.918274 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 18:05:43.918282 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 18:05:43.918289 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 18:05:43.918297 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 18:05:43.918305 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 18:05:43.918314 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 18:05:43.918322 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 18:05:43.918330 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 18:05:43.918338 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 18:05:43.918345 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 18:05:43.918353 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 18:05:43.918361 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 18:05:43.918369 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 18:05:43.918377 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 18:05:43.918398 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 18:05:43.918407 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 18:05:43.918414 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 18:05:43.918422 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 18:05:43.918430 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 18:05:43.918438 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 18:05:43.918448 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 18:05:43.918471 systemd-journald[238]: Collecting audit messages is disabled.
Nov 12 18:05:43.918491 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 18:05:43.918499 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 18:05:43.918507 systemd-journald[238]: Journal started
Nov 12 18:05:43.918526 systemd-journald[238]: Runtime Journal (/run/log/journal/21a3ddb06c6d4185bb58cc3d2d273037) is 5.9M, max 47.3M, 41.4M free.
Nov 12 18:05:43.905987 systemd-modules-load[239]: Inserted module 'overlay'
Nov 12 18:05:43.920801 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 18:05:43.920826 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 18:05:43.923899 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 18:05:43.923925 kernel: Bridge firewalling registered
Nov 12 18:05:43.924300 systemd-modules-load[239]: Inserted module 'br_netfilter'
Nov 12 18:05:43.925137 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 18:05:43.928162 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 18:05:43.930091 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 18:05:43.932303 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 18:05:43.934172 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 18:05:43.936923 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 18:05:43.942778 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 18:05:43.944193 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 18:05:43.946382 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 18:05:43.951856 dracut-cmdline[272]: dracut-dracut-053
Nov 12 18:05:43.957428 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e
Nov 12 18:05:43.983687 systemd-resolved[280]: Positive Trust Anchors:
Nov 12 18:05:43.983703 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 18:05:43.983736 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 18:05:43.988463 systemd-resolved[280]: Defaulting to hostname 'linux'.
Nov 12 18:05:43.993155 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 18:05:43.993988 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 18:05:44.022821 kernel: SCSI subsystem initialized
Nov 12 18:05:44.027804 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 18:05:44.034809 kernel: iscsi: registered transport (tcp)
Nov 12 18:05:44.047820 kernel: iscsi: registered transport (qla4xxx)
Nov 12 18:05:44.047835 kernel: QLogic iSCSI HBA Driver
Nov 12 18:05:44.087688 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 18:05:44.092911 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 18:05:44.109041 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 18:05:44.109078 kernel: device-mapper: uevent: version 1.0.3
Nov 12 18:05:44.109099 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 18:05:44.155814 kernel: raid6: neonx8 gen() 15741 MB/s
Nov 12 18:05:44.172815 kernel: raid6: neonx4 gen() 15615 MB/s
Nov 12 18:05:44.189803 kernel: raid6: neonx2 gen() 13164 MB/s
Nov 12 18:05:44.206808 kernel: raid6: neonx1 gen() 10409 MB/s
Nov 12 18:05:44.223805 kernel: raid6: int64x8 gen() 6936 MB/s
Nov 12 18:05:44.240811 kernel: raid6: int64x4 gen() 7331 MB/s
Nov 12 18:05:44.257813 kernel: raid6: int64x2 gen() 6099 MB/s
Nov 12 18:05:44.274810 kernel: raid6: int64x1 gen() 5040 MB/s
Nov 12 18:05:44.274836 kernel: raid6: using algorithm neonx8 gen() 15741 MB/s
Nov 12 18:05:44.291814 kernel: raid6: .... xor() 11871 MB/s, rmw enabled
Nov 12 18:05:44.291840 kernel: raid6: using neon recovery algorithm
Nov 12 18:05:44.296800 kernel: xor: measuring software checksum speed
Nov 12 18:05:44.296818 kernel: 8regs : 19783 MB/sec
Nov 12 18:05:44.298178 kernel: 32regs : 18448 MB/sec
Nov 12 18:05:44.298191 kernel: arm64_neon : 26972 MB/sec
Nov 12 18:05:44.298204 kernel: xor: using function: arm64_neon (26972 MB/sec)
Nov 12 18:05:44.347970 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 18:05:44.358299 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 18:05:44.371927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 18:05:44.382491 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Nov 12 18:05:44.385618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 18:05:44.388287 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 18:05:44.402108 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Nov 12 18:05:44.426541 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 18:05:44.434028 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 18:05:44.473536 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 18:05:44.480263 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 18:05:44.492397 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 18:05:44.494497 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 18:05:44.496901 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 18:05:44.498524 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 18:05:44.505958 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 18:05:44.517886 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 18:05:44.533382 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 12 18:05:44.538474 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 18:05:44.538585 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 18:05:44.538604 kernel: GPT:9289727 != 19775487
Nov 12 18:05:44.538614 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 18:05:44.538623 kernel: GPT:9289727 != 19775487
Nov 12 18:05:44.538639 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 18:05:44.538648 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 18:05:44.540884 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 18:05:44.541008 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 18:05:44.543514 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 18:05:44.544898 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 18:05:44.545028 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 18:05:44.546689 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 18:05:44.554814 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (523)
Nov 12 18:05:44.558049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 18:05:44.560198 kernel: BTRFS: device fsid 93a9d474-e751-47b7-a65f-e39ca9abd47a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (514)
Nov 12 18:05:44.570539 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 18:05:44.572476 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 18:05:44.584078 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 18:05:44.588933 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 18:05:44.592983 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 18:05:44.593927 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 18:05:44.607930 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 18:05:44.610197 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 18:05:44.613214 disk-uuid[549]: Primary Header is updated.
Nov 12 18:05:44.613214 disk-uuid[549]: Secondary Entries is updated.
Nov 12 18:05:44.613214 disk-uuid[549]: Secondary Header is updated.
Nov 12 18:05:44.616812 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 18:05:44.631566 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 18:05:45.628251 disk-uuid[551]: The operation has completed successfully.
Nov 12 18:05:45.629170 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 18:05:45.646819 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 18:05:45.646913 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 18:05:45.678930 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 18:05:45.681748 sh[573]: Success
Nov 12 18:05:45.694816 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 12 18:05:45.721327 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 18:05:45.740106 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 18:05:45.742813 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 18:05:45.749981 kernel: BTRFS info (device dm-0): first mount of filesystem 93a9d474-e751-47b7-a65f-e39ca9abd47a
Nov 12 18:05:45.750023 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 12 18:05:45.750033 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 18:05:45.751293 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 18:05:45.751308 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 18:05:45.754539 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 18:05:45.755664 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 18:05:45.765008 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 18:05:45.766265 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 18:05:45.772197 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 18:05:45.772231 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 18:05:45.772241 kernel: BTRFS info (device vda6): using free space tree
Nov 12 18:05:45.774808 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 18:05:45.781383 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 18:05:45.782823 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 18:05:45.787201 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 18:05:45.791962 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 18:05:45.857953 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 18:05:45.867934 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 18:05:45.886151 ignition[660]: Ignition 2.19.0
Nov 12 18:05:45.886160 ignition[660]: Stage: fetch-offline
Nov 12 18:05:45.886198 ignition[660]: no configs at "/usr/lib/ignition/base.d"
Nov 12 18:05:45.886208 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 18:05:45.886403 ignition[660]: parsed url from cmdline: ""
Nov 12 18:05:45.886406 ignition[660]: no config URL provided
Nov 12 18:05:45.886411 ignition[660]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 18:05:45.886418 ignition[660]: no config at "/usr/lib/ignition/user.ign"
Nov 12 18:05:45.886440 ignition[660]: op(1): [started] loading QEMU firmware config module
Nov 12 18:05:45.892840 systemd-networkd[763]: lo: Link UP
Nov 12 18:05:45.886444 ignition[660]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 18:05:45.892843 systemd-networkd[763]: lo: Gained carrier
Nov 12 18:05:45.892250 ignition[660]: op(1): [finished] loading QEMU firmware config module
Nov 12 18:05:45.893485 systemd-networkd[763]: Enumeration completed
Nov 12 18:05:45.893596 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 18:05:45.894002 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 18:05:45.894005 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 18:05:45.894923 systemd-networkd[763]: eth0: Link UP
Nov 12 18:05:45.894926 systemd-networkd[763]: eth0: Gained carrier
Nov 12 18:05:45.894933 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 18:05:45.895676 systemd[1]: Reached target network.target - Network.
Nov 12 18:05:45.912830 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 18:05:45.942440 ignition[660]: parsing config with SHA512: cc5f42ab85193e1d9455fc628a2cdee0746e9a7d3e11014c352dfc83830b5c3070ea2386c95bd67d09858be0752e046d666ba16a199699cfd44a0b53a0487d32
Nov 12 18:05:45.946473 unknown[660]: fetched base config from "system"
Nov 12 18:05:45.946482 unknown[660]: fetched user config from "qemu"
Nov 12 18:05:45.947321 ignition[660]: fetch-offline: fetch-offline passed
Nov 12 18:05:45.948533 ignition[660]: Ignition finished successfully
Nov 12 18:05:45.950049 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 18:05:45.951063 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 18:05:45.956988 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 18:05:45.967036 ignition[771]: Ignition 2.19.0
Nov 12 18:05:45.967045 ignition[771]: Stage: kargs
Nov 12 18:05:45.967198 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Nov 12 18:05:45.967207 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 18:05:45.968088 ignition[771]: kargs: kargs passed
Nov 12 18:05:45.968135 ignition[771]: Ignition finished successfully
Nov 12 18:05:45.971731 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 18:05:45.982998 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 18:05:45.992149 ignition[779]: Ignition 2.19.0
Nov 12 18:05:45.992160 ignition[779]: Stage: disks
Nov 12 18:05:45.992322 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Nov 12 18:05:45.992334 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 18:05:45.993255 ignition[779]: disks: disks passed
Nov 12 18:05:45.993297 ignition[779]: Ignition finished successfully
Nov 12 18:05:45.994929 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 18:05:45.995829 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 18:05:45.996821 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 18:05:45.998283 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 18:05:45.999553 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 18:05:46.000951 systemd[1]: Reached target basic.target - Basic System.
Nov 12 18:05:46.011942 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 18:05:46.020897 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 18:05:46.024708 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 18:05:46.037912 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 18:05:46.079809 kernel: EXT4-fs (vda9): mounted filesystem b3af0fd7-3c7c-4cdc-9b88-dae3d10ea922 r/w with ordered data mode. Quota mode: none.
Nov 12 18:05:46.079996 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 18:05:46.080979 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 18:05:46.094866 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 18:05:46.096368 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 18:05:46.098368 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 18:05:46.098416 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 18:05:46.098438 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 18:05:46.104115 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
Nov 12 18:05:46.102272 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 18:05:46.104149 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 18:05:46.108847 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 18:05:46.108865 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 18:05:46.108875 kernel: BTRFS info (device vda6): using free space tree
Nov 12 18:05:46.108894 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 18:05:46.110455 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 18:05:46.146411 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 18:05:46.149622 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Nov 12 18:05:46.152810 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 18:05:46.155589 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 18:05:46.219433 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 18:05:46.235921 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 18:05:46.237251 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 18:05:46.241819 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 18:05:46.258001 ignition[911]: INFO : Ignition 2.19.0
Nov 12 18:05:46.258001 ignition[911]: INFO : Stage: mount
Nov 12 18:05:46.259926 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 18:05:46.259926 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 18:05:46.259926 ignition[911]: INFO : mount: mount passed
Nov 12 18:05:46.259926 ignition[911]: INFO : Ignition finished successfully
Nov 12 18:05:46.259112 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 18:05:46.261820 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 18:05:46.267951 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 18:05:46.749551 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 18:05:46.761948 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 18:05:46.767394 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
Nov 12 18:05:46.767427 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 18:05:46.767438 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 18:05:46.768078 kernel: BTRFS info (device vda6): using free space tree
Nov 12 18:05:46.770816 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 18:05:46.771420 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 18:05:46.785968 ignition[943]: INFO : Ignition 2.19.0
Nov 12 18:05:46.785968 ignition[943]: INFO : Stage: files
Nov 12 18:05:46.787457 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 18:05:46.787457 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 18:05:46.787457 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 18:05:46.790628 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 18:05:46.790628 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 18:05:46.793092 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 18:05:46.793092 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 18:05:46.793092 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 18:05:46.792564 unknown[943]: wrote ssh authorized keys file for user: core
Nov 12 18:05:46.797754 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 18:05:46.797754 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 18:05:46.797754 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 18:05:46.797754 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Nov 12 18:05:47.083442 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 12 18:05:47.617512 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 18:05:47.618885 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 18:05:47.618885 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 18:05:47.618885 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 18:05:47.618885 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 18:05:47.618885 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 18:05:47.618885 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 18:05:47.618885 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 18:05:47.618885 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 18:05:47.618885 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 18:05:47.629909 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 18:05:47.629909 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 18:05:47.629909 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 18:05:47.629909 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 18:05:47.629909 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Nov 12 18:05:47.718927 systemd-networkd[763]: eth0: Gained IPv6LL
Nov 12 18:05:47.991213 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 12 18:05:48.604619 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 18:05:48.606243 ignition[943]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 12 18:05:48.606243 ignition[943]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 18:05:48.606243 ignition[943]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 18:05:48.606243 ignition[943]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 12 18:05:48.606243 ignition[943]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 12 18:05:48.606243 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 18:05:48.606243 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 18:05:48.606243 ignition[943]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Nov 12 18:05:48.606243 ignition[943]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Nov 12 18:05:48.606243 ignition[943]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 18:05:48.606243 ignition[943]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 18:05:48.606243 ignition[943]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Nov 12 18:05:48.606243 ignition[943]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Nov 12 18:05:48.626991 ignition[943]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 18:05:48.630375 ignition[943]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 18:05:48.632359 ignition[943]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 12 18:05:48.632359 ignition[943]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 18:05:48.632359 ignition[943]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 18:05:48.632359 ignition[943]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 18:05:48.632359 ignition[943]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 18:05:48.632359 ignition[943]: INFO : files: files passed
Nov 12 18:05:48.632359 ignition[943]: INFO : Ignition finished successfully
Nov 12 18:05:48.634446 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 18:05:48.641025 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 18:05:48.643134 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 18:05:48.646054 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 18:05:48.646923 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 18:05:48.650105 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 12 18:05:48.653348 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 18:05:48.653348 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 18:05:48.655806 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 18:05:48.657880 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 18:05:48.658920 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 18:05:48.669967 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 18:05:48.688118 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 18:05:48.688233 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 18:05:48.689805 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 18:05:48.691212 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 18:05:48.691937 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 18:05:48.692624 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 18:05:48.706865 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 18:05:48.709079 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 18:05:48.719601 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 18:05:48.720546 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 18:05:48.721985 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 18:05:48.723228 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 18:05:48.723343 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 18:05:48.725113 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 18:05:48.726627 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 18:05:48.727941 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 18:05:48.729166 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 18:05:48.730562 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 18:05:48.732051 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 18:05:48.733354 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 18:05:48.734700 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 18:05:48.736215 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 18:05:48.737440 systemd[1]: Stopped target swap.target - Swaps. Nov 12 18:05:48.738478 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 18:05:48.738590 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 18:05:48.740231 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 18:05:48.741591 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 18:05:48.743041 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 18:05:48.743890 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 18:05:48.745194 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 18:05:48.745310 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 18:05:48.747387 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 18:05:48.747574 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 18:05:48.748887 systemd[1]: Stopped target paths.target - Path Units. Nov 12 18:05:48.750001 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 18:05:48.754841 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 18:05:48.755764 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 18:05:48.757410 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 18:05:48.758478 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 18:05:48.758567 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 18:05:48.759629 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 18:05:48.759719 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 18:05:48.760942 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 18:05:48.761052 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 18:05:48.762274 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 18:05:48.762380 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 18:05:48.775945 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 18:05:48.776579 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 18:05:48.776714 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 18:05:48.779262 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 18:05:48.780019 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 18:05:48.780139 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 18:05:48.781434 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 18:05:48.781533 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 12 18:05:48.786850 ignition[998]: INFO : Ignition 2.19.0 Nov 12 18:05:48.786850 ignition[998]: INFO : Stage: umount Nov 12 18:05:48.788847 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 18:05:48.788847 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 18:05:48.788847 ignition[998]: INFO : umount: umount passed Nov 12 18:05:48.788847 ignition[998]: INFO : Ignition finished successfully Nov 12 18:05:48.788946 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 18:05:48.789027 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 18:05:48.790634 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 18:05:48.790729 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 18:05:48.792942 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 18:05:48.793527 systemd[1]: Stopped target network.target - Network. Nov 12 18:05:48.794623 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 18:05:48.794704 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 18:05:48.796311 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 18:05:48.796359 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 18:05:48.797527 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 18:05:48.797568 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 18:05:48.799404 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 18:05:48.799486 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 18:05:48.801512 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 18:05:48.802623 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 18:05:48.804316 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 18:05:48.804402 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 18:05:48.806089 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 18:05:48.806186 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 18:05:48.809138 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 18:05:48.809270 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 18:05:48.810390 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 18:05:48.810439 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 18:05:48.811134 systemd-networkd[763]: eth0: DHCPv6 lease lost Nov 12 18:05:48.813649 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 18:05:48.813761 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 18:05:48.815013 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 18:05:48.815045 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 18:05:48.820913 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 18:05:48.822073 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 18:05:48.822133 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 18:05:48.823577 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 18:05:48.823621 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
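The umount stage above is Ignition's final stage for this boot; the ignition-* services being stopped around it cover the earlier fetch-offline, kargs, disks, setup, mount, and files stages. The files stage also recorded a run summary that survives the switch out of the initramfs; a quick way to inspect it after boot (path taken from the log, minus the /sysroot prefix):

    cat /etc/.ignition-result.json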
Nov 12 18:05:48.824799 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 18:05:48.824840 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 18:05:48.826357 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 18:05:48.834860 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 18:05:48.834965 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 18:05:48.846439 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 18:05:48.846581 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 18:05:48.847766 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 18:05:48.847844 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 18:05:48.850162 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 18:05:48.850201 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 18:05:48.851471 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 18:05:48.851517 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 18:05:48.853595 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 18:05:48.853652 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 18:05:48.855711 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 18:05:48.855756 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 18:05:48.866940 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 18:05:48.867685 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 18:05:48.867741 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 18:05:48.869528 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 18:05:48.869571 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 18:05:48.873985 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 18:05:48.874759 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 18:05:48.876541 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 18:05:48.878576 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 18:05:48.886926 systemd[1]: Switching root. Nov 12 18:05:48.914504 systemd-journald[238]: Journal stopped Nov 12 18:05:49.590456 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Nov 12 18:05:49.590506 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 18:05:49.590519 kernel: SELinux: policy capability open_perms=1 Nov 12 18:05:49.590528 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 18:05:49.590538 kernel: SELinux: policy capability always_check_network=0 Nov 12 18:05:49.590550 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 18:05:49.590560 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 18:05:49.590569 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 18:05:49.590578 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 18:05:49.590588 kernel: audit: type=1403 audit(1731434749.082:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 18:05:49.590598 systemd[1]: Successfully loaded SELinux policy in 33.958ms. 
Nov 12 18:05:49.590617 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.056ms. Nov 12 18:05:49.590629 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 18:05:49.590652 systemd[1]: Detected virtualization kvm. Nov 12 18:05:49.590665 systemd[1]: Detected architecture arm64. Nov 12 18:05:49.590676 systemd[1]: Detected first boot. Nov 12 18:05:49.590686 systemd[1]: Initializing machine ID from VM UUID. Nov 12 18:05:49.590696 zram_generator::config[1065]: No configuration found. Nov 12 18:05:49.590707 systemd[1]: Populated /etc with preset unit settings. Nov 12 18:05:49.590717 systemd[1]: Queued start job for default target multi-user.target. Nov 12 18:05:49.590728 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 18:05:49.590739 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 18:05:49.590751 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 18:05:49.590761 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 18:05:49.590772 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 18:05:49.590782 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 18:05:49.591286 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 18:05:49.591299 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 18:05:49.591309 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 18:05:49.591320 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 18:05:49.591332 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 18:05:49.591347 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 18:05:49.591359 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 18:05:49.591371 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 18:05:49.591381 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 18:05:49.591391 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 12 18:05:49.591402 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 18:05:49.591412 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 18:05:49.591422 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 18:05:49.591433 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 18:05:49.591445 systemd[1]: Reached target slices.target - Slice Units. Nov 12 18:05:49.591456 systemd[1]: Reached target swap.target - Swaps. Nov 12 18:05:49.591466 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 18:05:49.591476 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Nov 12 18:05:49.591487 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 18:05:49.591497 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 18:05:49.591508 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 18:05:49.591518 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 18:05:49.591536 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 18:05:49.591547 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 18:05:49.591557 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 18:05:49.591567 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 18:05:49.591577 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 18:05:49.591589 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 18:05:49.591599 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 18:05:49.591609 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 18:05:49.591620 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 18:05:49.591643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 18:05:49.591658 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 18:05:49.591669 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 18:05:49.591679 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 18:05:49.591689 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 18:05:49.591699 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 18:05:49.591710 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 18:05:49.591720 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 18:05:49.591733 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 18:05:49.591744 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 12 18:05:49.591761 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 12 18:05:49.591771 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 18:05:49.591781 kernel: fuse: init (API version 7.39) Nov 12 18:05:49.591815 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 18:05:49.591827 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 18:05:49.591837 kernel: ACPI: bus type drm_connector registered Nov 12 18:05:49.591846 kernel: loop: module loaded Nov 12 18:05:49.591860 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 18:05:49.591870 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 18:05:49.591881 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 18:05:49.591914 systemd-journald[1136]: Collecting audit messages is disabled. 
Nov 12 18:05:49.591937 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 18:05:49.591947 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 18:05:49.591958 systemd-journald[1136]: Journal started Nov 12 18:05:49.591981 systemd-journald[1136]: Runtime Journal (/run/log/journal/21a3ddb06c6d4185bb58cc3d2d273037) is 5.9M, max 47.3M, 41.4M free. Nov 12 18:05:49.593533 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 18:05:49.595410 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 18:05:49.596348 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 18:05:49.597229 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 18:05:49.598182 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 18:05:49.599288 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 18:05:49.599450 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 18:05:49.600543 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 18:05:49.600703 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 18:05:49.601890 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 18:05:49.602025 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 18:05:49.603104 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 18:05:49.603255 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 18:05:49.604564 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 18:05:49.604724 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 18:05:49.605883 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 18:05:49.606945 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 18:05:49.607185 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 18:05:49.608529 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 18:05:49.609837 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 18:05:49.610972 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 18:05:49.621313 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 18:05:49.635882 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 18:05:49.637771 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 18:05:49.638563 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 18:05:49.641583 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 18:05:49.645082 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 18:05:49.645910 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 18:05:49.646829 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 18:05:49.647648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Nov 12 18:05:49.650973 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 18:05:49.652325 systemd-journald[1136]: Time spent on flushing to /var/log/journal/21a3ddb06c6d4185bb58cc3d2d273037 is 17.599ms for 843 entries. Nov 12 18:05:49.652325 systemd-journald[1136]: System Journal (/var/log/journal/21a3ddb06c6d4185bb58cc3d2d273037) is 8.0M, max 195.6M, 187.6M free. Nov 12 18:05:49.674553 systemd-journald[1136]: Received client request to flush runtime journal. Nov 12 18:05:49.653533 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 18:05:49.656954 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 18:05:49.658014 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 18:05:49.659109 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 18:05:49.660223 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 18:05:49.663342 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 18:05:49.672049 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 18:05:49.678347 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 18:05:49.679849 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 18:05:49.681762 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Nov 12 18:05:49.681781 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Nov 12 18:05:49.683520 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 12 18:05:49.685668 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 18:05:49.695090 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 18:05:49.713066 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 18:05:49.721904 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 18:05:49.732984 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Nov 12 18:05:49.733005 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Nov 12 18:05:49.736455 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 18:05:50.156979 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 18:05:50.169098 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 18:05:50.190098 systemd-udevd[1224]: Using default interface naming scheme 'v255'. Nov 12 18:05:50.202247 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 18:05:50.212962 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 18:05:50.231150 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 18:05:50.233130 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. 
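The journal flush above is the runtime-to-persistent handoff: systemd-journal-flush.service asks journald to move the runtime journal in /run/log/journal (5.9M here) into the system journal in /var/log/journal (8.0M). The same handoff can be requested manually:

    journalctl --flush        # ask journald to migrate /run/log/journal to /var/log/journal
    journalctl --disk-usage   # show combined journal disk usage afterwards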
Nov 12 18:05:50.234829 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1241) Nov 12 18:05:50.236938 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1241) Nov 12 18:05:50.242890 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1227) Nov 12 18:05:50.265210 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 18:05:50.292368 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 18:05:50.329906 systemd-networkd[1231]: lo: Link UP Nov 12 18:05:50.329917 systemd-networkd[1231]: lo: Gained carrier Nov 12 18:05:50.330574 systemd-networkd[1231]: Enumeration completed Nov 12 18:05:50.331050 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 18:05:50.331053 systemd-networkd[1231]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 18:05:50.331620 systemd-networkd[1231]: eth0: Link UP Nov 12 18:05:50.331624 systemd-networkd[1231]: eth0: Gained carrier Nov 12 18:05:50.331646 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 18:05:50.340762 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 18:05:50.341819 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 18:05:50.345710 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 18:05:50.347856 systemd-networkd[1231]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 18:05:50.351853 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 18:05:50.354592 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 18:05:50.368439 lvm[1263]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 18:05:50.373976 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 18:05:50.399106 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 18:05:50.400190 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 18:05:50.412050 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 18:05:50.414842 lvm[1270]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 18:05:50.456931 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 18:05:50.457972 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 18:05:50.458848 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 18:05:50.458875 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 18:05:50.459557 systemd[1]: Reached target machines.target - Containers. Nov 12 18:05:50.461188 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 18:05:50.473904 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
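The "potentially unpredictable interface name" notes above come from Flatcar's catch-all default policy: zz-default.network matches any interface by name and enables DHCP, which is where eth0's DHCPv4 lease (10.0.0.144/16 from 10.0.0.1) comes from. Paraphrased rather than quoted, the shipped file amounts to:

    # /usr/lib/systemd/network/zz-default.network (sketch of the stock catch-all)
    [Match]
    Name=*

    [Network]
    DHCP=yes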
Nov 12 18:05:50.475709 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 18:05:50.476560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 18:05:50.477424 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 18:05:50.479305 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 18:05:50.483943 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 18:05:50.485413 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 18:05:50.495515 kernel: loop0: detected capacity change from 0 to 114328 Nov 12 18:05:50.494918 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 18:05:50.501345 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 18:05:50.501993 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 18:05:50.505853 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 18:05:50.545875 kernel: loop1: detected capacity change from 0 to 114432 Nov 12 18:05:50.593288 kernel: loop2: detected capacity change from 0 to 194512 Nov 12 18:05:50.630824 kernel: loop3: detected capacity change from 0 to 114328 Nov 12 18:05:50.635807 kernel: loop4: detected capacity change from 0 to 114432 Nov 12 18:05:50.639805 kernel: loop5: detected capacity change from 0 to 194512 Nov 12 18:05:50.643982 (sd-merge)[1292]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 18:05:50.644985 (sd-merge)[1292]: Merged extensions into '/usr'. Nov 12 18:05:50.648463 systemd[1]: Reloading requested from client PID 1278 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 18:05:50.648478 systemd[1]: Reloading... Nov 12 18:05:50.691831 zram_generator::config[1321]: No configuration found. Nov 12 18:05:50.714679 ldconfig[1274]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 18:05:50.788875 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 18:05:50.830648 systemd[1]: Reloading finished in 181 ms. Nov 12 18:05:50.845400 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 18:05:50.846564 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 18:05:50.862155 systemd[1]: Starting ensure-sysext.service... Nov 12 18:05:50.863756 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 18:05:50.866864 systemd[1]: Reloading requested from client PID 1362 ('systemctl') (unit ensure-sysext.service)... Nov 12 18:05:50.866878 systemd[1]: Reloading... Nov 12 18:05:50.879264 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 18:05:50.879526 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 18:05:50.880287 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
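The (sd-merge) lines above are systemd-sysext activating the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images as overlays on /usr; the kubernetes image is the kubernetes-v1.29.2-arm64.raw file Ignition downloaded and symlinked into /etc/extensions earlier, and the "Reloading" that follows lets systemd pick up the units the overlays add. (The systemd-tmpfiles "Duplicate line ... ignoring" warnings in the same span are benign: two tmpfiles.d fragments declare the same path, and the first one read wins.) Inspecting or re-running the merge by hand:

    systemd-sysext list      # show discovered extension images and their merge state
    systemd-sysext refresh   # unmerge and re-merge all extensions into /usr and /opt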
Nov 12 18:05:50.880507 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. Nov 12 18:05:50.880556 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. Nov 12 18:05:50.882905 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 18:05:50.882918 systemd-tmpfiles[1363]: Skipping /boot Nov 12 18:05:50.889486 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 18:05:50.889504 systemd-tmpfiles[1363]: Skipping /boot Nov 12 18:05:50.908901 zram_generator::config[1391]: No configuration found. Nov 12 18:05:50.995281 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 18:05:51.036962 systemd[1]: Reloading finished in 169 ms. Nov 12 18:05:51.051285 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 18:05:51.073396 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 18:05:51.075498 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 18:05:51.077465 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 18:05:51.079968 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 18:05:51.083871 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 18:05:51.092909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 18:05:51.094031 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 18:05:51.099049 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 18:05:51.101290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 18:05:51.102897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 18:05:51.103675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 18:05:51.104501 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 18:05:51.108160 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 18:05:51.108295 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 18:05:51.110415 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 18:05:51.110602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 18:05:51.117287 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 18:05:51.125178 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 18:05:51.130105 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 18:05:51.133171 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 18:05:51.135943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 18:05:51.137345 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 18:05:51.138888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Nov 12 18:05:51.139242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 18:05:51.140573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 18:05:51.140737 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 18:05:51.142338 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 18:05:51.144963 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 18:05:51.151154 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 18:05:51.157537 systemd-resolved[1437]: Positive Trust Anchors: Nov 12 18:05:51.158804 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 18:05:51.159331 systemd-resolved[1437]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 18:05:51.159366 systemd-resolved[1437]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 18:05:51.159816 augenrules[1470]: No rules Nov 12 18:05:51.163336 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 18:05:51.165222 systemd[1]: Finished ensure-sysext.service. Nov 12 18:05:51.165309 systemd-resolved[1437]: Defaulting to hostname 'linux'. Nov 12 18:05:51.166565 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 18:05:51.172116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 18:05:51.173834 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 18:05:51.175479 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 18:05:51.177340 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 18:05:51.178205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 18:05:51.180991 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 18:05:51.182954 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 18:05:51.184257 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 18:05:51.184462 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 18:05:51.185889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 18:05:51.186134 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 18:05:51.187340 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 18:05:51.187578 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 18:05:51.188999 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 18:05:51.189252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 12 18:05:51.190462 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 18:05:51.190774 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 18:05:51.194617 systemd[1]: Reached target network.target - Network. Nov 12 18:05:51.195650 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 18:05:51.196769 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 18:05:51.196947 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 18:05:51.197370 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 18:05:51.234684 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 18:05:51.235927 systemd-timesyncd[1495]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 18:05:51.235978 systemd-timesyncd[1495]: Initial clock synchronization to Tue 2024-11-12 18:05:51.058999 UTC. Nov 12 18:05:51.236139 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 18:05:51.236993 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 18:05:51.237879 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 18:05:51.238744 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 18:05:51.239791 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 18:05:51.239824 systemd[1]: Reached target paths.target - Path Units. Nov 12 18:05:51.240437 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 18:05:51.241300 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 18:05:51.242177 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 18:05:51.243046 systemd[1]: Reached target timers.target - Timer Units. Nov 12 18:05:51.244520 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 18:05:51.246724 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 18:05:51.248727 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 18:05:51.253745 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 18:05:51.254574 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 18:05:51.255319 systemd[1]: Reached target basic.target - Basic System. Nov 12 18:05:51.256130 systemd[1]: System is tainted: cgroupsv1 Nov 12 18:05:51.256174 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 18:05:51.256195 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 18:05:51.257283 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 18:05:51.259066 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 18:05:51.260714 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 18:05:51.264993 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
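systemd-timesyncd above settles the clock against 10.0.0.1:123, the only server it was handed (plausibly via DHCP, since it matches the gateway). Pinning a server explicitly would be a drop-in along these lines (hypothetical path and value, shown only to illustrate the mechanism):

    # /etc/systemd/timesyncd.conf.d/10-ntp.conf
    [Time]
    NTP=10.0.0.1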
Nov 12 18:05:51.265722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 18:05:51.266777 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 18:05:51.272922 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 18:05:51.275128 jq[1511]: false Nov 12 18:05:51.274615 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 18:05:51.279025 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 18:05:51.283214 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 18:05:51.285150 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 18:05:51.288987 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 18:05:51.293275 extend-filesystems[1512]: Found loop3 Nov 12 18:05:51.298250 extend-filesystems[1512]: Found loop4 Nov 12 18:05:51.298250 extend-filesystems[1512]: Found loop5 Nov 12 18:05:51.298250 extend-filesystems[1512]: Found vda Nov 12 18:05:51.298250 extend-filesystems[1512]: Found vda1 Nov 12 18:05:51.298250 extend-filesystems[1512]: Found vda2 Nov 12 18:05:51.298250 extend-filesystems[1512]: Found vda3 Nov 12 18:05:51.298250 extend-filesystems[1512]: Found usr Nov 12 18:05:51.298250 extend-filesystems[1512]: Found vda4 Nov 12 18:05:51.298250 extend-filesystems[1512]: Found vda6 Nov 12 18:05:51.298250 extend-filesystems[1512]: Found vda7 Nov 12 18:05:51.298250 extend-filesystems[1512]: Found vda9 Nov 12 18:05:51.298250 extend-filesystems[1512]: Checking size of /dev/vda9 Nov 12 18:05:51.294242 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 18:05:51.299768 dbus-daemon[1510]: [system] SELinux support is enabled Nov 12 18:05:51.301409 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 18:05:51.307364 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 18:05:51.316686 jq[1527]: true Nov 12 18:05:51.307602 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 18:05:51.323026 extend-filesystems[1512]: Resized partition /dev/vda9 Nov 12 18:05:51.307946 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 18:05:51.308154 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 18:05:51.322143 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 18:05:51.322381 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
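The extend-filesystems pass above enumerates the disk, and the run that follows grows the root filesystem online: resize2fs takes /dev/vda9 from 553472 to 1864699 4k blocks while it is mounted on /. The equivalent manual step is a single command, since ext4 supports online growth:

    resize2fs /dev/vda9   # grow the ext4 filesystem to fill the (already enlarged) partition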
Nov 12 18:05:51.329844 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1240) Nov 12 18:05:51.339179 extend-filesystems[1542]: resize2fs 1.47.1 (20-May-2024) Nov 12 18:05:51.349203 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 18:05:51.342892 (ntainerd)[1544]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 18:05:51.349456 tar[1540]: linux-arm64/helm Nov 12 18:05:51.349728 jq[1543]: true Nov 12 18:05:51.349716 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 18:05:51.349748 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 18:05:51.350770 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 18:05:51.350800 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 18:05:51.382839 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 18:05:51.389131 update_engine[1524]: I20241112 18:05:51.388816 1524 main.cc:92] Flatcar Update Engine starting Nov 12 18:05:51.391532 systemd[1]: Started update-engine.service - Update Engine. Nov 12 18:05:51.402234 update_engine[1524]: I20241112 18:05:51.391541 1524 update_check_scheduler.cc:74] Next update check in 9m12s Nov 12 18:05:51.393479 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 18:05:51.402846 systemd-logind[1523]: Watching system buttons on /dev/input/event0 (Power Button) Nov 12 18:05:51.403653 extend-filesystems[1542]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 18:05:51.403653 extend-filesystems[1542]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 18:05:51.403653 extend-filesystems[1542]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 18:05:51.403031 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 18:05:51.425451 extend-filesystems[1512]: Resized filesystem in /dev/vda9 Nov 12 18:05:51.408965 systemd-logind[1523]: New seat seat0. Nov 12 18:05:51.410122 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 18:05:51.411456 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 18:05:51.411744 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 18:05:51.453814 bash[1570]: Updated "/home/core/.ssh/authorized_keys" Nov 12 18:05:51.459082 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 18:05:51.460923 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 18:05:51.482362 locksmithd[1571]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 18:05:51.586581 containerd[1544]: time="2024-11-12T18:05:51.586445880Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 18:05:51.613216 containerd[1544]: time="2024-11-12T18:05:51.613175200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Nov 12 18:05:51.614826 containerd[1544]: time="2024-11-12T18:05:51.614578600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 18:05:51.614826 containerd[1544]: time="2024-11-12T18:05:51.614613840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 18:05:51.614826 containerd[1544]: time="2024-11-12T18:05:51.614637600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 18:05:51.614826 containerd[1544]: time="2024-11-12T18:05:51.614805320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 18:05:51.614826 containerd[1544]: time="2024-11-12T18:05:51.614823760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 18:05:51.614977 containerd[1544]: time="2024-11-12T18:05:51.614881080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 18:05:51.614977 containerd[1544]: time="2024-11-12T18:05:51.614893360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 18:05:51.615102 containerd[1544]: time="2024-11-12T18:05:51.615077080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 18:05:51.615127 containerd[1544]: time="2024-11-12T18:05:51.615100320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 18:05:51.615127 containerd[1544]: time="2024-11-12T18:05:51.615113640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 18:05:51.615127 containerd[1544]: time="2024-11-12T18:05:51.615123280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 18:05:51.615214 containerd[1544]: time="2024-11-12T18:05:51.615199960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 18:05:51.615413 containerd[1544]: time="2024-11-12T18:05:51.615385680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 18:05:51.615530 containerd[1544]: time="2024-11-12T18:05:51.615513320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 18:05:51.615551 containerd[1544]: time="2024-11-12T18:05:51.615531360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Nov 12 18:05:51.615629 containerd[1544]: time="2024-11-12T18:05:51.615610960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 18:05:51.615680 containerd[1544]: time="2024-11-12T18:05:51.615667320Z" level=info msg="metadata content store policy set" policy=shared Nov 12 18:05:51.618906 containerd[1544]: time="2024-11-12T18:05:51.618872600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 18:05:51.618965 containerd[1544]: time="2024-11-12T18:05:51.618921560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 18:05:51.618965 containerd[1544]: time="2024-11-12T18:05:51.618939480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 18:05:51.618965 containerd[1544]: time="2024-11-12T18:05:51.618959480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 18:05:51.619017 containerd[1544]: time="2024-11-12T18:05:51.618973960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 18:05:51.619252 containerd[1544]: time="2024-11-12T18:05:51.619216000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619650560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619810800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619828440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619842040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619855520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619868160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619880920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619894280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619907720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619919680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619931200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619941920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619970360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.621776 containerd[1544]: time="2024-11-12T18:05:51.619983600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.619996360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620008120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620020200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620033480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620045000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620058720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620071360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620084600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620095880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620107400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620118360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620133560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620154080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620165880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622097 containerd[1544]: time="2024-11-12T18:05:51.620175960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 18:05:51.622346 containerd[1544]: time="2024-11-12T18:05:51.620281360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Nov 12 18:05:51.622346 containerd[1544]: time="2024-11-12T18:05:51.620298840Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 18:05:51.622346 containerd[1544]: time="2024-11-12T18:05:51.620309280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 18:05:51.622346 containerd[1544]: time="2024-11-12T18:05:51.620321200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 18:05:51.622346 containerd[1544]: time="2024-11-12T18:05:51.620330520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622346 containerd[1544]: time="2024-11-12T18:05:51.620341160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 18:05:51.622346 containerd[1544]: time="2024-11-12T18:05:51.620350160Z" level=info msg="NRI interface is disabled by configuration." Nov 12 18:05:51.622346 containerd[1544]: time="2024-11-12T18:05:51.620360040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 18:05:51.622481 containerd[1544]: time="2024-11-12T18:05:51.620695480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 18:05:51.622481 containerd[1544]: time="2024-11-12T18:05:51.620750480Z" level=info msg="Connect containerd service" Nov 12 18:05:51.622481 containerd[1544]: time="2024-11-12T18:05:51.620774680Z" level=info msg="using legacy CRI server" Nov 12 18:05:51.622481 containerd[1544]: time="2024-11-12T18:05:51.620781040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 18:05:51.622481 containerd[1544]: time="2024-11-12T18:05:51.620889920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 18:05:51.622481 containerd[1544]: time="2024-11-12T18:05:51.621419320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 18:05:51.622963 containerd[1544]: time="2024-11-12T18:05:51.622904840Z" level=info msg="Start subscribing containerd event" Nov 12 18:05:51.622971 systemd-networkd[1231]: eth0: Gained IPv6LL Nov 12 18:05:51.623242 containerd[1544]: time="2024-11-12T18:05:51.623041480Z" level=info msg="Start recovering state" Nov 12 18:05:51.623242 containerd[1544]: time="2024-11-12T18:05:51.623126720Z" level=info msg="Start event monitor" Nov 12 18:05:51.623242 containerd[1544]: time="2024-11-12T18:05:51.623138720Z" level=info msg="Start snapshots syncer" Nov 12 18:05:51.623242 containerd[1544]: time="2024-11-12T18:05:51.623147920Z" level=info msg="Start cni network conf syncer for default" Nov 12 18:05:51.623242 containerd[1544]: time="2024-11-12T18:05:51.623159880Z" level=info msg="Start streaming server" Nov 12 18:05:51.623381 containerd[1544]: time="2024-11-12T18:05:51.623355600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 18:05:51.623479 containerd[1544]: time="2024-11-12T18:05:51.623465040Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 18:05:51.623594 containerd[1544]: time="2024-11-12T18:05:51.623577840Z" level=info msg="containerd successfully booted in 0.038545s" Nov 12 18:05:51.624561 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 18:05:51.629922 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 18:05:51.633286 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 18:05:51.644049 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 18:05:51.647033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 18:05:51.649582 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 18:05:51.672487 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 18:05:51.672933 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 18:05:51.675660 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
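The `failed to load cni during init` error above is expected on a first boot: the CRI plugin found nothing under /etc/cni/net.d (the NetworkPluginConfDir from the config dump) and defers pod networking until a config file appears. A minimal sketch of dropping one in, assuming the stock bridge and host-local plugins are present under /opt/cni/bin as configured above; the network name and subnet are hypothetical placeholders, not values from this host:

# Sketch: write a minimal CNI conflist so the CRI plugin can set up pod
# networking. Directory paths come from the config dump above; the name
# "demo-net" and the 10.88.0.0/16 subnet are illustrative.
import json
import pathlib

conf = {
    "cniVersion": "0.4.0",
    "name": "demo-net",
    "plugins": [
        {
            "type": "bridge",          # assumes /opt/cni/bin/bridge exists
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",  # assumes /opt/cni/bin/host-local exists
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-demo.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conf, indent=2))

The `Start cni network conf syncer for default` entry above is the watcher that then picks such files up without a containerd restart.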
Nov 12 18:05:51.682330 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 18:05:51.753675 tar[1540]: linux-arm64/LICENSE Nov 12 18:05:51.753675 tar[1540]: linux-arm64/README.md Nov 12 18:05:51.762269 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 18:05:51.954352 sshd_keygen[1541]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 18:05:51.973509 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 18:05:51.985023 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 18:05:51.989851 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 18:05:51.990079 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 18:05:51.992923 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 18:05:52.008087 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 18:05:52.019099 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 18:05:52.021010 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 12 18:05:52.022002 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 18:05:52.133889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:05:52.135098 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 18:05:52.137518 (kubelet)[1647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 18:05:52.139966 systemd[1]: Startup finished in 5.907s (kernel) + 3.088s (userspace) = 8.996s. Nov 12 18:05:52.616100 kubelet[1647]: E1112 18:05:52.616024 1647 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 18:05:52.619023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 18:05:52.619195 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 18:05:56.656649 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 18:05:56.672001 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:34618.service - OpenSSH per-connection server daemon (10.0.0.1:34618). Nov 12 18:05:56.717300 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 34618 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:05:56.718764 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:05:56.725477 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 18:05:56.733984 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 18:05:56.735584 systemd-logind[1523]: New session 1 of user core. Nov 12 18:05:56.743034 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 18:05:56.746111 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 18:05:56.752467 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 18:05:56.817375 systemd[1668]: Queued start job for default target default.target. Nov 12 18:05:56.817714 systemd[1668]: Created slice app.slice - User Application Slice. 
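The kubelet crash loop that starts here (`/var/lib/kubelet/config.yaml: no such file or directory`) is likewise expected until `kubeadm init` or `kubeadm join` runs, since kubeadm is what writes that file. For illustration only, a sketch that puts a minimal KubeletConfiguration in place; the field values are placeholders rather than what kubeadm would generate for this node:

# Sketch: create the config file the failing kubelet.service is looking for.
# kubeadm normally writes this during init/join; these fields are
# illustrative. cgroupDriver matches SystemdCgroup:false in the containerd
# config dump above, and staticPodPath matches the path the kubelet later
# logs as "Adding static pod path".
import pathlib

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
cgroupDriver: cgroupfs
staticPodPath: /etc/kubernetes/manifests
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(MINIMAL_KUBELET_CONFIG)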
Nov 12 18:05:56.817738 systemd[1668]: Reached target paths.target - Paths. Nov 12 18:05:56.817750 systemd[1668]: Reached target timers.target - Timers. Nov 12 18:05:56.835919 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 18:05:56.841157 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 18:05:56.841215 systemd[1668]: Reached target sockets.target - Sockets. Nov 12 18:05:56.841226 systemd[1668]: Reached target basic.target - Basic System. Nov 12 18:05:56.841260 systemd[1668]: Reached target default.target - Main User Target. Nov 12 18:05:56.841282 systemd[1668]: Startup finished in 84ms. Nov 12 18:05:56.841512 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 18:05:56.842687 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 18:05:56.896996 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:34632.service - OpenSSH per-connection server daemon (10.0.0.1:34632). Nov 12 18:05:56.929458 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 34632 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:05:56.930514 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:05:56.934107 systemd-logind[1523]: New session 2 of user core. Nov 12 18:05:56.945988 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 18:05:56.995387 sshd[1680]: pam_unix(sshd:session): session closed for user core Nov 12 18:05:57.008088 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:34640.service - OpenSSH per-connection server daemon (10.0.0.1:34640). Nov 12 18:05:57.008478 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:34632.service: Deactivated successfully. Nov 12 18:05:57.010159 systemd-logind[1523]: Session 2 logged out. Waiting for processes to exit. Nov 12 18:05:57.010727 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 18:05:57.011693 systemd-logind[1523]: Removed session 2. Nov 12 18:05:57.039718 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 34640 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:05:57.040967 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:05:57.044567 systemd-logind[1523]: New session 3 of user core. Nov 12 18:05:57.056982 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 18:05:57.104154 sshd[1685]: pam_unix(sshd:session): session closed for user core Nov 12 18:05:57.111986 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:34650.service - OpenSSH per-connection server daemon (10.0.0.1:34650). Nov 12 18:05:57.112318 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:34640.service: Deactivated successfully. Nov 12 18:05:57.113892 systemd-logind[1523]: Session 3 logged out. Waiting for processes to exit. Nov 12 18:05:57.114456 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 18:05:57.116200 systemd-logind[1523]: Removed session 3. Nov 12 18:05:57.143458 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 34650 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:05:57.144573 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:05:57.148161 systemd-logind[1523]: New session 4 of user core. Nov 12 18:05:57.161996 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 12 18:05:57.212358 sshd[1693]: pam_unix(sshd:session): session closed for user core Nov 12 18:05:57.229027 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:34654.service - OpenSSH per-connection server daemon (10.0.0.1:34654). Nov 12 18:05:57.229412 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:34650.service: Deactivated successfully. Nov 12 18:05:57.231186 systemd-logind[1523]: Session 4 logged out. Waiting for processes to exit. Nov 12 18:05:57.231692 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 18:05:57.233101 systemd-logind[1523]: Removed session 4. Nov 12 18:05:57.261104 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 34654 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:05:57.262246 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:05:57.266537 systemd-logind[1523]: New session 5 of user core. Nov 12 18:05:57.276029 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 18:05:57.343705 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 18:05:57.343997 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 18:05:57.359527 sudo[1708]: pam_unix(sudo:session): session closed for user root Nov 12 18:05:57.361146 sshd[1701]: pam_unix(sshd:session): session closed for user core Nov 12 18:05:57.377138 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:34664.service - OpenSSH per-connection server daemon (10.0.0.1:34664). Nov 12 18:05:57.377564 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:34654.service: Deactivated successfully. Nov 12 18:05:57.379368 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 18:05:57.379801 systemd-logind[1523]: Session 5 logged out. Waiting for processes to exit. Nov 12 18:05:57.381366 systemd-logind[1523]: Removed session 5. Nov 12 18:05:57.408976 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 34664 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:05:57.410241 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:05:57.413846 systemd-logind[1523]: New session 6 of user core. Nov 12 18:05:57.427026 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 18:05:57.476753 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 18:05:57.477052 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 18:05:57.480016 sudo[1718]: pam_unix(sudo:session): session closed for user root Nov 12 18:05:57.484221 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 18:05:57.484487 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 18:05:57.500229 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 18:05:57.501394 auditctl[1721]: No rules Nov 12 18:05:57.502174 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 18:05:57.502404 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 18:05:57.504247 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 18:05:57.526039 augenrules[1740]: No rules Nov 12 18:05:57.527146 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Nov 12 18:05:57.528261 sudo[1717]: pam_unix(sudo:session): session closed for user root Nov 12 18:05:57.529679 sshd[1710]: pam_unix(sshd:session): session closed for user core Nov 12 18:05:57.539110 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:34676.service - OpenSSH per-connection server daemon (10.0.0.1:34676). Nov 12 18:05:57.539482 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:34664.service: Deactivated successfully. Nov 12 18:05:57.540617 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 18:05:57.541521 systemd-logind[1523]: Session 6 logged out. Waiting for processes to exit. Nov 12 18:05:57.542565 systemd-logind[1523]: Removed session 6. Nov 12 18:05:57.572984 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 34676 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:05:57.574026 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:05:57.577812 systemd-logind[1523]: New session 7 of user core. Nov 12 18:05:57.587998 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 18:05:57.637136 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 18:05:57.637386 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 18:05:57.945293 (dockerd)[1772]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 18:05:57.945760 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 18:05:58.196977 dockerd[1772]: time="2024-11-12T18:05:58.196543696Z" level=info msg="Starting up" Nov 12 18:05:58.420477 dockerd[1772]: time="2024-11-12T18:05:58.420435266Z" level=info msg="Loading containers: start." Nov 12 18:05:58.507804 kernel: Initializing XFRM netlink socket Nov 12 18:05:58.559601 systemd-networkd[1231]: docker0: Link UP Nov 12 18:05:58.576069 dockerd[1772]: time="2024-11-12T18:05:58.576028476Z" level=info msg="Loading containers: done." Nov 12 18:05:58.587250 dockerd[1772]: time="2024-11-12T18:05:58.587199817Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 18:05:58.587368 dockerd[1772]: time="2024-11-12T18:05:58.587290509Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 18:05:58.587396 dockerd[1772]: time="2024-11-12T18:05:58.587383655Z" level=info msg="Daemon has completed initialization" Nov 12 18:05:58.612698 dockerd[1772]: time="2024-11-12T18:05:58.612574519Z" level=info msg="API listen on /run/docker.sock" Nov 12 18:05:58.612799 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 18:05:59.189761 containerd[1544]: time="2024-11-12T18:05:59.189702616Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 18:05:59.889334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342074580.mount: Deactivated successfully. 
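The dockerd warning about native diff above is driven by a kernel build option: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, overlayfs may record directory renames as redirects instead of plain copy-ups, so the upper layer cannot be read back directly as an image diff and docker falls back to a slower comparison. Where the kernel exposes its build config (this needs CONFIG_IKCONFIG_PROC, which not every image enables), the option can be checked with a sketch like:

# Sketch: look up a kernel build option via /proc/config.gz, if present.
import gzip
import pathlib

def kernel_config_value(option, path="/proc/config.gz"):
    p = pathlib.Path(path)
    if not p.exists():
        return None  # kernel built without CONFIG_IKCONFIG_PROC
    with gzip.open(p, "rt") as f:
        for line in f:
            if line.startswith(option + "="):
                return line.strip().split("=", 1)[1]
    return None

print(kernel_config_value("CONFIG_OVERLAY_FS_REDIRECT_DIR"))  # expect "y" here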
Nov 12 18:06:01.065245 containerd[1544]: time="2024-11-12T18:06:01.065201529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:01.065873 containerd[1544]: time="2024-11-12T18:06:01.065836004Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=32201617" Nov 12 18:06:01.066712 containerd[1544]: time="2024-11-12T18:06:01.066665852Z" level=info msg="ImageCreate event name:\"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:01.069315 containerd[1544]: time="2024-11-12T18:06:01.069264346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:01.071384 containerd[1544]: time="2024-11-12T18:06:01.071348284Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"32198415\" in 1.881605956s" Nov 12 18:06:01.071437 containerd[1544]: time="2024-11-12T18:06:01.071387581Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\"" Nov 12 18:06:01.089595 containerd[1544]: time="2024-11-12T18:06:01.089559480Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 18:06:02.576555 containerd[1544]: time="2024-11-12T18:06:02.576491951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:02.577072 containerd[1544]: time="2024-11-12T18:06:02.577030441Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=29381046" Nov 12 18:06:02.577876 containerd[1544]: time="2024-11-12T18:06:02.577838475Z" level=info msg="ImageCreate event name:\"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:02.581645 containerd[1544]: time="2024-11-12T18:06:02.581582296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:02.582584 containerd[1544]: time="2024-11-12T18:06:02.582536355Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"30783669\" in 1.492941503s" Nov 12 18:06:02.582584 containerd[1544]: time="2024-11-12T18:06:02.582569839Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\"" Nov 12 
18:06:02.599975 containerd[1544]: time="2024-11-12T18:06:02.599948226Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 18:06:02.869635 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 18:06:02.879936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 18:06:02.967815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:06:02.971515 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 18:06:03.011437 kubelet[2011]: E1112 18:06:03.011351 2011 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 18:06:03.015019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 18:06:03.015191 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 18:06:03.748831 containerd[1544]: time="2024-11-12T18:06:03.748658348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:03.749316 containerd[1544]: time="2024-11-12T18:06:03.749139288Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=15770290" Nov 12 18:06:03.749978 containerd[1544]: time="2024-11-12T18:06:03.749952887Z" level=info msg="ImageCreate event name:\"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:03.752852 containerd[1544]: time="2024-11-12T18:06:03.752795867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:03.754067 containerd[1544]: time="2024-11-12T18:06:03.753942721Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"17172931\" in 1.153962237s" Nov 12 18:06:03.754067 containerd[1544]: time="2024-11-12T18:06:03.753975076Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\"" Nov 12 18:06:03.771308 containerd[1544]: time="2024-11-12T18:06:03.771281403Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 18:06:04.757301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount561302987.mount: Deactivated successfully. 
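Each `Pulled image ... in ...` record above reports a byte count and a wall-clock duration, which is enough for a back-of-envelope throughput check on the three control-plane pulls so far:

# Sketch: registry throughput from the sizes and durations containerd
# logged above (bytes and seconds copied from the records).
pulls = {
    "kube-apiserver:v1.29.10": (32_198_415, 1.881605956),
    "kube-controller-manager:v1.29.10": (30_783_669, 1.492941503),
    "kube-scheduler:v1.29.10": (17_172_931, 1.153962237),
}
for image, (size_bytes, seconds) in pulls.items():
    print(f"{image}: {size_bytes / seconds / 1e6:.1f} MB/s")
# kube-apiserver:v1.29.10: 17.1 MB/s
# kube-controller-manager:v1.29.10: 20.6 MB/s
# kube-scheduler:v1.29.10: 14.9 MB/s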
Nov 12 18:06:05.074573 containerd[1544]: time="2024-11-12T18:06:05.074447646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:05.075227 containerd[1544]: time="2024-11-12T18:06:05.075173646Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=25272231" Nov 12 18:06:05.076093 containerd[1544]: time="2024-11-12T18:06:05.076058107Z" level=info msg="ImageCreate event name:\"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:05.077871 containerd[1544]: time="2024-11-12T18:06:05.077837069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:05.078536 containerd[1544]: time="2024-11-12T18:06:05.078497646Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"25271248\" in 1.307180851s" Nov 12 18:06:05.078568 containerd[1544]: time="2024-11-12T18:06:05.078534342Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\"" Nov 12 18:06:05.095717 containerd[1544]: time="2024-11-12T18:06:05.095690045Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 18:06:05.794949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3458440232.mount: Deactivated successfully. 
Nov 12 18:06:06.395248 containerd[1544]: time="2024-11-12T18:06:06.395201598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:06.396729 containerd[1544]: time="2024-11-12T18:06:06.396680300Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Nov 12 18:06:06.399270 containerd[1544]: time="2024-11-12T18:06:06.398749263Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:06.402664 containerd[1544]: time="2024-11-12T18:06:06.402634733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:06.403851 containerd[1544]: time="2024-11-12T18:06:06.403823706Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.30796647s" Nov 12 18:06:06.403942 containerd[1544]: time="2024-11-12T18:06:06.403927551Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Nov 12 18:06:06.421129 containerd[1544]: time="2024-11-12T18:06:06.421101447Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 18:06:06.817967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245518355.mount: Deactivated successfully. 
Nov 12 18:06:06.821752 containerd[1544]: time="2024-11-12T18:06:06.821714741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:06.822641 containerd[1544]: time="2024-11-12T18:06:06.822610198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Nov 12 18:06:06.823508 containerd[1544]: time="2024-11-12T18:06:06.823465592Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:06.825506 containerd[1544]: time="2024-11-12T18:06:06.825474401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:06.826938 containerd[1544]: time="2024-11-12T18:06:06.826914475Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 405.776712ms" Nov 12 18:06:06.826995 containerd[1544]: time="2024-11-12T18:06:06.826943655Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Nov 12 18:06:06.844456 containerd[1544]: time="2024-11-12T18:06:06.844411586Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 18:06:07.372996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390484118.mount: Deactivated successfully. Nov 12 18:06:09.558852 containerd[1544]: time="2024-11-12T18:06:09.558433303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:09.560055 containerd[1544]: time="2024-11-12T18:06:09.560008936Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Nov 12 18:06:09.561299 containerd[1544]: time="2024-11-12T18:06:09.561247262Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:09.564202 containerd[1544]: time="2024-11-12T18:06:09.564161631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:09.565461 containerd[1544]: time="2024-11-12T18:06:09.565386588Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.720940351s" Nov 12 18:06:09.565461 containerd[1544]: time="2024-11-12T18:06:09.565414524Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Nov 12 18:06:13.265509 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Nov 12 18:06:13.274029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 18:06:13.357958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:06:13.360681 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 18:06:13.398595 kubelet[2238]: E1112 18:06:13.398547 2238 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 18:06:13.401356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 18:06:13.401528 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 18:06:14.824356 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:06:14.838017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 18:06:14.854697 systemd[1]: Reloading requested from client PID 2255 ('systemctl') (unit session-7.scope)... Nov 12 18:06:14.854710 systemd[1]: Reloading... Nov 12 18:06:14.918836 zram_generator::config[2297]: No configuration found. Nov 12 18:06:15.026551 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 18:06:15.073687 systemd[1]: Reloading finished in 218 ms. Nov 12 18:06:15.111662 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 18:06:15.114972 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 18:06:15.115204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:06:15.117074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 18:06:15.207939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 18:06:15.212049 (kubelet)[2354]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 18:06:15.250991 kubelet[2354]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 18:06:15.250991 kubelet[2354]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 18:06:15.250991 kubelet[2354]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 18:06:15.251289 kubelet[2354]: I1112 18:06:15.251033 2354 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 18:06:16.304024 kubelet[2354]: I1112 18:06:16.303983 2354 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 18:06:16.304024 kubelet[2354]: I1112 18:06:16.304014 2354 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 18:06:16.304377 kubelet[2354]: I1112 18:06:16.304219 2354 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 18:06:16.345437 kubelet[2354]: I1112 18:06:16.345370 2354 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 18:06:16.346135 kubelet[2354]: E1112 18:06:16.346113 2354 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:16.354659 kubelet[2354]: I1112 18:06:16.354631 2354 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 18:06:16.355724 kubelet[2354]: I1112 18:06:16.355691 2354 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 18:06:16.355926 kubelet[2354]: I1112 18:06:16.355900 2354 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 18:06:16.355926 kubelet[2354]: I1112 18:06:16.355926 2354 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 18:06:16.356033 kubelet[2354]: I1112 18:06:16.355934 2354 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 18:06:16.357039 kubelet[2354]: I1112 18:06:16.357008 2354 state_mem.go:36] "Initialized new in-memory state store" Nov 12 18:06:16.359133 kubelet[2354]: I1112 18:06:16.359084 2354 kubelet.go:396] "Attempting to sync node with API server" Nov 12 18:06:16.359133 kubelet[2354]: 
I1112 18:06:16.359107 2354 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 18:06:16.359133 kubelet[2354]: I1112 18:06:16.359127 2354 kubelet.go:312] "Adding apiserver pod source" Nov 12 18:06:16.359133 kubelet[2354]: I1112 18:06:16.359141 2354 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 18:06:16.361793 kubelet[2354]: W1112 18:06:16.359500 2354 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:16.361793 kubelet[2354]: E1112 18:06:16.359557 2354 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:16.361793 kubelet[2354]: I1112 18:06:16.361013 2354 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 18:06:16.361793 kubelet[2354]: I1112 18:06:16.361673 2354 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 18:06:16.362218 kubelet[2354]: W1112 18:06:16.362158 2354 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:16.362218 kubelet[2354]: E1112 18:06:16.362201 2354 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:16.362292 kubelet[2354]: W1112 18:06:16.362255 2354 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 12 18:06:16.363129 kubelet[2354]: I1112 18:06:16.363002 2354 server.go:1256] "Started kubelet" Nov 12 18:06:16.363129 kubelet[2354]: I1112 18:06:16.363084 2354 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 18:06:16.363410 kubelet[2354]: I1112 18:06:16.363391 2354 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 18:06:16.363666 kubelet[2354]: I1112 18:06:16.363651 2354 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 18:06:16.363942 kubelet[2354]: I1112 18:06:16.363921 2354 server.go:461] "Adding debug handlers to kubelet server" Nov 12 18:06:16.365205 kubelet[2354]: I1112 18:06:16.365177 2354 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 18:06:16.369438 kubelet[2354]: I1112 18:06:16.366820 2354 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 18:06:16.369438 kubelet[2354]: I1112 18:06:16.366890 2354 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 18:06:16.369438 kubelet[2354]: I1112 18:06:16.366937 2354 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 18:06:16.369438 kubelet[2354]: E1112 18:06:16.367102 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms" Nov 12 18:06:16.369438 kubelet[2354]: W1112 18:06:16.367159 2354 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:16.369438 kubelet[2354]: E1112 18:06:16.367188 2354 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:16.369438 kubelet[2354]: I1112 18:06:16.368817 2354 factory.go:221] Registration of the containerd container factory successfully Nov 12 18:06:16.369438 kubelet[2354]: I1112 18:06:16.368829 2354 factory.go:221] Registration of the systemd container factory successfully Nov 12 18:06:16.369438 kubelet[2354]: I1112 18:06:16.368881 2354 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 18:06:16.369800 kubelet[2354]: E1112 18:06:16.369767 2354 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 18:06:16.371749 kubelet[2354]: E1112 18:06:16.371729 2354 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18074ac16cdf8c1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 18:06:16.362978333 +0000 UTC m=+1.147913668,LastTimestamp:2024-11-12 18:06:16.362978333 +0000 UTC m=+1.147913668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 18:06:16.378324 kubelet[2354]: I1112 18:06:16.378300 2354 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 18:06:16.379487 kubelet[2354]: I1112 18:06:16.379461 2354 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 18:06:16.379487 kubelet[2354]: I1112 18:06:16.379485 2354 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 18:06:16.379575 kubelet[2354]: I1112 18:06:16.379515 2354 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 18:06:16.379597 kubelet[2354]: E1112 18:06:16.379576 2354 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 18:06:16.380905 kubelet[2354]: W1112 18:06:16.380860 2354 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:16.380905 kubelet[2354]: E1112 18:06:16.380907 2354 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:16.388501 kubelet[2354]: I1112 18:06:16.388484 2354 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 18:06:16.388634 kubelet[2354]: I1112 18:06:16.388626 2354 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 18:06:16.388715 kubelet[2354]: I1112 18:06:16.388706 2354 state_mem.go:36] "Initialized new in-memory state store" Nov 12 18:06:16.460207 kubelet[2354]: I1112 18:06:16.460175 2354 policy_none.go:49] "None policy: Start" Nov 12 18:06:16.461087 kubelet[2354]: I1112 18:06:16.461071 2354 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 18:06:16.461211 kubelet[2354]: I1112 18:06:16.461201 2354 state_mem.go:35] "Initializing new in-memory state store" Nov 12 18:06:16.466429 kubelet[2354]: I1112 18:06:16.466398 2354 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 18:06:16.466813 kubelet[2354]: I1112 18:06:16.466774 2354 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 18:06:16.468278 kubelet[2354]: E1112 18:06:16.468258 2354 eviction_manager.go:282] "Eviction manager: failed to get 
summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 18:06:16.468387 kubelet[2354]: I1112 18:06:16.468343 2354 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 18:06:16.468829 kubelet[2354]: E1112 18:06:16.468805 2354 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Nov 12 18:06:16.479943 kubelet[2354]: I1112 18:06:16.479909 2354 topology_manager.go:215] "Topology Admit Handler" podUID="57708ffe6213785466d38730743c5f54" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 18:06:16.480677 kubelet[2354]: I1112 18:06:16.480655 2354 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 18:06:16.482061 kubelet[2354]: I1112 18:06:16.481629 2354 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 18:06:16.567482 kubelet[2354]: I1112 18:06:16.567364 2354 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 18:06:16.567482 kubelet[2354]: E1112 18:06:16.567399 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms" Nov 12 18:06:16.667893 kubelet[2354]: I1112 18:06:16.667828 2354 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57708ffe6213785466d38730743c5f54-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"57708ffe6213785466d38730743c5f54\") " pod="kube-system/kube-apiserver-localhost" Nov 12 18:06:16.667893 kubelet[2354]: I1112 18:06:16.667874 2354 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57708ffe6213785466d38730743c5f54-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"57708ffe6213785466d38730743c5f54\") " pod="kube-system/kube-apiserver-localhost" Nov 12 18:06:16.667998 kubelet[2354]: I1112 18:06:16.667913 2354 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57708ffe6213785466d38730743c5f54-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"57708ffe6213785466d38730743c5f54\") " pod="kube-system/kube-apiserver-localhost" Nov 12 18:06:16.667998 kubelet[2354]: I1112 18:06:16.667962 2354 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:06:16.668067 kubelet[2354]: I1112 18:06:16.668000 2354 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:06:16.668067 kubelet[2354]: I1112 18:06:16.668038 2354 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:06:16.668067 kubelet[2354]: I1112 18:06:16.668060 2354 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:06:16.668154 kubelet[2354]: I1112 18:06:16.668091 2354 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 18:06:16.671120 kubelet[2354]: I1112 18:06:16.671066 2354 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 18:06:16.671423 kubelet[2354]: E1112 18:06:16.671407 2354 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Nov 12 18:06:16.784794 kubelet[2354]: E1112 18:06:16.784753 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:16.785102 kubelet[2354]: E1112 18:06:16.785086 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:16.785528 containerd[1544]: time="2024-11-12T18:06:16.785324997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:57708ffe6213785466d38730743c5f54,Namespace:kube-system,Attempt:0,}" Nov 12 18:06:16.785528 containerd[1544]: time="2024-11-12T18:06:16.785402767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}" Nov 12 18:06:16.786002 kubelet[2354]: E1112 18:06:16.785983 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:16.786613 containerd[1544]: time="2024-11-12T18:06:16.786391240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}" Nov 12 18:06:16.968375 kubelet[2354]: E1112 18:06:16.968294 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms" Nov 12 18:06:17.072756 kubelet[2354]: I1112 18:06:17.072729 2354 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 18:06:17.073058 kubelet[2354]: E1112 18:06:17.073043 2354 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Nov 12 18:06:17.160599 kubelet[2354]: W1112 18:06:17.160527 2354 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:17.160599 kubelet[2354]: E1112 18:06:17.160599 2354 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:17.196293 kubelet[2354]: W1112 18:06:17.196245 2354 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:17.196293 kubelet[2354]: E1112 18:06:17.196276 2354 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:17.213618 kubelet[2354]: W1112 18:06:17.213561 2354 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:17.213618 kubelet[2354]: E1112 18:06:17.213603 2354 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Nov 12 18:06:17.267345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount237588141.mount: Deactivated successfully. 
Nov 12 18:06:17.272569 containerd[1544]: time="2024-11-12T18:06:17.272284581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 18:06:17.273504 containerd[1544]: time="2024-11-12T18:06:17.273461018Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 18:06:17.274059 containerd[1544]: time="2024-11-12T18:06:17.274023336Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 18:06:17.274997 containerd[1544]: time="2024-11-12T18:06:17.274967674Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 18:06:17.276622 containerd[1544]: time="2024-11-12T18:06:17.276442676Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 18:06:17.276819 containerd[1544]: time="2024-11-12T18:06:17.276712504Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 18:06:17.277474 containerd[1544]: time="2024-11-12T18:06:17.277424705Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Nov 12 18:06:17.278433 containerd[1544]: time="2024-11-12T18:06:17.278367964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 18:06:17.280758 containerd[1544]: time="2024-11-12T18:06:17.280733986Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 495.277066ms"
Nov 12 18:06:17.282112 containerd[1544]: time="2024-11-12T18:06:17.281988561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 496.586392ms"
Nov 12 18:06:17.284396 containerd[1544]: time="2024-11-12T18:06:17.284336077Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 497.895121ms"
Nov 12 18:06:17.425226 containerd[1544]: time="2024-11-12T18:06:17.425119240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 18:06:17.425226 containerd[1544]: time="2024-11-12T18:06:17.425174316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 18:06:17.425518 containerd[1544]: time="2024-11-12T18:06:17.425198138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 18:06:17.425518 containerd[1544]: time="2024-11-12T18:06:17.425347900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 18:06:17.426122 containerd[1544]: time="2024-11-12T18:06:17.425875046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 18:06:17.426122 containerd[1544]: time="2024-11-12T18:06:17.425918452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 18:06:17.426122 containerd[1544]: time="2024-11-12T18:06:17.425936678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 18:06:17.426122 containerd[1544]: time="2024-11-12T18:06:17.426013138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 18:06:17.426682 containerd[1544]: time="2024-11-12T18:06:17.426472177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 18:06:17.426682 containerd[1544]: time="2024-11-12T18:06:17.426525615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 18:06:17.426682 containerd[1544]: time="2024-11-12T18:06:17.426540284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 18:06:17.427835 containerd[1544]: time="2024-11-12T18:06:17.427740101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 18:06:17.474872 containerd[1544]: time="2024-11-12T18:06:17.474717450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5573aab090494ca264419ef57b004f26021e36d0571d1f35716a58bd394f738c\""
Nov 12 18:06:17.475775 containerd[1544]: time="2024-11-12T18:06:17.475720502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"030a44bf1f23a42ffb985747671fa86bc5ea818bc210473868d69120cf406dad\""
Nov 12 18:06:17.475847 kubelet[2354]: E1112 18:06:17.475822 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:17.477597 containerd[1544]: time="2024-11-12T18:06:17.476150165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:57708ffe6213785466d38730743c5f54,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb153c584e08626699321d1c73d8e45b101c9d5e5e07443e4a02166fce3901bd\""
Nov 12 18:06:17.477677 kubelet[2354]: E1112 18:06:17.477461 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:17.477677 kubelet[2354]: E1112 18:06:17.477533 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:17.478454 containerd[1544]: time="2024-11-12T18:06:17.478420861Z" level=info msg="CreateContainer within sandbox \"5573aab090494ca264419ef57b004f26021e36d0571d1f35716a58bd394f738c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 12 18:06:17.480023 containerd[1544]: time="2024-11-12T18:06:17.479994306Z" level=info msg="CreateContainer within sandbox \"030a44bf1f23a42ffb985747671fa86bc5ea818bc210473868d69120cf406dad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 12 18:06:17.481051 containerd[1544]: time="2024-11-12T18:06:17.481026415Z" level=info msg="CreateContainer within sandbox \"cb153c584e08626699321d1c73d8e45b101c9d5e5e07443e4a02166fce3901bd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 12 18:06:17.495406 containerd[1544]: time="2024-11-12T18:06:17.495362477Z" level=info msg="CreateContainer within sandbox \"5573aab090494ca264419ef57b004f26021e36d0571d1f35716a58bd394f738c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a3b7d90648df6703b341660fd0f8c29c21f155f2a773e617b8f4247a79ed01a9\""
Nov 12 18:06:17.496059 containerd[1544]: time="2024-11-12T18:06:17.496025756Z" level=info msg="StartContainer for \"a3b7d90648df6703b341660fd0f8c29c21f155f2a773e617b8f4247a79ed01a9\""
Nov 12 18:06:17.502541 containerd[1544]: time="2024-11-12T18:06:17.502497074Z" level=info msg="CreateContainer within sandbox \"030a44bf1f23a42ffb985747671fa86bc5ea818bc210473868d69120cf406dad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"81ea74c877e873665e197121779167b3d627d5bd6f03313db268b0d948570082\""
Nov 12 18:06:17.503191 containerd[1544]: time="2024-11-12T18:06:17.502929854Z" level=info msg="StartContainer for \"81ea74c877e873665e197121779167b3d627d5bd6f03313db268b0d948570082\""
Nov 12 18:06:17.504904 containerd[1544]: time="2024-11-12T18:06:17.504813735Z" level=info msg="CreateContainer within sandbox \"cb153c584e08626699321d1c73d8e45b101c9d5e5e07443e4a02166fce3901bd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fb0410d190fd75cfc864e35826173a13eff2af435de4dfe9c7d7d02e84e121ef\""
Nov 12 18:06:17.505238 containerd[1544]: time="2024-11-12T18:06:17.505218337Z" level=info msg="StartContainer for \"fb0410d190fd75cfc864e35826173a13eff2af435de4dfe9c7d7d02e84e121ef\""
Nov 12 18:06:17.567699 containerd[1544]: time="2024-11-12T18:06:17.567565416Z" level=info msg="StartContainer for \"fb0410d190fd75cfc864e35826173a13eff2af435de4dfe9c7d7d02e84e121ef\" returns successfully"
Nov 12 18:06:17.567699 containerd[1544]: time="2024-11-12T18:06:17.567588078Z" level=info msg="StartContainer for \"81ea74c877e873665e197121779167b3d627d5bd6f03313db268b0d948570082\" returns successfully"
Nov 12 18:06:17.567699 containerd[1544]: time="2024-11-12T18:06:17.567591435Z" level=info msg="StartContainer for \"a3b7d90648df6703b341660fd0f8c29c21f155f2a773e617b8f4247a79ed01a9\" returns successfully"
Nov 12 18:06:17.877160 kubelet[2354]: I1112 18:06:17.877072 2354 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 18:06:18.389065 kubelet[2354]: E1112 18:06:18.389039 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:18.391289 kubelet[2354]: E1112 18:06:18.390752 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:18.395465 kubelet[2354]: E1112 18:06:18.395437 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:18.707438 kubelet[2354]: E1112 18:06:18.707110 2354 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 12 18:06:18.794323 kubelet[2354]: I1112 18:06:18.794282 2354 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Nov 12 18:06:18.804337 kubelet[2354]: E1112 18:06:18.804307 2354 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 18:06:18.905307 kubelet[2354]: E1112 18:06:18.905271 2354 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 18:06:19.005944 kubelet[2354]: E1112 18:06:19.005843 2354 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 18:06:19.106491 kubelet[2354]: E1112 18:06:19.106455 2354 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 18:06:19.207181 kubelet[2354]: E1112 18:06:19.207145 2354 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 18:06:19.307775 kubelet[2354]: E1112 18:06:19.307678 2354 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 18:06:19.395341 kubelet[2354]: E1112 18:06:19.395319 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:19.407793 kubelet[2354]: E1112 18:06:19.407758 2354 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 18:06:19.565376 kubelet[2354]: E1112 18:06:19.564781 2354 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Nov 12 18:06:19.565376 kubelet[2354]: E1112 18:06:19.565184 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:20.361303 kubelet[2354]: I1112 18:06:20.361267 2354 apiserver.go:52] "Watching apiserver"
Nov 12 18:06:20.367357 kubelet[2354]: I1112 18:06:20.367311 2354 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Nov 12 18:06:20.615802 kubelet[2354]: E1112 18:06:20.615236 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:21.332527 systemd[1]: Reloading requested from client PID 2631 ('systemctl') (unit session-7.scope)...
Nov 12 18:06:21.332543 systemd[1]: Reloading...
Nov 12 18:06:21.391821 zram_generator::config[2673]: No configuration found.
Nov 12 18:06:21.397328 kubelet[2354]: E1112 18:06:21.397304 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:21.478195 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 18:06:21.533625 systemd[1]: Reloading finished in 200 ms.
Nov 12 18:06:21.561571 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 18:06:21.575688 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 18:06:21.576078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 18:06:21.583135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 18:06:21.668014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 18:06:21.671710 (kubelet)[2722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 18:06:21.710678 kubelet[2722]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 18:06:21.710678 kubelet[2722]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 18:06:21.710678 kubelet[2722]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 18:06:21.710678 kubelet[2722]: I1112 18:06:21.710284 2722 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 18:06:21.714276 kubelet[2722]: I1112 18:06:21.714062 2722 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Nov 12 18:06:21.714276 kubelet[2722]: I1112 18:06:21.714083 2722 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 18:06:21.714276 kubelet[2722]: I1112 18:06:21.714232 2722 server.go:919] "Client rotation is on, will bootstrap in background"
Nov 12 18:06:21.715728 kubelet[2722]: I1112 18:06:21.715670 2722 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 12 18:06:21.717461 kubelet[2722]: I1112 18:06:21.717328 2722 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 18:06:21.724257 kubelet[2722]: I1112 18:06:21.724230 2722 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 18:06:21.724641 kubelet[2722]: I1112 18:06:21.724589 2722 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 18:06:21.724753 kubelet[2722]: I1112 18:06:21.724735 2722 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 12 18:06:21.724864 kubelet[2722]: I1112 18:06:21.724759 2722 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 18:06:21.724864 kubelet[2722]: I1112 18:06:21.724768 2722 container_manager_linux.go:301] "Creating device plugin manager"
Nov 12 18:06:21.724864 kubelet[2722]: I1112 18:06:21.724827 2722 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 18:06:21.724956 kubelet[2722]: I1112 18:06:21.724914 2722 kubelet.go:396] "Attempting to sync node with API server"
Nov 12 18:06:21.724956 kubelet[2722]: I1112 18:06:21.724936 2722 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 18:06:21.724956 kubelet[2722]: I1112 18:06:21.724956 2722 kubelet.go:312] "Adding apiserver pod source"
Nov 12 18:06:21.725023 kubelet[2722]: I1112 18:06:21.724969 2722 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 18:06:21.726159 kubelet[2722]: I1112 18:06:21.726100 2722 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 12 18:06:21.726861 kubelet[2722]: I1112 18:06:21.726838 2722 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 18:06:21.727264 kubelet[2722]: I1112 18:06:21.727203 2722 server.go:1256] "Started kubelet"
Nov 12 18:06:21.727423 kubelet[2722]: I1112 18:06:21.727372 2722 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 18:06:21.727499 kubelet[2722]: I1112 18:06:21.727466 2722 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 18:06:21.727962 kubelet[2722]: I1112 18:06:21.727901 2722 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 18:06:21.729281 kubelet[2722]: I1112 18:06:21.729172 2722 server.go:461] "Adding debug handlers to kubelet server"
Nov 12 18:06:21.730966 kubelet[2722]: I1112 18:06:21.730946 2722 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 18:06:21.741374 kubelet[2722]: I1112 18:06:21.741350 2722 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 12 18:06:21.741828 kubelet[2722]: I1112 18:06:21.741682 2722 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Nov 12 18:06:21.741871 kubelet[2722]: I1112 18:06:21.741856 2722 reconciler_new.go:29] "Reconciler: start to sync state"
Nov 12 18:06:21.746056 kubelet[2722]: I1112 18:06:21.746032 2722 factory.go:221] Registration of the systemd container factory successfully
Nov 12 18:06:21.746199 kubelet[2722]: I1112 18:06:21.746181 2722 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 18:06:21.746813 kubelet[2722]: E1112 18:06:21.746781 2722 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 18:06:21.748496 kubelet[2722]: I1112 18:06:21.748475 2722 factory.go:221] Registration of the containerd container factory successfully
Nov 12 18:06:21.752641 kubelet[2722]: I1112 18:06:21.752593 2722 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 18:06:21.756371 kubelet[2722]: I1112 18:06:21.756304 2722 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 18:06:21.756371 kubelet[2722]: I1112 18:06:21.756328 2722 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 18:06:21.756371 kubelet[2722]: I1112 18:06:21.756342 2722 kubelet.go:2329] "Starting kubelet main sync loop"
Nov 12 18:06:21.757427 kubelet[2722]: E1112 18:06:21.756882 2722 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 18:06:21.792210 kubelet[2722]: I1112 18:06:21.792187 2722 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 18:06:21.792437 kubelet[2722]: I1112 18:06:21.792425 2722 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 18:06:21.792553 kubelet[2722]: I1112 18:06:21.792531 2722 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 18:06:21.792834 kubelet[2722]: I1112 18:06:21.792819 2722 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 12 18:06:21.792920 kubelet[2722]: I1112 18:06:21.792909 2722 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 12 18:06:21.792998 kubelet[2722]: I1112 18:06:21.792987 2722 policy_none.go:49] "None policy: Start"
Nov 12 18:06:21.793722 kubelet[2722]: I1112 18:06:21.793693 2722 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 18:06:21.793722 kubelet[2722]: I1112 18:06:21.793721 2722 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 18:06:21.793885 kubelet[2722]: I1112 18:06:21.793863 2722 state_mem.go:75] "Updated machine memory state"
Nov 12 18:06:21.798043 kubelet[2722]: I1112 18:06:21.797380 2722 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 18:06:21.798043 kubelet[2722]: I1112 18:06:21.797686 2722 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 18:06:21.845148 kubelet[2722]: I1112 18:06:21.844843 2722 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 18:06:21.850642 kubelet[2722]: I1112 18:06:21.850607 2722 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Nov 12 18:06:21.850711 kubelet[2722]: I1112 18:06:21.850677 2722 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Nov 12 18:06:21.857460 kubelet[2722]: I1112 18:06:21.857409 2722 topology_manager.go:215] "Topology Admit Handler" podUID="57708ffe6213785466d38730743c5f54" podNamespace="kube-system" podName="kube-apiserver-localhost"
Nov 12 18:06:21.857531 kubelet[2722]: I1112 18:06:21.857487 2722 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Nov 12 18:06:21.857615 kubelet[2722]: I1112 18:06:21.857537 2722 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Nov 12 18:06:21.864545 kubelet[2722]: E1112 18:06:21.863990 2722 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 12 18:06:21.943426 kubelet[2722]: I1112 18:06:21.943399 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 18:06:21.943556 kubelet[2722]: I1112 18:06:21.943543 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 18:06:21.943623 kubelet[2722]: I1112 18:06:21.943615 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 18:06:21.943712 kubelet[2722]: I1112 18:06:21.943700 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 18:06:21.943782 kubelet[2722]: I1112 18:06:21.943774 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57708ffe6213785466d38730743c5f54-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"57708ffe6213785466d38730743c5f54\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 18:06:21.944067 kubelet[2722]: I1112 18:06:21.943878 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57708ffe6213785466d38730743c5f54-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"57708ffe6213785466d38730743c5f54\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 18:06:21.944067 kubelet[2722]: I1112 18:06:21.943904 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57708ffe6213785466d38730743c5f54-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"57708ffe6213785466d38730743c5f54\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 18:06:21.944067 kubelet[2722]: I1112 18:06:21.943922 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 18:06:21.944067 kubelet[2722]: I1112 18:06:21.943941 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost"
Nov 12 18:06:22.165582 kubelet[2722]: E1112 18:06:22.165298 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:22.165582 kubelet[2722]: E1112 18:06:22.165375 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:22.166366 kubelet[2722]: E1112 18:06:22.166348 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:22.726813 kubelet[2722]: I1112 18:06:22.726757 2722 apiserver.go:52] "Watching apiserver"
Nov 12 18:06:22.742304 kubelet[2722]: I1112 18:06:22.742266 2722 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Nov 12 18:06:22.766759 kubelet[2722]: E1112 18:06:22.766314 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:22.766759 kubelet[2722]: E1112 18:06:22.766689 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:22.771251 kubelet[2722]: E1112 18:06:22.771150 2722 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 12 18:06:22.772125 kubelet[2722]: E1112 18:06:22.771561 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:22.791486 kubelet[2722]: I1112 18:06:22.791435 2722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.791378895 podStartE2EDuration="1.791378895s" podCreationTimestamp="2024-11-12 18:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:06:22.784895025 +0000 UTC m=+1.109878361" watchObservedRunningTime="2024-11-12 18:06:22.791378895 +0000 UTC m=+1.116362231"
Nov 12 18:06:22.798266 kubelet[2722]: I1112 18:06:22.798233 2722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.798205266 podStartE2EDuration="2.798205266s" podCreationTimestamp="2024-11-12 18:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:06:22.791542909 +0000 UTC m=+1.116526245" watchObservedRunningTime="2024-11-12 18:06:22.798205266 +0000 UTC m=+1.123188602"
Nov 12 18:06:23.767680 kubelet[2722]: E1112 18:06:23.767649 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:25.998837 sudo[1753]: pam_unix(sudo:session): session closed for user root
Nov 12 18:06:26.000978 sshd[1747]: pam_unix(sshd:session): session closed for user core
Nov 12 18:06:26.004395 systemd-logind[1523]: Session 7 logged out. Waiting for processes to exit.
Nov 12 18:06:26.004594 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:34676.service: Deactivated successfully.
Nov 12 18:06:26.006422 systemd[1]: session-7.scope: Deactivated successfully.
Nov 12 18:06:26.007360 systemd-logind[1523]: Removed session 7.
Nov 12 18:06:26.405109 kubelet[2722]: E1112 18:06:26.405058 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:31.877450 kubelet[2722]: E1112 18:06:31.877366 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:31.891362 kubelet[2722]: I1112 18:06:31.891295 2722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=10.891261739 podStartE2EDuration="10.891261739s" podCreationTimestamp="2024-11-12 18:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:06:22.798509024 +0000 UTC m=+1.123492360" watchObservedRunningTime="2024-11-12 18:06:31.891261739 +0000 UTC m=+10.216245075"
Nov 12 18:06:32.536904 kubelet[2722]: E1112 18:06:32.536759 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:32.779918 kubelet[2722]: E1112 18:06:32.779554 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:33.317255 kubelet[2722]: I1112 18:06:33.317209 2722 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 12 18:06:33.338637 containerd[1544]: time="2024-11-12T18:06:33.338562622Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 12 18:06:33.339040 kubelet[2722]: I1112 18:06:33.338876 2722 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 12 18:06:34.009577 kubelet[2722]: I1112 18:06:34.007770 2722 topology_manager.go:215] "Topology Admit Handler" podUID="b741a26e-5d4e-40e3-84e2-7d124eeb6874" podNamespace="kube-system" podName="kube-proxy-47bpz"
Nov 12 18:06:34.115655 kubelet[2722]: I1112 18:06:34.115610 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b741a26e-5d4e-40e3-84e2-7d124eeb6874-xtables-lock\") pod \"kube-proxy-47bpz\" (UID: \"b741a26e-5d4e-40e3-84e2-7d124eeb6874\") " pod="kube-system/kube-proxy-47bpz"
Nov 12 18:06:34.115655 kubelet[2722]: I1112 18:06:34.115656 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttrb9\" (UniqueName: \"kubernetes.io/projected/b741a26e-5d4e-40e3-84e2-7d124eeb6874-kube-api-access-ttrb9\") pod \"kube-proxy-47bpz\" (UID: \"b741a26e-5d4e-40e3-84e2-7d124eeb6874\") " pod="kube-system/kube-proxy-47bpz"
Nov 12 18:06:34.115828 kubelet[2722]: I1112 18:06:34.115678 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b741a26e-5d4e-40e3-84e2-7d124eeb6874-lib-modules\") pod \"kube-proxy-47bpz\" (UID: \"b741a26e-5d4e-40e3-84e2-7d124eeb6874\") " pod="kube-system/kube-proxy-47bpz"
Nov 12 18:06:34.115828 kubelet[2722]: I1112 18:06:34.115712 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b741a26e-5d4e-40e3-84e2-7d124eeb6874-kube-proxy\") pod \"kube-proxy-47bpz\" (UID: \"b741a26e-5d4e-40e3-84e2-7d124eeb6874\") " pod="kube-system/kube-proxy-47bpz"
Nov 12 18:06:34.313916 kubelet[2722]: E1112 18:06:34.313770 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:34.314464 containerd[1544]: time="2024-11-12T18:06:34.314379469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47bpz,Uid:b741a26e-5d4e-40e3-84e2-7d124eeb6874,Namespace:kube-system,Attempt:0,}"
Nov 12 18:06:34.331628 containerd[1544]: time="2024-11-12T18:06:34.331256338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 18:06:34.331819 containerd[1544]: time="2024-11-12T18:06:34.331639981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 18:06:34.331819 containerd[1544]: time="2024-11-12T18:06:34.331656982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 18:06:34.331894 containerd[1544]: time="2024-11-12T18:06:34.331760742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 18:06:34.359803 containerd[1544]: time="2024-11-12T18:06:34.359744469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47bpz,Uid:b741a26e-5d4e-40e3-84e2-7d124eeb6874,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bf171e053d30e8d6feaacb1aa20af60da9adcab97b0d619e73b794b780500c3\""
Nov 12 18:06:34.363387 kubelet[2722]: E1112 18:06:34.363359 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:34.365207 containerd[1544]: time="2024-11-12T18:06:34.365155757Z" level=info msg="CreateContainer within sandbox \"9bf171e053d30e8d6feaacb1aa20af60da9adcab97b0d619e73b794b780500c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 12 18:06:34.376837 containerd[1544]: time="2024-11-12T18:06:34.376798740Z" level=info msg="CreateContainer within sandbox \"9bf171e053d30e8d6feaacb1aa20af60da9adcab97b0d619e73b794b780500c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"76a65303136b7df2b7b4a731fedf9b1a758a3f2e57776685063cfe479aebce27\""
Nov 12 18:06:34.378209 containerd[1544]: time="2024-11-12T18:06:34.377281464Z" level=info msg="StartContainer for \"76a65303136b7df2b7b4a731fedf9b1a758a3f2e57776685063cfe479aebce27\""
Nov 12 18:06:34.425545 kubelet[2722]: I1112 18:06:34.425480 2722 topology_manager.go:215] "Topology Admit Handler" podUID="05eda405-25a2-4cbd-9884-b2d59cf15838" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-56jwr"
Nov 12 18:06:34.464041 containerd[1544]: time="2024-11-12T18:06:34.464000110Z" level=info msg="StartContainer for \"76a65303136b7df2b7b4a731fedf9b1a758a3f2e57776685063cfe479aebce27\" returns successfully"
Nov 12 18:06:34.619424 kubelet[2722]: I1112 18:06:34.619267 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m985f\" (UniqueName: \"kubernetes.io/projected/05eda405-25a2-4cbd-9884-b2d59cf15838-kube-api-access-m985f\") pod \"tigera-operator-56b74f76df-56jwr\" (UID: \"05eda405-25a2-4cbd-9884-b2d59cf15838\") " pod="tigera-operator/tigera-operator-56b74f76df-56jwr"
Nov 12 18:06:34.619424 kubelet[2722]: I1112 18:06:34.619321 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05eda405-25a2-4cbd-9884-b2d59cf15838-var-lib-calico\") pod \"tigera-operator-56b74f76df-56jwr\" (UID: \"05eda405-25a2-4cbd-9884-b2d59cf15838\") " pod="tigera-operator/tigera-operator-56b74f76df-56jwr"
Nov 12 18:06:34.783597 kubelet[2722]: E1112 18:06:34.783546 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:34.792528 kubelet[2722]: I1112 18:06:34.792477 2722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-47bpz" podStartSLOduration=0.792440368 podStartE2EDuration="792.440368ms" podCreationTimestamp="2024-11-12 18:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:06:34.792325567 +0000 UTC m=+13.117308903" watchObservedRunningTime="2024-11-12 18:06:34.792440368 +0000 UTC m=+13.117423784"
Nov 12 18:06:35.030999 containerd[1544]: time="2024-11-12T18:06:35.030963780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-56jwr,Uid:05eda405-25a2-4cbd-9884-b2d59cf15838,Namespace:tigera-operator,Attempt:0,}"
Nov 12 18:06:35.050270 containerd[1544]: time="2024-11-12T18:06:35.050101341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 18:06:35.050827 containerd[1544]: time="2024-11-12T18:06:35.050620585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 18:06:35.050827 containerd[1544]: time="2024-11-12T18:06:35.050643305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 18:06:35.050827 containerd[1544]: time="2024-11-12T18:06:35.050730386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 18:06:35.089771 containerd[1544]: time="2024-11-12T18:06:35.089738913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-56jwr,Uid:05eda405-25a2-4cbd-9884-b2d59cf15838,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b39c206eae9dbefaae0b7810d848e594b03a4599667c4adf26bce8f72ffe4071\""
Nov 12 18:06:35.090960 containerd[1544]: time="2024-11-12T18:06:35.090935123Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\""
Nov 12 18:06:36.154224 update_engine[1524]: I20241112 18:06:36.154143 1524 update_attempter.cc:509] Updating boot flags...
Nov 12 18:06:36.181806 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3058)
Nov 12 18:06:36.197458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3059)
Nov 12 18:06:36.218821 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3059)
Nov 12 18:06:36.415620 kubelet[2722]: E1112 18:06:36.415488 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:38.017174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296616023.mount: Deactivated successfully.
Nov 12 18:06:38.387922 containerd[1544]: time="2024-11-12T18:06:38.387826671Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 18:06:38.388858 containerd[1544]: time="2024-11-12T18:06:38.388829758Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=19123633"
Nov 12 18:06:38.389700 containerd[1544]: time="2024-11-12T18:06:38.389668884Z" level=info msg="ImageCreate event name:\"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 18:06:38.391849 containerd[1544]: time="2024-11-12T18:06:38.391823059Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 18:06:38.393396 containerd[1544]: time="2024-11-12T18:06:38.393359270Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"19117824\" in 3.302396347s"
Nov 12 18:06:38.393441 containerd[1544]: time="2024-11-12T18:06:38.393392791Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\""
Nov 12 18:06:38.414577 containerd[1544]: time="2024-11-12T18:06:38.414540783Z" level=info msg="CreateContainer within sandbox \"b39c206eae9dbefaae0b7810d848e594b03a4599667c4adf26bce8f72ffe4071\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 12 18:06:38.423028 containerd[1544]: time="2024-11-12T18:06:38.422998244Z" level=info msg="CreateContainer within sandbox \"b39c206eae9dbefaae0b7810d848e594b03a4599667c4adf26bce8f72ffe4071\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"580e3abe4f7cef5e2ecfd484f59cde8ac04b1c9d2da7d1872ae9f2fed629f56f\""
Nov 12 18:06:38.423503 containerd[1544]: time="2024-11-12T18:06:38.423481208Z" level=info msg="StartContainer for \"580e3abe4f7cef5e2ecfd484f59cde8ac04b1c9d2da7d1872ae9f2fed629f56f\""
Nov 12 18:06:38.465296 containerd[1544]: time="2024-11-12T18:06:38.465094387Z" level=info msg="StartContainer for \"580e3abe4f7cef5e2ecfd484f59cde8ac04b1c9d2da7d1872ae9f2fed629f56f\" returns successfully"
Nov 12 18:06:38.803748 kubelet[2722]: I1112 18:06:38.803711 2722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-56jwr" podStartSLOduration=1.499114384 podStartE2EDuration="4.803671107s" podCreationTimestamp="2024-11-12 18:06:34 +0000 UTC" firstStartedPulling="2024-11-12 18:06:35.09054504 +0000 UTC m=+13.415528336" lastFinishedPulling="2024-11-12 18:06:38.395101723 +0000 UTC m=+16.720085059" observedRunningTime="2024-11-12 18:06:38.803553986 +0000 UTC m=+17.128537322" watchObservedRunningTime="2024-11-12 18:06:38.803671107 +0000 UTC m=+17.128654443"
Nov 12 18:06:38.992282 systemd[1]: run-containerd-runc-k8s.io-580e3abe4f7cef5e2ecfd484f59cde8ac04b1c9d2da7d1872ae9f2fed629f56f-runc.wdke3i.mount: Deactivated successfully.
Nov 12 18:06:42.400461 kubelet[2722]: I1112 18:06:42.400408 2722 topology_manager.go:215] "Topology Admit Handler" podUID="45e92cfb-32ef-44dc-b739-40b99874c319" podNamespace="calico-system" podName="calico-typha-74cb6dc955-lmc66"
Nov 12 18:06:42.444963 kubelet[2722]: I1112 18:06:42.444042 2722 topology_manager.go:215] "Topology Admit Handler" podUID="d8420ff5-190b-4472-b457-8ab7b3d6a22a" podNamespace="calico-system" podName="calico-node-fdlgs"
Nov 12 18:06:42.557851 kubelet[2722]: I1112 18:06:42.556184 2722 topology_manager.go:215] "Topology Admit Handler" podUID="a282df54-f6aa-450e-a3f8-2feaec5bf123" podNamespace="calico-system" podName="csi-node-driver-g9kgv"
Nov 12 18:06:42.557851 kubelet[2722]: E1112 18:06:42.556458 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g9kgv" podUID="a282df54-f6aa-450e-a3f8-2feaec5bf123"
Nov 12 18:06:42.572464 kubelet[2722]: I1112 18:06:42.572019 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d8420ff5-190b-4472-b457-8ab7b3d6a22a-cni-bin-dir\") pod \"calico-node-fdlgs\" (UID: \"d8420ff5-190b-4472-b457-8ab7b3d6a22a\") " pod="calico-system/calico-node-fdlgs"
Nov 12 18:06:42.572464 kubelet[2722]: I1112 18:06:42.572063 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d8420ff5-190b-4472-b457-8ab7b3d6a22a-cni-log-dir\") pod \"calico-node-fdlgs\" (UID: \"d8420ff5-190b-4472-b457-8ab7b3d6a22a\") " pod="calico-system/calico-node-fdlgs"
Nov 12 18:06:42.572464 kubelet[2722]: I1112 18:06:42.572087 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/45e92cfb-32ef-44dc-b739-40b99874c319-typha-certs\") pod \"calico-typha-74cb6dc955-lmc66\" (UID: \"45e92cfb-32ef-44dc-b739-40b99874c319\") " pod="calico-system/calico-typha-74cb6dc955-lmc66"
Nov 12 18:06:42.572464 kubelet[2722]: I1112 18:06:42.572109 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d8420ff5-190b-4472-b457-8ab7b3d6a22a-var-lib-calico\") pod \"calico-node-fdlgs\" (UID: \"d8420ff5-190b-4472-b457-8ab7b3d6a22a\") " pod="calico-system/calico-node-fdlgs"
Nov 12 18:06:42.572464 kubelet[2722]: I1112 18:06:42.572137 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2fhl\" (UniqueName: \"kubernetes.io/projected/45e92cfb-32ef-44dc-b739-40b99874c319-kube-api-access-h2fhl\") pod \"calico-typha-74cb6dc955-lmc66\" (UID: \"45e92cfb-32ef-44dc-b739-40b99874c319\") " pod="calico-system/calico-typha-74cb6dc955-lmc66"
Nov 12 18:06:42.572829 kubelet[2722]: I1112 18:06:42.572166 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8420ff5-190b-4472-b457-8ab7b3d6a22a-tigera-ca-bundle\") pod \"calico-node-fdlgs\" (UID: \"d8420ff5-190b-4472-b457-8ab7b3d6a22a\") " pod="calico-system/calico-node-fdlgs"
Nov 12 18:06:42.572829 kubelet[2722]: I1112 18:06:42.572192 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d8420ff5-190b-4472-b457-8ab7b3d6a22a-cni-net-dir\") pod \"calico-node-fdlgs\" (UID: \"d8420ff5-190b-4472-b457-8ab7b3d6a22a\") " pod="calico-system/calico-node-fdlgs"
Nov 12 18:06:42.572829 kubelet[2722]: I1112 18:06:42.572212 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45e92cfb-32ef-44dc-b739-40b99874c319-tigera-ca-bundle\") pod \"calico-typha-74cb6dc955-lmc66\" (UID: \"45e92cfb-32ef-44dc-b739-40b99874c319\") " pod="calico-system/calico-typha-74cb6dc955-lmc66"
Nov 12 18:06:42.572829 kubelet[2722]: I1112 18:06:42.572232 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8420ff5-190b-4472-b457-8ab7b3d6a22a-xtables-lock\") pod \"calico-node-fdlgs\" (UID: \"d8420ff5-190b-4472-b457-8ab7b3d6a22a\") " pod="calico-system/calico-node-fdlgs"
Nov 12 18:06:42.572829 kubelet[2722]: I1112 18:06:42.572251 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d8420ff5-190b-4472-b457-8ab7b3d6a22a-policysync\") pod \"calico-node-fdlgs\" (UID: \"d8420ff5-190b-4472-b457-8ab7b3d6a22a\") " pod="calico-system/calico-node-fdlgs"
Nov 12 18:06:42.573042 kubelet[2722]: I1112 18:06:42.572270 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d8420ff5-190b-4472-b457-8ab7b3d6a22a-var-run-calico\") pod \"calico-node-fdlgs\" (UID: \"d8420ff5-190b-4472-b457-8ab7b3d6a22a\") " pod="calico-system/calico-node-fdlgs"
Nov 12 18:06:42.573042 kubelet[2722]: I1112 18:06:42.572290 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d8420ff5-190b-4472-b457-8ab7b3d6a22a-flexvol-driver-host\") pod \"calico-node-fdlgs\" (UID: \"d8420ff5-190b-4472-b457-8ab7b3d6a22a\") " pod="calico-system/calico-node-fdlgs"
Nov 12 18:06:42.573042 kubelet[2722]: I1112 18:06:42.572309 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52f44\" (UniqueName: \"kubernetes.io/projected/d8420ff5-190b-4472-b457-8ab7b3d6a22a-kube-api-access-52f44\") pod \"calico-node-fdlgs\" (UID: \"d8420ff5-190b-4472-b457-8ab7b3d6a22a\") " pod="calico-system/calico-node-fdlgs"
Nov 12 18:06:42.573042 kubelet[2722]: I1112 18:06:42.572329 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8420ff5-190b-4472-b457-8ab7b3d6a22a-lib-modules\") pod \"calico-node-fdlgs\" (UID: \"d8420ff5-190b-4472-b457-8ab7b3d6a22a\") " pod="calico-system/calico-node-fdlgs"
Nov 12 18:06:42.573042 kubelet[2722]: I1112 18:06:42.572349 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d8420ff5-190b-4472-b457-8ab7b3d6a22a-node-certs\") pod \"calico-node-fdlgs\" (UID: \"d8420ff5-190b-4472-b457-8ab7b3d6a22a\") " pod="calico-system/calico-node-fdlgs"
Nov 12 18:06:42.673237 kubelet[2722]: I1112 18:06:42.673120 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a282df54-f6aa-450e-a3f8-2feaec5bf123-kubelet-dir\") pod \"csi-node-driver-g9kgv\" (UID: \"a282df54-f6aa-450e-a3f8-2feaec5bf123\") " pod="calico-system/csi-node-driver-g9kgv"
Nov 12 18:06:42.674047 kubelet[2722]: I1112 18:06:42.673462 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a282df54-f6aa-450e-a3f8-2feaec5bf123-socket-dir\") pod \"csi-node-driver-g9kgv\" (UID: \"a282df54-f6aa-450e-a3f8-2feaec5bf123\") " pod="calico-system/csi-node-driver-g9kgv"
Nov 12 18:06:42.674047 kubelet[2722]: I1112 18:06:42.673619 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a282df54-f6aa-450e-a3f8-2feaec5bf123-varrun\") pod \"csi-node-driver-g9kgv\" (UID: \"a282df54-f6aa-450e-a3f8-2feaec5bf123\") " pod="calico-system/csi-node-driver-g9kgv"
Nov 12 18:06:42.674047 kubelet[2722]: I1112 18:06:42.673768 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s86zs\" (UniqueName: \"kubernetes.io/projected/a282df54-f6aa-450e-a3f8-2feaec5bf123-kube-api-access-s86zs\") pod \"csi-node-driver-g9kgv\" (UID: \"a282df54-f6aa-450e-a3f8-2feaec5bf123\") " pod="calico-system/csi-node-driver-g9kgv"
Nov 12 18:06:42.674047 kubelet[2722]: I1112 18:06:42.673934 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a282df54-f6aa-450e-a3f8-2feaec5bf123-registration-dir\") pod \"csi-node-driver-g9kgv\" (UID: \"a282df54-f6aa-450e-a3f8-2feaec5bf123\") " pod="calico-system/csi-node-driver-g9kgv"
Nov 12 18:06:42.682292 kubelet[2722]: E1112 18:06:42.682267 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 18:06:42.682401 kubelet[2722]: W1112 18:06:42.682385 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 18:06:42.682464 kubelet[2722]: E1112 18:06:42.682454 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 18:06:42.682841 kubelet[2722]: E1112 18:06:42.682759 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 18:06:42.682841 kubelet[2722]: W1112 18:06:42.682772 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 18:06:42.682841 kubelet[2722]: E1112 18:06:42.682801 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 18:06:42.683520 kubelet[2722]: E1112 18:06:42.683337 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 18:06:42.683520 kubelet[2722]: W1112 18:06:42.683459 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 18:06:42.683520 kubelet[2722]: E1112 18:06:42.683475 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 18:06:42.686242 kubelet[2722]: E1112 18:06:42.686199 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 18:06:42.686242 kubelet[2722]: W1112 18:06:42.686219 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 18:06:42.686242 kubelet[2722]: E1112 18:06:42.686234 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 18:06:42.689942 kubelet[2722]: E1112 18:06:42.689924 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 18:06:42.689942 kubelet[2722]: W1112 18:06:42.689940 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 18:06:42.690057 kubelet[2722]: E1112 18:06:42.689956 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 18:06:42.714225 kubelet[2722]: E1112 18:06:42.714197 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:42.714778 containerd[1544]: time="2024-11-12T18:06:42.714735369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74cb6dc955-lmc66,Uid:45e92cfb-32ef-44dc-b739-40b99874c319,Namespace:calico-system,Attempt:0,}"
Nov 12 18:06:42.751798 kubelet[2722]: E1112 18:06:42.751592 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 18:06:42.752157 containerd[1544]: time="2024-11-12T18:06:42.752098071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fdlgs,Uid:d8420ff5-190b-4472-b457-8ab7b3d6a22a,Namespace:calico-system,Attempt:0,}"
Nov 12 18:06:42.774676 kubelet[2722]: E1112 18:06:42.774612 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 18:06:42.774676 kubelet[2722]: W1112 18:06:42.774631 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 18:06:42.774676 kubelet[2722]: E1112 18:06:42.774671 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 18:06:42.775043 kubelet[2722]: E1112 18:06:42.774934 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 18:06:42.775043 kubelet[2722]: W1112 18:06:42.774944 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 18:06:42.775043 kubelet[2722]: E1112 18:06:42.774973 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 18:06:42.775710 kubelet[2722]: E1112 18:06:42.775388 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 18:06:42.775710 kubelet[2722]: W1112 18:06:42.775399 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 18:06:42.775710 kubelet[2722]: E1112 18:06:42.775412 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 12 18:06:42.775710 kubelet[2722]: E1112 18:06:42.775573 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.775710 kubelet[2722]: W1112 18:06:42.775581 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.775710 kubelet[2722]: E1112 18:06:42.775596 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.775863 kubelet[2722]: E1112 18:06:42.775826 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.775863 kubelet[2722]: W1112 18:06:42.775835 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.775863 kubelet[2722]: E1112 18:06:42.775856 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.776353 kubelet[2722]: E1112 18:06:42.776160 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.776353 kubelet[2722]: W1112 18:06:42.776173 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.776353 kubelet[2722]: E1112 18:06:42.776197 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.776708 kubelet[2722]: E1112 18:06:42.776694 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.776708 kubelet[2722]: W1112 18:06:42.776708 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.776818 kubelet[2722]: E1112 18:06:42.776776 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.776933 kubelet[2722]: E1112 18:06:42.776923 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.776965 kubelet[2722]: W1112 18:06:42.776933 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.777043 kubelet[2722]: E1112 18:06:42.777028 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 18:06:42.777257 kubelet[2722]: E1112 18:06:42.777190 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.777257 kubelet[2722]: W1112 18:06:42.777204 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.777320 kubelet[2722]: E1112 18:06:42.777313 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.777553 kubelet[2722]: E1112 18:06:42.777496 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.777553 kubelet[2722]: W1112 18:06:42.777508 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.777661 kubelet[2722]: E1112 18:06:42.777645 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.777779 kubelet[2722]: E1112 18:06:42.777769 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.777779 kubelet[2722]: W1112 18:06:42.777779 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.777925 kubelet[2722]: E1112 18:06:42.777892 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.778340 kubelet[2722]: E1112 18:06:42.778325 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.778340 kubelet[2722]: W1112 18:06:42.778340 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.778400 kubelet[2722]: E1112 18:06:42.778354 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.786595 kubelet[2722]: E1112 18:06:42.786565 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.786595 kubelet[2722]: W1112 18:06:42.786617 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.787450 kubelet[2722]: E1112 18:06:42.786864 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 18:06:42.787450 kubelet[2722]: E1112 18:06:42.787065 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.787450 kubelet[2722]: W1112 18:06:42.787081 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.787450 kubelet[2722]: E1112 18:06:42.787115 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.787450 kubelet[2722]: E1112 18:06:42.787320 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.787450 kubelet[2722]: W1112 18:06:42.787329 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.787450 kubelet[2722]: E1112 18:06:42.787370 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.787760 kubelet[2722]: E1112 18:06:42.787639 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.787760 kubelet[2722]: W1112 18:06:42.787652 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.787760 kubelet[2722]: E1112 18:06:42.787683 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.787954 kubelet[2722]: E1112 18:06:42.787829 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.787954 kubelet[2722]: W1112 18:06:42.787838 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.787954 kubelet[2722]: E1112 18:06:42.787862 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.788148 kubelet[2722]: E1112 18:06:42.787995 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.788148 kubelet[2722]: W1112 18:06:42.788003 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.788148 kubelet[2722]: E1112 18:06:42.788021 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 18:06:42.788951 kubelet[2722]: E1112 18:06:42.788388 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.788951 kubelet[2722]: W1112 18:06:42.788404 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.788951 kubelet[2722]: E1112 18:06:42.788422 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.788951 kubelet[2722]: E1112 18:06:42.788607 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.788951 kubelet[2722]: W1112 18:06:42.788616 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.788951 kubelet[2722]: E1112 18:06:42.788627 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.789118 kubelet[2722]: E1112 18:06:42.789078 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.789118 kubelet[2722]: W1112 18:06:42.789090 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.789118 kubelet[2722]: E1112 18:06:42.789105 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.791643 kubelet[2722]: E1112 18:06:42.789397 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.791643 kubelet[2722]: W1112 18:06:42.789407 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.791643 kubelet[2722]: E1112 18:06:42.789601 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.791643 kubelet[2722]: E1112 18:06:42.789756 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.791643 kubelet[2722]: W1112 18:06:42.789765 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.791643 kubelet[2722]: E1112 18:06:42.789780 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 18:06:42.791643 kubelet[2722]: E1112 18:06:42.790136 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.791643 kubelet[2722]: W1112 18:06:42.790147 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.791643 kubelet[2722]: E1112 18:06:42.790188 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.791643 kubelet[2722]: E1112 18:06:42.790489 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.792877 kubelet[2722]: W1112 18:06:42.790501 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.792877 kubelet[2722]: E1112 18:06:42.790515 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.792945 containerd[1544]: time="2024-11-12T18:06:42.792191750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:06:42.792945 containerd[1544]: time="2024-11-12T18:06:42.792249390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:06:42.792945 containerd[1544]: time="2024-11-12T18:06:42.792273270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:06:42.792945 containerd[1544]: time="2024-11-12T18:06:42.792360231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:06:42.801705 kubelet[2722]: E1112 18:06:42.801650 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 18:06:42.801705 kubelet[2722]: W1112 18:06:42.801692 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 18:06:42.801705 kubelet[2722]: E1112 18:06:42.801711 2722 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 18:06:42.812037 containerd[1544]: time="2024-11-12T18:06:42.811725066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:06:42.812037 containerd[1544]: time="2024-11-12T18:06:42.811779267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:06:42.812037 containerd[1544]: time="2024-11-12T18:06:42.811806147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:06:42.812037 containerd[1544]: time="2024-11-12T18:06:42.811886787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:06:42.850135 containerd[1544]: time="2024-11-12T18:06:42.850078055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74cb6dc955-lmc66,Uid:45e92cfb-32ef-44dc-b739-40b99874c319,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ad988afbea06a72e9ed7f59dde509aac344264ecab255125efcd37ca210523c\"" Nov 12 18:06:42.851936 containerd[1544]: time="2024-11-12T18:06:42.851743184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fdlgs,Uid:d8420ff5-190b-4472-b457-8ab7b3d6a22a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b642b392cd185e54775dbcd4f420b35eeb59a60b67f0a6d7dc54401df0071335\"" Nov 12 18:06:42.852050 kubelet[2722]: E1112 18:06:42.851968 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:42.852769 kubelet[2722]: E1112 18:06:42.852508 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:42.855689 containerd[1544]: time="2024-11-12T18:06:42.855664808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 18:06:43.914400 containerd[1544]: time="2024-11-12T18:06:43.914358266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:43.915704 containerd[1544]: time="2024-11-12T18:06:43.914759028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5117816" Nov 12 18:06:43.915832 containerd[1544]: time="2024-11-12T18:06:43.915779914Z" level=info msg="ImageCreate event name:\"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:43.917564 containerd[1544]: time="2024-11-12T18:06:43.917525924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:43.918434 containerd[1544]: time="2024-11-12T18:06:43.918405969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6487412\" in 1.062708281s" Nov 12 18:06:43.918529 containerd[1544]: time="2024-11-12T18:06:43.918511250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\"" Nov 12 18:06:43.919324 containerd[1544]: time="2024-11-12T18:06:43.919301534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 18:06:43.921075 containerd[1544]: time="2024-11-12T18:06:43.921053304Z" level=info msg="CreateContainer within sandbox 
\"b642b392cd185e54775dbcd4f420b35eeb59a60b67f0a6d7dc54401df0071335\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 18:06:43.940117 containerd[1544]: time="2024-11-12T18:06:43.940075412Z" level=info msg="CreateContainer within sandbox \"b642b392cd185e54775dbcd4f420b35eeb59a60b67f0a6d7dc54401df0071335\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0cd7803a6c9b35ca25fdc386eb16424cd2f247b6e4a9cf6ccb35c56e8efa216f\"" Nov 12 18:06:43.940462 containerd[1544]: time="2024-11-12T18:06:43.940440454Z" level=info msg="StartContainer for \"0cd7803a6c9b35ca25fdc386eb16424cd2f247b6e4a9cf6ccb35c56e8efa216f\"" Nov 12 18:06:43.993994 containerd[1544]: time="2024-11-12T18:06:43.993908878Z" level=info msg="StartContainer for \"0cd7803a6c9b35ca25fdc386eb16424cd2f247b6e4a9cf6ccb35c56e8efa216f\" returns successfully" Nov 12 18:06:44.074597 containerd[1544]: time="2024-11-12T18:06:44.074436198Z" level=info msg="shim disconnected" id=0cd7803a6c9b35ca25fdc386eb16424cd2f247b6e4a9cf6ccb35c56e8efa216f namespace=k8s.io Nov 12 18:06:44.074597 containerd[1544]: time="2024-11-12T18:06:44.074486318Z" level=warning msg="cleaning up after shim disconnected" id=0cd7803a6c9b35ca25fdc386eb16424cd2f247b6e4a9cf6ccb35c56e8efa216f namespace=k8s.io Nov 12 18:06:44.074597 containerd[1544]: time="2024-11-12T18:06:44.074494158Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:06:44.677927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cd7803a6c9b35ca25fdc386eb16424cd2f247b6e4a9cf6ccb35c56e8efa216f-rootfs.mount: Deactivated successfully. Nov 12 18:06:44.757434 kubelet[2722]: E1112 18:06:44.757362 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g9kgv" podUID="a282df54-f6aa-450e-a3f8-2feaec5bf123" Nov 12 18:06:44.806870 kubelet[2722]: E1112 18:06:44.806837 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:45.601461 containerd[1544]: time="2024-11-12T18:06:45.601405639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:45.601968 containerd[1544]: time="2024-11-12T18:06:45.601928282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=27849584" Nov 12 18:06:45.602638 containerd[1544]: time="2024-11-12T18:06:45.602600485Z" level=info msg="ImageCreate event name:\"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:45.605030 containerd[1544]: time="2024-11-12T18:06:45.604994938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:45.605626 containerd[1544]: time="2024-11-12T18:06:45.605586141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"29219212\" in 1.686169606s" Nov 12 18:06:45.605656 containerd[1544]: time="2024-11-12T18:06:45.605626261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\"" Nov 12 18:06:45.606179 containerd[1544]: time="2024-11-12T18:06:45.606149864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 18:06:45.613228 containerd[1544]: time="2024-11-12T18:06:45.613184820Z" level=info msg="CreateContainer within sandbox \"8ad988afbea06a72e9ed7f59dde509aac344264ecab255125efcd37ca210523c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 18:06:45.624604 containerd[1544]: time="2024-11-12T18:06:45.624552800Z" level=info msg="CreateContainer within sandbox \"8ad988afbea06a72e9ed7f59dde509aac344264ecab255125efcd37ca210523c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1e501ec9357f4674035005a41296603717e9f7314be0174af565a04726eb83b7\"" Nov 12 18:06:45.624981 containerd[1544]: time="2024-11-12T18:06:45.624948082Z" level=info msg="StartContainer for \"1e501ec9357f4674035005a41296603717e9f7314be0174af565a04726eb83b7\"" Nov 12 18:06:45.691937 containerd[1544]: time="2024-11-12T18:06:45.691302387Z" level=info msg="StartContainer for \"1e501ec9357f4674035005a41296603717e9f7314be0174af565a04726eb83b7\" returns successfully" Nov 12 18:06:45.811574 kubelet[2722]: E1112 18:06:45.811221 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:45.820492 kubelet[2722]: I1112 18:06:45.820448 2722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-74cb6dc955-lmc66" podStartSLOduration=1.069102718 podStartE2EDuration="3.820414779s" podCreationTimestamp="2024-11-12 18:06:42 +0000 UTC" firstStartedPulling="2024-11-12 18:06:42.854685522 +0000 UTC m=+21.179668858" lastFinishedPulling="2024-11-12 18:06:45.605997583 +0000 UTC m=+23.930980919" observedRunningTime="2024-11-12 18:06:45.818932571 +0000 UTC m=+24.143915907" watchObservedRunningTime="2024-11-12 18:06:45.820414779 +0000 UTC m=+24.145398115" Nov 12 18:06:46.757824 kubelet[2722]: E1112 18:06:46.757306 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g9kgv" podUID="a282df54-f6aa-450e-a3f8-2feaec5bf123" Nov 12 18:06:46.813486 kubelet[2722]: I1112 18:06:46.813452 2722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 18:06:46.814087 kubelet[2722]: E1112 18:06:46.814053 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:48.287928 containerd[1544]: time="2024-11-12T18:06:48.287889677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:48.288823 containerd[1544]: time="2024-11-12T18:06:48.288672880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=89700517" Nov 12 
18:06:48.289624 containerd[1544]: time="2024-11-12T18:06:48.289555564Z" level=info msg="ImageCreate event name:\"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:48.291827 containerd[1544]: time="2024-11-12T18:06:48.291697774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:48.292366 containerd[1544]: time="2024-11-12T18:06:48.292344617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"91070153\" in 2.686158913s" Nov 12 18:06:48.292418 containerd[1544]: time="2024-11-12T18:06:48.292375857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\"" Nov 12 18:06:48.294471 containerd[1544]: time="2024-11-12T18:06:48.294442187Z" level=info msg="CreateContainer within sandbox \"b642b392cd185e54775dbcd4f420b35eeb59a60b67f0a6d7dc54401df0071335\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 18:06:48.308263 containerd[1544]: time="2024-11-12T18:06:48.308218130Z" level=info msg="CreateContainer within sandbox \"b642b392cd185e54775dbcd4f420b35eeb59a60b67f0a6d7dc54401df0071335\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5a91c486f001129dc4be1d24cde1fb4050f7f18a73563be5d713ebc2aa781ce6\"" Nov 12 18:06:48.308924 containerd[1544]: time="2024-11-12T18:06:48.308648812Z" level=info msg="StartContainer for \"5a91c486f001129dc4be1d24cde1fb4050f7f18a73563be5d713ebc2aa781ce6\"" Nov 12 18:06:48.357069 containerd[1544]: time="2024-11-12T18:06:48.357020914Z" level=info msg="StartContainer for \"5a91c486f001129dc4be1d24cde1fb4050f7f18a73563be5d713ebc2aa781ce6\" returns successfully" Nov 12 18:06:48.757079 kubelet[2722]: E1112 18:06:48.756999 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g9kgv" podUID="a282df54-f6aa-450e-a3f8-2feaec5bf123" Nov 12 18:06:48.818481 kubelet[2722]: E1112 18:06:48.817936 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:48.993568 containerd[1544]: time="2024-11-12T18:06:48.993466593Z" level=info msg="shim disconnected" id=5a91c486f001129dc4be1d24cde1fb4050f7f18a73563be5d713ebc2aa781ce6 namespace=k8s.io Nov 12 18:06:48.993568 containerd[1544]: time="2024-11-12T18:06:48.993522393Z" level=warning msg="cleaning up after shim disconnected" id=5a91c486f001129dc4be1d24cde1fb4050f7f18a73563be5d713ebc2aa781ce6 namespace=k8s.io Nov 12 18:06:48.993568 containerd[1544]: time="2024-11-12T18:06:48.993530793Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 18:06:48.998378 kubelet[2722]: I1112 18:06:48.998197 2722 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 18:06:49.024992 kubelet[2722]: 
I1112 18:06:49.024868 2722 topology_manager.go:215] "Topology Admit Handler" podUID="4735fef9-5e79-40fb-ba5e-7a6cff344df8" podNamespace="calico-system" podName="calico-kube-controllers-5bd4854968-v99gp" Nov 12 18:06:49.031068 kubelet[2722]: I1112 18:06:49.030390 2722 topology_manager.go:215] "Topology Admit Handler" podUID="4c5a9c64-6667-4267-a885-4e8b234758e4" podNamespace="calico-apiserver" podName="calico-apiserver-55594cbfc8-xtvnb" Nov 12 18:06:49.031068 kubelet[2722]: I1112 18:06:49.030806 2722 topology_manager.go:215] "Topology Admit Handler" podUID="28a6568b-1b3e-478e-ba5b-d89b95125e3f" podNamespace="kube-system" podName="coredns-76f75df574-pgmhz" Nov 12 18:06:49.045867 kubelet[2722]: I1112 18:06:49.045782 2722 topology_manager.go:215] "Topology Admit Handler" podUID="cdff6f1d-4fee-422c-bb63-c5707ab88ef8" podNamespace="kube-system" podName="coredns-76f75df574-bdfqg" Nov 12 18:06:49.046236 kubelet[2722]: I1112 18:06:49.045955 2722 topology_manager.go:215] "Topology Admit Handler" podUID="30f0ebab-e362-4d1a-9134-a19a9dbbe847" podNamespace="calico-apiserver" podName="calico-apiserver-55594cbfc8-dq7v6" Nov 12 18:06:49.214940 kubelet[2722]: I1112 18:06:49.214892 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4735fef9-5e79-40fb-ba5e-7a6cff344df8-tigera-ca-bundle\") pod \"calico-kube-controllers-5bd4854968-v99gp\" (UID: \"4735fef9-5e79-40fb-ba5e-7a6cff344df8\") " pod="calico-system/calico-kube-controllers-5bd4854968-v99gp" Nov 12 18:06:49.214940 kubelet[2722]: I1112 18:06:49.214938 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdff6f1d-4fee-422c-bb63-c5707ab88ef8-config-volume\") pod \"coredns-76f75df574-bdfqg\" (UID: \"cdff6f1d-4fee-422c-bb63-c5707ab88ef8\") " pod="kube-system/coredns-76f75df574-bdfqg" Nov 12 18:06:49.215101 kubelet[2722]: I1112 18:06:49.214963 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5kw5\" (UniqueName: \"kubernetes.io/projected/cdff6f1d-4fee-422c-bb63-c5707ab88ef8-kube-api-access-w5kw5\") pod \"coredns-76f75df574-bdfqg\" (UID: \"cdff6f1d-4fee-422c-bb63-c5707ab88ef8\") " pod="kube-system/coredns-76f75df574-bdfqg" Nov 12 18:06:49.215101 kubelet[2722]: I1112 18:06:49.214990 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/30f0ebab-e362-4d1a-9134-a19a9dbbe847-calico-apiserver-certs\") pod \"calico-apiserver-55594cbfc8-dq7v6\" (UID: \"30f0ebab-e362-4d1a-9134-a19a9dbbe847\") " pod="calico-apiserver/calico-apiserver-55594cbfc8-dq7v6" Nov 12 18:06:49.215185 kubelet[2722]: I1112 18:06:49.215127 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6sb8\" (UniqueName: \"kubernetes.io/projected/4735fef9-5e79-40fb-ba5e-7a6cff344df8-kube-api-access-j6sb8\") pod \"calico-kube-controllers-5bd4854968-v99gp\" (UID: \"4735fef9-5e79-40fb-ba5e-7a6cff344df8\") " pod="calico-system/calico-kube-controllers-5bd4854968-v99gp" Nov 12 18:06:49.215185 kubelet[2722]: I1112 18:06:49.215153 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prdjg\" (UniqueName: \"kubernetes.io/projected/4c5a9c64-6667-4267-a885-4e8b234758e4-kube-api-access-prdjg\") pod 
\"calico-apiserver-55594cbfc8-xtvnb\" (UID: \"4c5a9c64-6667-4267-a885-4e8b234758e4\") " pod="calico-apiserver/calico-apiserver-55594cbfc8-xtvnb" Nov 12 18:06:49.215239 kubelet[2722]: I1112 18:06:49.215198 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvx64\" (UniqueName: \"kubernetes.io/projected/30f0ebab-e362-4d1a-9134-a19a9dbbe847-kube-api-access-bvx64\") pod \"calico-apiserver-55594cbfc8-dq7v6\" (UID: \"30f0ebab-e362-4d1a-9134-a19a9dbbe847\") " pod="calico-apiserver/calico-apiserver-55594cbfc8-dq7v6" Nov 12 18:06:49.215239 kubelet[2722]: I1112 18:06:49.215222 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjgrq\" (UniqueName: \"kubernetes.io/projected/28a6568b-1b3e-478e-ba5b-d89b95125e3f-kube-api-access-mjgrq\") pod \"coredns-76f75df574-pgmhz\" (UID: \"28a6568b-1b3e-478e-ba5b-d89b95125e3f\") " pod="kube-system/coredns-76f75df574-pgmhz" Nov 12 18:06:49.215290 kubelet[2722]: I1112 18:06:49.215244 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28a6568b-1b3e-478e-ba5b-d89b95125e3f-config-volume\") pod \"coredns-76f75df574-pgmhz\" (UID: \"28a6568b-1b3e-478e-ba5b-d89b95125e3f\") " pod="kube-system/coredns-76f75df574-pgmhz" Nov 12 18:06:49.215290 kubelet[2722]: I1112 18:06:49.215264 2722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4c5a9c64-6667-4267-a885-4e8b234758e4-calico-apiserver-certs\") pod \"calico-apiserver-55594cbfc8-xtvnb\" (UID: \"4c5a9c64-6667-4267-a885-4e8b234758e4\") " pod="calico-apiserver/calico-apiserver-55594cbfc8-xtvnb" Nov 12 18:06:49.305198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a91c486f001129dc4be1d24cde1fb4050f7f18a73563be5d713ebc2aa781ce6-rootfs.mount: Deactivated successfully. 
Nov 12 18:06:49.337740 containerd[1544]: time="2024-11-12T18:06:49.337685550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55594cbfc8-xtvnb,Uid:4c5a9c64-6667-4267-a885-4e8b234758e4,Namespace:calico-apiserver,Attempt:0,}" Nov 12 18:06:49.349658 kubelet[2722]: E1112 18:06:49.349304 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:49.350023 kubelet[2722]: E1112 18:06:49.349885 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:49.350898 containerd[1544]: time="2024-11-12T18:06:49.350862928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bdfqg,Uid:cdff6f1d-4fee-422c-bb63-c5707ab88ef8,Namespace:kube-system,Attempt:0,}" Nov 12 18:06:49.351024 containerd[1544]: time="2024-11-12T18:06:49.350995769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pgmhz,Uid:28a6568b-1b3e-478e-ba5b-d89b95125e3f,Namespace:kube-system,Attempt:0,}" Nov 12 18:06:49.354051 containerd[1544]: time="2024-11-12T18:06:49.353980782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55594cbfc8-dq7v6,Uid:30f0ebab-e362-4d1a-9134-a19a9dbbe847,Namespace:calico-apiserver,Attempt:0,}" Nov 12 18:06:49.632210 containerd[1544]: time="2024-11-12T18:06:49.632000327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd4854968-v99gp,Uid:4735fef9-5e79-40fb-ba5e-7a6cff344df8,Namespace:calico-system,Attempt:0,}" Nov 12 18:06:49.635832 containerd[1544]: time="2024-11-12T18:06:49.635765464Z" level=error msg="Failed to destroy network for sandbox \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.636231 containerd[1544]: time="2024-11-12T18:06:49.636117145Z" level=error msg="encountered an error cleaning up failed sandbox \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.636231 containerd[1544]: time="2024-11-12T18:06:49.636175546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55594cbfc8-dq7v6,Uid:30f0ebab-e362-4d1a-9134-a19a9dbbe847,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.636967 containerd[1544]: time="2024-11-12T18:06:49.636927109Z" level=error msg="Failed to destroy network for sandbox \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.637765 containerd[1544]: 
time="2024-11-12T18:06:49.637734432Z" level=error msg="encountered an error cleaning up failed sandbox \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.637838 containerd[1544]: time="2024-11-12T18:06:49.637782353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bdfqg,Uid:cdff6f1d-4fee-422c-bb63-c5707ab88ef8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.638799 containerd[1544]: time="2024-11-12T18:06:49.638666876Z" level=error msg="Failed to destroy network for sandbox \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.639300 kubelet[2722]: E1112 18:06:49.639263 2722 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.639381 kubelet[2722]: E1112 18:06:49.639335 2722 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bdfqg" Nov 12 18:06:49.639381 kubelet[2722]: E1112 18:06:49.639366 2722 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bdfqg" Nov 12 18:06:49.639429 kubelet[2722]: E1112 18:06:49.639269 2722 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.639756 kubelet[2722]: E1112 18:06:49.639465 2722 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55594cbfc8-dq7v6" Nov 12 18:06:49.639756 kubelet[2722]: E1112 18:06:49.639517 2722 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55594cbfc8-dq7v6" Nov 12 18:06:49.639907 containerd[1544]: time="2024-11-12T18:06:49.639636441Z" level=error msg="encountered an error cleaning up failed sandbox \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.639907 containerd[1544]: time="2024-11-12T18:06:49.639675521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55594cbfc8-xtvnb,Uid:4c5a9c64-6667-4267-a885-4e8b234758e4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.640347 kubelet[2722]: E1112 18:06:49.639562 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55594cbfc8-dq7v6_calico-apiserver(30f0ebab-e362-4d1a-9134-a19a9dbbe847)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55594cbfc8-dq7v6_calico-apiserver(30f0ebab-e362-4d1a-9134-a19a9dbbe847)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55594cbfc8-dq7v6" podUID="30f0ebab-e362-4d1a-9134-a19a9dbbe847" Nov 12 18:06:49.640516 kubelet[2722]: E1112 18:06:49.639420 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-bdfqg_kube-system(cdff6f1d-4fee-422c-bb63-c5707ab88ef8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-bdfqg_kube-system(cdff6f1d-4fee-422c-bb63-c5707ab88ef8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bdfqg" podUID="cdff6f1d-4fee-422c-bb63-c5707ab88ef8" Nov 12 18:06:49.640606 kubelet[2722]: E1112 18:06:49.639863 2722 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.640687 kubelet[2722]: E1112 18:06:49.640677 2722 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55594cbfc8-xtvnb" Nov 12 18:06:49.640746 kubelet[2722]: E1112 18:06:49.640738 2722 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55594cbfc8-xtvnb" Nov 12 18:06:49.640896 kubelet[2722]: E1112 18:06:49.640882 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55594cbfc8-xtvnb_calico-apiserver(4c5a9c64-6667-4267-a885-4e8b234758e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55594cbfc8-xtvnb_calico-apiserver(4c5a9c64-6667-4267-a885-4e8b234758e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55594cbfc8-xtvnb" podUID="4c5a9c64-6667-4267-a885-4e8b234758e4" Nov 12 18:06:49.648367 containerd[1544]: time="2024-11-12T18:06:49.648317399Z" level=error msg="Failed to destroy network for sandbox \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.648622 containerd[1544]: time="2024-11-12T18:06:49.648595680Z" level=error msg="encountered an error cleaning up failed sandbox \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.648654 containerd[1544]: time="2024-11-12T18:06:49.648637240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pgmhz,Uid:28a6568b-1b3e-478e-ba5b-d89b95125e3f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.649025 kubelet[2722]: E1112 18:06:49.648879 2722 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.649025 kubelet[2722]: E1112 18:06:49.648929 2722 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-pgmhz" Nov 12 18:06:49.649025 kubelet[2722]: E1112 18:06:49.648948 2722 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-pgmhz" Nov 12 18:06:49.649979 kubelet[2722]: E1112 18:06:49.649001 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-pgmhz_kube-system(28a6568b-1b3e-478e-ba5b-d89b95125e3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-pgmhz_kube-system(28a6568b-1b3e-478e-ba5b-d89b95125e3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-pgmhz" podUID="28a6568b-1b3e-478e-ba5b-d89b95125e3f" Nov 12 18:06:49.682012 containerd[1544]: time="2024-11-12T18:06:49.681971267Z" level=error msg="Failed to destroy network for sandbox \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.682313 containerd[1544]: time="2024-11-12T18:06:49.682286109Z" level=error msg="encountered an error cleaning up failed sandbox \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.682367 containerd[1544]: time="2024-11-12T18:06:49.682333709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd4854968-v99gp,Uid:4735fef9-5e79-40fb-ba5e-7a6cff344df8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 12 18:06:49.682620 kubelet[2722]: E1112 18:06:49.682587 2722 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.682664 kubelet[2722]: E1112 18:06:49.682640 2722 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bd4854968-v99gp" Nov 12 18:06:49.682664 kubelet[2722]: E1112 18:06:49.682660 2722 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bd4854968-v99gp" Nov 12 18:06:49.682732 kubelet[2722]: E1112 18:06:49.682719 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bd4854968-v99gp_calico-system(4735fef9-5e79-40fb-ba5e-7a6cff344df8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bd4854968-v99gp_calico-system(4735fef9-5e79-40fb-ba5e-7a6cff344df8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bd4854968-v99gp" podUID="4735fef9-5e79-40fb-ba5e-7a6cff344df8" Nov 12 18:06:49.821549 kubelet[2722]: I1112 18:06:49.821515 2722 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:06:49.822238 kubelet[2722]: E1112 18:06:49.821904 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:49.822414 containerd[1544]: time="2024-11-12T18:06:49.822272685Z" level=info msg="StopPodSandbox for \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\"" Nov 12 18:06:49.822947 kubelet[2722]: I1112 18:06:49.822923 2722 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:06:49.823453 containerd[1544]: time="2024-11-12T18:06:49.822850968Z" level=info msg="Ensure that sandbox 3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71 in task-service has been cleanup successfully" Nov 12 18:06:49.823734 containerd[1544]: time="2024-11-12T18:06:49.823709292Z" level=info msg="StopPodSandbox for 
\"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\"" Nov 12 18:06:49.825029 containerd[1544]: time="2024-11-12T18:06:49.824894257Z" level=info msg="Ensure that sandbox 4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16 in task-service has been cleanup successfully" Nov 12 18:06:49.825992 kubelet[2722]: I1112 18:06:49.825960 2722 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:06:49.826905 containerd[1544]: time="2024-11-12T18:06:49.826321023Z" level=info msg="StopPodSandbox for \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\"" Nov 12 18:06:49.826905 containerd[1544]: time="2024-11-12T18:06:49.826440904Z" level=info msg="Ensure that sandbox e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094 in task-service has been cleanup successfully" Nov 12 18:06:49.827865 kubelet[2722]: I1112 18:06:49.827836 2722 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:06:49.828448 containerd[1544]: time="2024-11-12T18:06:49.828418752Z" level=info msg="StopPodSandbox for \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\"" Nov 12 18:06:49.828701 containerd[1544]: time="2024-11-12T18:06:49.828670954Z" level=info msg="Ensure that sandbox 8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c in task-service has been cleanup successfully" Nov 12 18:06:49.833306 kubelet[2722]: I1112 18:06:49.833283 2722 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:06:49.834369 containerd[1544]: time="2024-11-12T18:06:49.834275938Z" level=info msg="StopPodSandbox for \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\"" Nov 12 18:06:49.834713 containerd[1544]: time="2024-11-12T18:06:49.834563340Z" level=info msg="Ensure that sandbox f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb in task-service has been cleanup successfully" Nov 12 18:06:49.838289 containerd[1544]: time="2024-11-12T18:06:49.838170755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 18:06:49.878563 containerd[1544]: time="2024-11-12T18:06:49.876267963Z" level=error msg="StopPodSandbox for \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\" failed" error="failed to destroy network for sandbox \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.879041 kubelet[2722]: E1112 18:06:49.876584 2722 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:06:49.879041 kubelet[2722]: E1112 18:06:49.876671 2722 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71"} Nov 12 18:06:49.879041 kubelet[2722]: E1112 18:06:49.876705 2722 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4735fef9-5e79-40fb-ba5e-7a6cff344df8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 18:06:49.879041 kubelet[2722]: E1112 18:06:49.876731 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4735fef9-5e79-40fb-ba5e-7a6cff344df8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bd4854968-v99gp" podUID="4735fef9-5e79-40fb-ba5e-7a6cff344df8" Nov 12 18:06:49.898324 containerd[1544]: time="2024-11-12T18:06:49.898083059Z" level=error msg="StopPodSandbox for \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\" failed" error="failed to destroy network for sandbox \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.898964 kubelet[2722]: E1112 18:06:49.898408 2722 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:06:49.898964 kubelet[2722]: E1112 18:06:49.898455 2722 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094"} Nov 12 18:06:49.898964 kubelet[2722]: E1112 18:06:49.898488 2722 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cdff6f1d-4fee-422c-bb63-c5707ab88ef8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 18:06:49.898964 kubelet[2722]: E1112 18:06:49.898518 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cdff6f1d-4fee-422c-bb63-c5707ab88ef8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bdfqg" podUID="cdff6f1d-4fee-422c-bb63-c5707ab88ef8" Nov 12 18:06:49.905806 containerd[1544]: time="2024-11-12T18:06:49.905032050Z" level=error msg="StopPodSandbox for \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\" failed" error="failed to destroy network for sandbox \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.910045 kubelet[2722]: E1112 18:06:49.910016 2722 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:06:49.910129 kubelet[2722]: E1112 18:06:49.910063 2722 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb"} Nov 12 18:06:49.910129 kubelet[2722]: E1112 18:06:49.910098 2722 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c5a9c64-6667-4267-a885-4e8b234758e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 18:06:49.910129 kubelet[2722]: E1112 18:06:49.910124 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c5a9c64-6667-4267-a885-4e8b234758e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55594cbfc8-xtvnb" podUID="4c5a9c64-6667-4267-a885-4e8b234758e4" Nov 12 18:06:49.934421 containerd[1544]: time="2024-11-12T18:06:49.934047338Z" level=error msg="StopPodSandbox for \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\" failed" error="failed to destroy network for sandbox \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.942657 kubelet[2722]: E1112 18:06:49.940849 2722 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:06:49.942657 kubelet[2722]: E1112 18:06:49.940979 2722 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16"} Nov 12 18:06:49.942657 kubelet[2722]: E1112 18:06:49.941023 2722 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28a6568b-1b3e-478e-ba5b-d89b95125e3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 18:06:49.942657 kubelet[2722]: E1112 18:06:49.941061 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28a6568b-1b3e-478e-ba5b-d89b95125e3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-pgmhz" podUID="28a6568b-1b3e-478e-ba5b-d89b95125e3f" Nov 12 18:06:49.954843 containerd[1544]: time="2024-11-12T18:06:49.954781909Z" level=error msg="StopPodSandbox for \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\" failed" error="failed to destroy network for sandbox \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:49.955051 kubelet[2722]: E1112 18:06:49.955025 2722 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:06:49.955128 kubelet[2722]: E1112 18:06:49.955061 2722 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c"} Nov 12 18:06:49.955128 kubelet[2722]: E1112 18:06:49.955095 2722 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30f0ebab-e362-4d1a-9134-a19a9dbbe847\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 18:06:49.955128 kubelet[2722]: E1112 18:06:49.955121 2722 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"30f0ebab-e362-4d1a-9134-a19a9dbbe847\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55594cbfc8-dq7v6" podUID="30f0ebab-e362-4d1a-9134-a19a9dbbe847" Nov 12 18:06:50.306757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb-shm.mount: Deactivated successfully. Nov 12 18:06:50.605183 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:60054.service - OpenSSH per-connection server daemon (10.0.0.1:60054). Nov 12 18:06:50.639392 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 60054 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:06:50.640721 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:06:50.644431 systemd-logind[1523]: New session 8 of user core. Nov 12 18:06:50.651021 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 18:06:50.762247 containerd[1544]: time="2024-11-12T18:06:50.761605815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g9kgv,Uid:a282df54-f6aa-450e-a3f8-2feaec5bf123,Namespace:calico-system,Attempt:0,}" Nov 12 18:06:50.788421 sshd[3723]: pam_unix(sshd:session): session closed for user core Nov 12 18:06:50.795715 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:60054.service: Deactivated successfully. Nov 12 18:06:50.803110 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 18:06:50.804110 systemd-logind[1523]: Session 8 logged out. Waiting for processes to exit. Nov 12 18:06:50.805720 systemd-logind[1523]: Removed session 8. 
Nov 12 18:06:50.852682 containerd[1544]: time="2024-11-12T18:06:50.852637401Z" level=error msg="Failed to destroy network for sandbox \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:50.853044 containerd[1544]: time="2024-11-12T18:06:50.853012082Z" level=error msg="encountered an error cleaning up failed sandbox \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:50.853090 containerd[1544]: time="2024-11-12T18:06:50.853070243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g9kgv,Uid:a282df54-f6aa-450e-a3f8-2feaec5bf123,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:50.853314 kubelet[2722]: E1112 18:06:50.853291 2722 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:50.853568 kubelet[2722]: E1112 18:06:50.853342 2722 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g9kgv" Nov 12 18:06:50.853568 kubelet[2722]: E1112 18:06:50.853363 2722 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g9kgv" Nov 12 18:06:50.853568 kubelet[2722]: E1112 18:06:50.853419 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g9kgv_calico-system(a282df54-f6aa-450e-a3f8-2feaec5bf123)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g9kgv_calico-system(a282df54-f6aa-450e-a3f8-2feaec5bf123)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-g9kgv" podUID="a282df54-f6aa-450e-a3f8-2feaec5bf123" Nov 12 18:06:50.854822 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1-shm.mount: Deactivated successfully. Nov 12 18:06:51.274882 kubelet[2722]: I1112 18:06:51.274820 2722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 18:06:51.276376 kubelet[2722]: E1112 18:06:51.276301 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:51.839177 kubelet[2722]: E1112 18:06:51.839135 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:51.839566 kubelet[2722]: I1112 18:06:51.839440 2722 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:06:51.840416 containerd[1544]: time="2024-11-12T18:06:51.840372892Z" level=info msg="StopPodSandbox for \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\"" Nov 12 18:06:51.840855 containerd[1544]: time="2024-11-12T18:06:51.840546773Z" level=info msg="Ensure that sandbox 9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1 in task-service has been cleanup successfully" Nov 12 18:06:51.867118 containerd[1544]: time="2024-11-12T18:06:51.867032961Z" level=error msg="StopPodSandbox for \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\" failed" error="failed to destroy network for sandbox \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 18:06:51.867291 kubelet[2722]: E1112 18:06:51.867246 2722 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:06:51.867291 kubelet[2722]: E1112 18:06:51.867285 2722 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1"} Nov 12 18:06:51.867596 kubelet[2722]: E1112 18:06:51.867320 2722 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a282df54-f6aa-450e-a3f8-2feaec5bf123\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 18:06:51.867596 kubelet[2722]: E1112 18:06:51.867347 2722 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a282df54-f6aa-450e-a3f8-2feaec5bf123\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g9kgv" podUID="a282df54-f6aa-450e-a3f8-2feaec5bf123" Nov 12 18:06:53.513103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059634340.mount: Deactivated successfully. Nov 12 18:06:53.599072 containerd[1544]: time="2024-11-12T18:06:53.599021503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:53.599694 containerd[1544]: time="2024-11-12T18:06:53.599649545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=135495328" Nov 12 18:06:53.600598 containerd[1544]: time="2024-11-12T18:06:53.600537469Z" level=info msg="ImageCreate event name:\"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:53.602464 containerd[1544]: time="2024-11-12T18:06:53.602249915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:06:53.602934 containerd[1544]: time="2024-11-12T18:06:53.602893718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"135495190\" in 3.764665401s" Nov 12 18:06:53.602934 containerd[1544]: time="2024-11-12T18:06:53.602926998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\"" Nov 12 18:06:53.610029 containerd[1544]: time="2024-11-12T18:06:53.609991464Z" level=info msg="CreateContainer within sandbox \"b642b392cd185e54775dbcd4f420b35eeb59a60b67f0a6d7dc54401df0071335\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 18:06:53.622100 containerd[1544]: time="2024-11-12T18:06:53.622039670Z" level=info msg="CreateContainer within sandbox \"b642b392cd185e54775dbcd4f420b35eeb59a60b67f0a6d7dc54401df0071335\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d94a7c82610f6ecd07a3f665d37eedbc5a48af25f6458df983b41649a75bf11c\"" Nov 12 18:06:53.623241 containerd[1544]: time="2024-11-12T18:06:53.623209555Z" level=info msg="StartContainer for \"d94a7c82610f6ecd07a3f665d37eedbc5a48af25f6458df983b41649a75bf11c\"" Nov 12 18:06:53.805443 containerd[1544]: time="2024-11-12T18:06:53.803576438Z" level=info msg="StartContainer for \"d94a7c82610f6ecd07a3f665d37eedbc5a48af25f6458df983b41649a75bf11c\" returns successfully" Nov 12 18:06:53.846578 kubelet[2722]: E1112 18:06:53.845708 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:53.904837 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Nov 12 18:06:53.905004 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 12 18:06:54.846863 kubelet[2722]: I1112 18:06:54.846369 2722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 18:06:54.847235 kubelet[2722]: E1112 18:06:54.847133 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:55.390824 kernel: bpftool[3998]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 18:06:55.541740 systemd-networkd[1231]: vxlan.calico: Link UP Nov 12 18:06:55.541748 systemd-networkd[1231]: vxlan.calico: Gained carrier Nov 12 18:06:55.800117 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:48486.service - OpenSSH per-connection server daemon (10.0.0.1:48486). Nov 12 18:06:55.845661 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 48486 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:06:55.847052 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:06:55.854273 systemd-logind[1523]: New session 9 of user core. Nov 12 18:06:55.864325 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 18:06:56.013669 sshd[4068]: pam_unix(sshd:session): session closed for user core Nov 12 18:06:56.018218 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:48486.service: Deactivated successfully. Nov 12 18:06:56.021347 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 18:06:56.022590 systemd-logind[1523]: Session 9 logged out. Waiting for processes to exit. Nov 12 18:06:56.024664 systemd-logind[1523]: Removed session 9. Nov 12 18:06:56.572025 kubelet[2722]: I1112 18:06:56.571883 2722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 18:06:56.573063 kubelet[2722]: E1112 18:06:56.572650 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:06:56.966969 systemd-networkd[1231]: vxlan.calico: Gained IPv6LL Nov 12 18:07:01.029270 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:48502.service - OpenSSH per-connection server daemon (10.0.0.1:48502). Nov 12 18:07:01.061468 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 48502 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:07:01.062874 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:07:01.067130 systemd-logind[1523]: New session 10 of user core. Nov 12 18:07:01.081026 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 18:07:01.213731 sshd[4142]: pam_unix(sshd:session): session closed for user core Nov 12 18:07:01.227034 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:48512.service - OpenSSH per-connection server daemon (10.0.0.1:48512). Nov 12 18:07:01.227503 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:48502.service: Deactivated successfully. Nov 12 18:07:01.229957 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 18:07:01.231506 systemd-logind[1523]: Session 10 logged out. Waiting for processes to exit. Nov 12 18:07:01.232770 systemd-logind[1523]: Removed session 10.
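[Editor's note] The recurring "Nameserver limits exceeded" events are kubelet's dns.go capping a pod's resolv.conf at three nameservers; the applied line in the log keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8 and drops whatever else the host's resolv.conf contained. A sketch of that truncation; the limit of 3 is the resolver convention kubelet enforces, and the fourth entry below is hypothetical, standing in for the dropped one on this host:

    // ns_limit.go: sketch of the nameserver cap behind the dns.go:153 events.
    package main

    import "fmt"

    const maxNameservers = 3 // resolv.conf limit kubelet enforces

    func applyLimit(ns []string) (applied, omitted []string) {
        if len(ns) <= maxNameservers {
            return ns, nil
        }
        return ns[:maxNameservers], ns[maxNameservers:]
    }

    func main() {
        // First three are the survivors shown in the log; 192.0.2.53 is invented.
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"}
        applied, omitted := applyLimit(host)
        fmt.Println("applied:", applied, "omitted:", omitted)
    }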
Nov 12 18:07:01.259120 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 48512 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:07:01.260340 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:07:01.264639 systemd-logind[1523]: New session 11 of user core. Nov 12 18:07:01.274032 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 18:07:01.423826 sshd[4155]: pam_unix(sshd:session): session closed for user core Nov 12 18:07:01.432920 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:48520.service - OpenSSH per-connection server daemon (10.0.0.1:48520). Nov 12 18:07:01.433454 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:48512.service: Deactivated successfully. Nov 12 18:07:01.440468 systemd-logind[1523]: Session 11 logged out. Waiting for processes to exit. Nov 12 18:07:01.447999 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 18:07:01.452990 systemd-logind[1523]: Removed session 11. Nov 12 18:07:01.483545 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 48520 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:07:01.485050 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:07:01.492274 systemd-logind[1523]: New session 12 of user core. Nov 12 18:07:01.506033 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 18:07:01.634424 sshd[4169]: pam_unix(sshd:session): session closed for user core Nov 12 18:07:01.638092 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:48520.service: Deactivated successfully. Nov 12 18:07:01.639998 systemd-logind[1523]: Session 12 logged out. Waiting for processes to exit. Nov 12 18:07:01.640073 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 18:07:01.641050 systemd-logind[1523]: Removed session 12. 
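[Editor's note] The sshd@N-... units opening and closing above come from systemd socket activation: each incoming connection gets its own templated unit whose instance name encodes a connection counter plus the local and remote endpoints, as in sshd@9-10.0.0.144:22-10.0.0.1:48502.service. A small parser for that naming, IPv4-only as in this log; it is illustrative and not any systemd API:

    // sshd_unit.go: pull apart the per-connection unit names seen above.
    package main

    import (
        "fmt"
        "strings"
    )

    func parseInstance(unit string) (n, local, remote string, err error) {
        inst := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
        parts := strings.SplitN(inst, "-", 3) // would break on IPv6; fine here
        if len(parts) != 3 {
            return "", "", "", fmt.Errorf("unexpected instance %q", inst)
        }
        return parts[0], parts[1], parts[2], nil
    }

    func main() {
        n, local, remote, err := parseInstance("sshd@9-10.0.0.144:22-10.0.0.1:48502.service")
        if err != nil {
            panic(err)
        }
        fmt.Printf("connection #%s local=%s remote=%s\n", n, local, remote)
    }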
Nov 12 18:07:02.757509 containerd[1544]: time="2024-11-12T18:07:02.757463436Z" level=info msg="StopPodSandbox for \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\"" Nov 12 18:07:02.757868 containerd[1544]: time="2024-11-12T18:07:02.757559596Z" level=info msg="StopPodSandbox for \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\"" Nov 12 18:07:02.757868 containerd[1544]: time="2024-11-12T18:07:02.757627996Z" level=info msg="StopPodSandbox for \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\"" Nov 12 18:07:02.758536 containerd[1544]: time="2024-11-12T18:07:02.758216078Z" level=info msg="StopPodSandbox for \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\"" Nov 12 18:07:02.897318 kubelet[2722]: I1112 18:07:02.897278 2722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-fdlgs" podStartSLOduration=10.14875324 podStartE2EDuration="20.897234357s" podCreationTimestamp="2024-11-12 18:06:42 +0000 UTC" firstStartedPulling="2024-11-12 18:06:42.854673922 +0000 UTC m=+21.179657218" lastFinishedPulling="2024-11-12 18:06:53.603154999 +0000 UTC m=+31.928138335" observedRunningTime="2024-11-12 18:06:53.860318453 +0000 UTC m=+32.185301829" watchObservedRunningTime="2024-11-12 18:07:02.897234357 +0000 UTC m=+41.222217693" Nov 12 18:07:03.137947 containerd[1544]: 2024-11-12 18:07:02.894 [INFO][4257] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:07:03.137947 containerd[1544]: 2024-11-12 18:07:02.894 [INFO][4257] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" iface="eth0" netns="/var/run/netns/cni-26642f96-a2d9-2ebe-5260-4e634da6d56f" Nov 12 18:07:03.137947 containerd[1544]: 2024-11-12 18:07:02.896 [INFO][4257] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" iface="eth0" netns="/var/run/netns/cni-26642f96-a2d9-2ebe-5260-4e634da6d56f" Nov 12 18:07:03.137947 containerd[1544]: 2024-11-12 18:07:02.898 [INFO][4257] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" iface="eth0" netns="/var/run/netns/cni-26642f96-a2d9-2ebe-5260-4e634da6d56f" Nov 12 18:07:03.137947 containerd[1544]: 2024-11-12 18:07:02.898 [INFO][4257] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:07:03.137947 containerd[1544]: 2024-11-12 18:07:02.898 [INFO][4257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:07:03.137947 containerd[1544]: 2024-11-12 18:07:03.119 [INFO][4285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" HandleID="k8s-pod-network.4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Workload="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:03.137947 containerd[1544]: 2024-11-12 18:07:03.120 [INFO][4285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:03.137947 containerd[1544]: 2024-11-12 18:07:03.120 [INFO][4285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
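[Editor's note] The ipam_plugin lines trace Calico's release path: acquire a host-wide IPAM lock, release the allocation by handle, and treat a missing allocation as success (the WARNING "Asked to release address but it doesn't exist. Ignoring" in the entries just below), so repeated teardowns stay idempotent. A toy Go version of that pattern, with invented names (hostLock, releaseByHandle) rather than Calico's real types:

    // ipam_release.go: idempotent release under a host-wide lock, as logged above.
    package main

    import (
        "fmt"
        "sync"
    )

    var (
        hostLock    sync.Mutex              // stands in for the host-wide IPAM lock
        allocations = map[string][]string{} // handleID -> IPs, a toy datastore
    )

    func releaseByHandle(handleID string) {
        hostLock.Lock()         // "About to acquire host-wide IPAM lock." / "Acquired ..."
        defer hostLock.Unlock() // "Released host-wide IPAM lock."

        ips, ok := allocations[handleID]
        if !ok {
            // "Asked to release address but it doesn't exist. Ignoring"
            fmt.Printf("WARNING: no allocation for handle %q, ignoring\n", handleID)
            return
        }
        delete(allocations, handleID)
        fmt.Printf("released %v for handle %q\n", ips, handleID)
    }

    func main() {
        releaseByHandle("k8s-pod-network.example") // hypothetical handle
        releaseByHandle("k8s-pod-network.example") // second call is a safe no-op
    }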
Nov 12 18:07:03.137947 containerd[1544]: 2024-11-12 18:07:03.130 [WARNING][4285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" HandleID="k8s-pod-network.4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Workload="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:03.137947 containerd[1544]: 2024-11-12 18:07:03.130 [INFO][4285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" HandleID="k8s-pod-network.4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Workload="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:03.137947 containerd[1544]: 2024-11-12 18:07:03.132 [INFO][4285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:03.137947 containerd[1544]: 2024-11-12 18:07:03.134 [INFO][4257] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:07:03.142524 containerd[1544]: time="2024-11-12T18:07:03.141881888Z" level=info msg="TearDown network for sandbox \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\" successfully" Nov 12 18:07:03.142524 containerd[1544]: time="2024-11-12T18:07:03.141916728Z" level=info msg="StopPodSandbox for \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\" returns successfully" Nov 12 18:07:03.142663 kubelet[2722]: E1112 18:07:03.142555 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:07:03.144265 containerd[1544]: time="2024-11-12T18:07:03.144111734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pgmhz,Uid:28a6568b-1b3e-478e-ba5b-d89b95125e3f,Namespace:kube-system,Attempt:1,}" Nov 12 18:07:03.147970 systemd[1]: run-netns-cni\x2d26642f96\x2da2d9\x2d2ebe\x2d5260\x2d4e634da6d56f.mount: Deactivated successfully. Nov 12 18:07:03.157446 containerd[1544]: 2024-11-12 18:07:02.896 [INFO][4254] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:07:03.157446 containerd[1544]: 2024-11-12 18:07:02.897 [INFO][4254] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" iface="eth0" netns="/var/run/netns/cni-0f944340-fe33-3311-73e7-9e2f4ef4d53d" Nov 12 18:07:03.157446 containerd[1544]: 2024-11-12 18:07:02.898 [INFO][4254] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" iface="eth0" netns="/var/run/netns/cni-0f944340-fe33-3311-73e7-9e2f4ef4d53d" Nov 12 18:07:03.157446 containerd[1544]: 2024-11-12 18:07:02.898 [INFO][4254] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" iface="eth0" netns="/var/run/netns/cni-0f944340-fe33-3311-73e7-9e2f4ef4d53d" Nov 12 18:07:03.157446 containerd[1544]: 2024-11-12 18:07:02.898 [INFO][4254] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:07:03.157446 containerd[1544]: 2024-11-12 18:07:02.899 [INFO][4254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:07:03.157446 containerd[1544]: 2024-11-12 18:07:03.119 [INFO][4287] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" HandleID="k8s-pod-network.3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Workload="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:03.157446 containerd[1544]: 2024-11-12 18:07:03.120 [INFO][4287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:03.157446 containerd[1544]: 2024-11-12 18:07:03.132 [INFO][4287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:03.157446 containerd[1544]: 2024-11-12 18:07:03.145 [WARNING][4287] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" HandleID="k8s-pod-network.3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Workload="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:03.157446 containerd[1544]: 2024-11-12 18:07:03.145 [INFO][4287] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" HandleID="k8s-pod-network.3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Workload="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:03.157446 containerd[1544]: 2024-11-12 18:07:03.148 [INFO][4287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:03.157446 containerd[1544]: 2024-11-12 18:07:03.152 [INFO][4254] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:07:03.160634 containerd[1544]: time="2024-11-12T18:07:03.157641292Z" level=info msg="TearDown network for sandbox \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\" successfully" Nov 12 18:07:03.160634 containerd[1544]: time="2024-11-12T18:07:03.157668092Z" level=info msg="StopPodSandbox for \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\" returns successfully" Nov 12 18:07:03.160634 containerd[1544]: time="2024-11-12T18:07:03.158599335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd4854968-v99gp,Uid:4735fef9-5e79-40fb-ba5e-7a6cff344df8,Namespace:calico-system,Attempt:1,}" Nov 12 18:07:03.160258 systemd[1]: run-netns-cni\x2d0f944340\x2dfe33\x2d3311\x2d73e7\x2d9e2f4ef4d53d.mount: Deactivated successfully. Nov 12 18:07:03.176265 containerd[1544]: 2024-11-12 18:07:02.896 [INFO][4255] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:07:03.176265 containerd[1544]: 2024-11-12 18:07:02.896 [INFO][4255] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" iface="eth0" netns="/var/run/netns/cni-565a99d5-1992-2bab-4310-52a9d4e4a292" Nov 12 18:07:03.176265 containerd[1544]: 2024-11-12 18:07:02.897 [INFO][4255] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" iface="eth0" netns="/var/run/netns/cni-565a99d5-1992-2bab-4310-52a9d4e4a292" Nov 12 18:07:03.176265 containerd[1544]: 2024-11-12 18:07:02.897 [INFO][4255] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" iface="eth0" netns="/var/run/netns/cni-565a99d5-1992-2bab-4310-52a9d4e4a292" Nov 12 18:07:03.176265 containerd[1544]: 2024-11-12 18:07:02.897 [INFO][4255] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:07:03.176265 containerd[1544]: 2024-11-12 18:07:02.898 [INFO][4255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:07:03.176265 containerd[1544]: 2024-11-12 18:07:03.119 [INFO][4286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" HandleID="k8s-pod-network.8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Workload="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:03.176265 containerd[1544]: 2024-11-12 18:07:03.120 [INFO][4286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:03.176265 containerd[1544]: 2024-11-12 18:07:03.148 [INFO][4286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:03.176265 containerd[1544]: 2024-11-12 18:07:03.158 [WARNING][4286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" HandleID="k8s-pod-network.8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Workload="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:03.176265 containerd[1544]: 2024-11-12 18:07:03.160 [INFO][4286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" HandleID="k8s-pod-network.8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Workload="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:03.176265 containerd[1544]: 2024-11-12 18:07:03.162 [INFO][4286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:03.176265 containerd[1544]: 2024-11-12 18:07:03.170 [INFO][4255] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:07:03.177172 containerd[1544]: time="2024-11-12T18:07:03.176302344Z" level=info msg="TearDown network for sandbox \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\" successfully" Nov 12 18:07:03.177172 containerd[1544]: time="2024-11-12T18:07:03.176330905Z" level=info msg="StopPodSandbox for \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\" returns successfully" Nov 12 18:07:03.178408 containerd[1544]: time="2024-11-12T18:07:03.178375150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55594cbfc8-dq7v6,Uid:30f0ebab-e362-4d1a-9134-a19a9dbbe847,Namespace:calico-apiserver,Attempt:1,}" Nov 12 18:07:03.180342 systemd[1]: run-netns-cni\x2d565a99d5\x2d1992\x2d2bab\x2d4310\x2d52a9d4e4a292.mount: Deactivated successfully. Nov 12 18:07:03.188271 containerd[1544]: 2024-11-12 18:07:02.892 [INFO][4256] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:07:03.188271 containerd[1544]: 2024-11-12 18:07:02.896 [INFO][4256] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" iface="eth0" netns="/var/run/netns/cni-0d5209af-0246-698a-1364-0d069f29ad13" Nov 12 18:07:03.188271 containerd[1544]: 2024-11-12 18:07:02.896 [INFO][4256] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" iface="eth0" netns="/var/run/netns/cni-0d5209af-0246-698a-1364-0d069f29ad13" Nov 12 18:07:03.188271 containerd[1544]: 2024-11-12 18:07:02.903 [INFO][4256] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" iface="eth0" netns="/var/run/netns/cni-0d5209af-0246-698a-1364-0d069f29ad13" Nov 12 18:07:03.188271 containerd[1544]: 2024-11-12 18:07:02.903 [INFO][4256] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:07:03.188271 containerd[1544]: 2024-11-12 18:07:02.903 [INFO][4256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:07:03.188271 containerd[1544]: 2024-11-12 18:07:03.123 [INFO][4288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" HandleID="k8s-pod-network.e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Workload="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:03.188271 containerd[1544]: 2024-11-12 18:07:03.123 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:03.188271 containerd[1544]: 2024-11-12 18:07:03.162 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:03.188271 containerd[1544]: 2024-11-12 18:07:03.172 [WARNING][4288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" HandleID="k8s-pod-network.e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Workload="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:03.188271 containerd[1544]: 2024-11-12 18:07:03.172 [INFO][4288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" HandleID="k8s-pod-network.e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Workload="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:03.188271 containerd[1544]: 2024-11-12 18:07:03.176 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:03.188271 containerd[1544]: 2024-11-12 18:07:03.186 [INFO][4256] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:07:03.188696 containerd[1544]: time="2024-11-12T18:07:03.188620179Z" level=info msg="TearDown network for sandbox \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\" successfully" Nov 12 18:07:03.188728 containerd[1544]: time="2024-11-12T18:07:03.188696979Z" level=info msg="StopPodSandbox for \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\" returns successfully" Nov 12 18:07:03.189443 kubelet[2722]: E1112 18:07:03.188998 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:07:03.191053 containerd[1544]: time="2024-11-12T18:07:03.191020266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bdfqg,Uid:cdff6f1d-4fee-422c-bb63-c5707ab88ef8,Namespace:kube-system,Attempt:1,}" Nov 12 18:07:03.355626 systemd-networkd[1231]: calieb795543e29: Link UP Nov 12 18:07:03.356774 systemd-networkd[1231]: calieb795543e29: Gained carrier Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.235 [INFO][4327] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0 calico-kube-controllers-5bd4854968- calico-system 4735fef9-5e79-40fb-ba5e-7a6cff344df8 876 0 2024-11-12 18:06:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bd4854968 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5bd4854968-v99gp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calieb795543e29 [] []}} ContainerID="d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" Namespace="calico-system" Pod="calico-kube-controllers-5bd4854968-v99gp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-" Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.235 [INFO][4327] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" Namespace="calico-system" Pod="calico-kube-controllers-5bd4854968-v99gp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.291 [INFO][4372] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" HandleID="k8s-pod-network.d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" Workload="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.305 [INFO][4372] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" HandleID="k8s-pod-network.d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" Workload="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000418dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5bd4854968-v99gp", "timestamp":"2024-11-12 18:07:03.291115865 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.305 [INFO][4372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.307 [INFO][4372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.307 [INFO][4372] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.312 [INFO][4372] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" host="localhost" Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.325 [INFO][4372] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.331 [INFO][4372] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.333 [INFO][4372] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.336 [INFO][4372] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.336 [INFO][4372] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" host="localhost" Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.337 [INFO][4372] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31 Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.340 [INFO][4372] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" host="localhost" Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.346 [INFO][4372] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" host="localhost" Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.346 [INFO][4372] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.129/26] handle="k8s-pod-network.d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" host="localhost" Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.346 [INFO][4372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:03.378478 containerd[1544]: 2024-11-12 18:07:03.346 [INFO][4372] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" HandleID="k8s-pod-network.d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" Workload="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:03.379085 containerd[1544]: 2024-11-12 18:07:03.350 [INFO][4327] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" Namespace="calico-system" Pod="calico-kube-controllers-5bd4854968-v99gp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0", GenerateName:"calico-kube-controllers-5bd4854968-", Namespace:"calico-system", SelfLink:"", UID:"4735fef9-5e79-40fb-ba5e-7a6cff344df8", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd4854968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5bd4854968-v99gp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb795543e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:03.379085 containerd[1544]: 2024-11-12 18:07:03.350 [INFO][4327] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" Namespace="calico-system" Pod="calico-kube-controllers-5bd4854968-v99gp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:03.379085 containerd[1544]: 2024-11-12 18:07:03.350 [INFO][4327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb795543e29 ContainerID="d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" Namespace="calico-system" Pod="calico-kube-controllers-5bd4854968-v99gp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:03.379085 containerd[1544]: 2024-11-12 18:07:03.359 [INFO][4327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" Namespace="calico-system" 
Pod="calico-kube-controllers-5bd4854968-v99gp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:03.379085 containerd[1544]: 2024-11-12 18:07:03.362 [INFO][4327] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" Namespace="calico-system" Pod="calico-kube-controllers-5bd4854968-v99gp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0", GenerateName:"calico-kube-controllers-5bd4854968-", Namespace:"calico-system", SelfLink:"", UID:"4735fef9-5e79-40fb-ba5e-7a6cff344df8", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd4854968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31", Pod:"calico-kube-controllers-5bd4854968-v99gp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb795543e29", MAC:"86:8e:69:46:37:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:03.379085 containerd[1544]: 2024-11-12 18:07:03.372 [INFO][4327] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31" Namespace="calico-system" Pod="calico-kube-controllers-5bd4854968-v99gp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:03.388339 systemd-networkd[1231]: cali4f356f92e22: Link UP Nov 12 18:07:03.388476 systemd-networkd[1231]: cali4f356f92e22: Gained carrier Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.260 [INFO][4343] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0 calico-apiserver-55594cbfc8- calico-apiserver 30f0ebab-e362-4d1a-9134-a19a9dbbe847 877 0 2024-11-12 18:06:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55594cbfc8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55594cbfc8-dq7v6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f356f92e22 [] []}} ContainerID="7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-dq7v6" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-" Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.261 [INFO][4343] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-dq7v6" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.305 [INFO][4381] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" HandleID="k8s-pod-network.7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" Workload="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.321 [INFO][4381] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" HandleID="k8s-pod-network.7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" Workload="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f3ad0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55594cbfc8-dq7v6", "timestamp":"2024-11-12 18:07:03.305730226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.321 [INFO][4381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.346 [INFO][4381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.348 [INFO][4381] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.350 [INFO][4381] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" host="localhost" Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.356 [INFO][4381] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.365 [INFO][4381] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.367 [INFO][4381] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.369 [INFO][4381] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.369 [INFO][4381] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" host="localhost" Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.372 [INFO][4381] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14 Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.375 [INFO][4381] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" host="localhost" Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.382 [INFO][4381] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" host="localhost" Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.382 [INFO][4381] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" host="localhost" Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.382 [INFO][4381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
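The two assignments above trace Calico's block-affinity IPAM end to end: take the host-wide lock, confirm this node's affinity for the /26 block 192.168.88.128/26, claim the next free address (.129 for the kube-controllers pod, then .130 for the apiserver pod), write the block back, release the lock. A minimal Go sketch of that claim loop, standard library only -- the type and function names here are illustrative, not Calico's internal API:

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator hands out addresses from one affine CIDR block,
// mirroring the "acquire lock -> load block -> claim next IP" flow
// in the ipam log lines above. Illustrative sketch only; real Calico
// persists the block and reserves addresses this code does not.
type blockAllocator struct {
	mu    sync.Mutex // stands in for the host-wide IPAM lock
	next  netip.Addr
	block netip.Prefix
}

func newBlockAllocator(cidr string) (*blockAllocator, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return nil, err
	}
	// Skip the network address itself: allocation starts at .129
	// for 192.168.88.128/26, as in the log.
	return &blockAllocator{next: p.Addr().Next(), block: p}, nil
}

func (a *blockAllocator) claim(handle string) (netip.Addr, error) {
	a.mu.Lock()
	defer a.mu.Unlock()
	if !a.block.Contains(a.next) {
		return netip.Addr{}, fmt.Errorf("block %s exhausted for handle %s", a.block, handle)
	}
	ip := a.next
	a.next = ip.Next()
	return ip, nil
}

func main() {
	alloc, _ := newBlockAllocator("192.168.88.128/26")
	for _, h := range []string{"calico-kube-controllers", "calico-apiserver"} {
		ip, _ := alloc.claim(h)
		fmt.Println(h, "->", ip) // .129, then .130, matching the log
	}
}

A /26 gives the node 64 ordinals per affine block, which is why every pod in this capture lands inside 192.168.88.128/26 before Calico would need to claim another block.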
Nov 12 18:07:03.408049 containerd[1544]: 2024-11-12 18:07:03.382 [INFO][4381] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" HandleID="k8s-pod-network.7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" Workload="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:03.408612 containerd[1544]: 2024-11-12 18:07:03.385 [INFO][4343] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-dq7v6" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0", GenerateName:"calico-apiserver-55594cbfc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"30f0ebab-e362-4d1a-9134-a19a9dbbe847", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55594cbfc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55594cbfc8-dq7v6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f356f92e22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:03.408612 containerd[1544]: 2024-11-12 18:07:03.385 [INFO][4343] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-dq7v6" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:03.408612 containerd[1544]: 2024-11-12 18:07:03.385 [INFO][4343] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f356f92e22 ContainerID="7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-dq7v6" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:03.408612 containerd[1544]: 2024-11-12 18:07:03.387 [INFO][4343] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-dq7v6" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:03.408612 containerd[1544]: 2024-11-12 18:07:03.387 [INFO][4343] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-dq7v6" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0", GenerateName:"calico-apiserver-55594cbfc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"30f0ebab-e362-4d1a-9134-a19a9dbbe847", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55594cbfc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14", Pod:"calico-apiserver-55594cbfc8-dq7v6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f356f92e22", MAC:"0e:1f:99:5c:d5:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:03.408612 containerd[1544]: 2024-11-12 18:07:03.402 [INFO][4343] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-dq7v6" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:03.419975 containerd[1544]: time="2024-11-12T18:07:03.417714979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:07:03.419975 containerd[1544]: time="2024-11-12T18:07:03.417783139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:07:03.419975 containerd[1544]: time="2024-11-12T18:07:03.417806300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:07:03.420844 containerd[1544]: time="2024-11-12T18:07:03.419995666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:07:03.428123 systemd-networkd[1231]: cali3474ac64ad0: Link UP Nov 12 18:07:03.429753 systemd-networkd[1231]: cali3474ac64ad0: Gained carrier Nov 12 18:07:03.433680 containerd[1544]: time="2024-11-12T18:07:03.431848019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:07:03.433680 containerd[1544]: time="2024-11-12T18:07:03.431915899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:07:03.433680 containerd[1544]: time="2024-11-12T18:07:03.431932019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:07:03.433680 containerd[1544]: time="2024-11-12T18:07:03.432061179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.266 [INFO][4316] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--pgmhz-eth0 coredns-76f75df574- kube-system 28a6568b-1b3e-478e-ba5b-d89b95125e3f 878 0 2024-11-12 18:06:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-pgmhz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3474ac64ad0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" Namespace="kube-system" Pod="coredns-76f75df574-pgmhz" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pgmhz-" Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.267 [INFO][4316] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" Namespace="kube-system" Pod="coredns-76f75df574-pgmhz" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.318 [INFO][4388] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" HandleID="k8s-pod-network.3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" Workload="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.331 [INFO][4388] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" HandleID="k8s-pod-network.3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" Workload="localhost-k8s-coredns--76f75df574--pgmhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004c8b70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-pgmhz", "timestamp":"2024-11-12 18:07:03.318059021 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.331 [INFO][4388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.382 [INFO][4388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.382 [INFO][4388] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.385 [INFO][4388] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" host="localhost" Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.393 [INFO][4388] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.405 [INFO][4388] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.407 [INFO][4388] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.409 [INFO][4388] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.409 [INFO][4388] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" host="localhost" Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.411 [INFO][4388] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589 Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.416 [INFO][4388] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" host="localhost" Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.423 [INFO][4388] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" host="localhost" Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.423 [INFO][4388] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" host="localhost" Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.423 [INFO][4388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
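Each allocation above is recorded under a handle of the form k8s-pod-network.<containerID>, which is the key the earlier teardown lines release against. To audit which pod got which address from a dump like this, a short offline helper suffices -- this sketch assumes the exact "assigned addresses" phrasing shown in these lines and one journal entry per input line:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Pull the Calico IPAM assignment results out of a journal dump,
// mapping each HandleID to the IPv4 list the plugin reported.
// The pattern is keyed to the log text shown above.
var assigned = regexp.MustCompile(`Calico CNI IPAM assigned addresses IPv4=\[([^\]]+)\].*?HandleID="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := assigned.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s -> %s\n", m[2], m[1])
		}
	}
}

Fed this journal, it would print the handles for 192.168.88.129/26 through .132/26 in assignment order.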
Nov 12 18:07:03.448859 containerd[1544]: 2024-11-12 18:07:03.423 [INFO][4388] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" HandleID="k8s-pod-network.3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" Workload="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:03.449359 containerd[1544]: 2024-11-12 18:07:03.426 [INFO][4316] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" Namespace="kube-system" Pod="coredns-76f75df574-pgmhz" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pgmhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--pgmhz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"28a6568b-1b3e-478e-ba5b-d89b95125e3f", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-pgmhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3474ac64ad0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:03.449359 containerd[1544]: 2024-11-12 18:07:03.426 [INFO][4316] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" Namespace="kube-system" Pod="coredns-76f75df574-pgmhz" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:03.449359 containerd[1544]: 2024-11-12 18:07:03.426 [INFO][4316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3474ac64ad0 ContainerID="3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" Namespace="kube-system" Pod="coredns-76f75df574-pgmhz" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:03.449359 containerd[1544]: 2024-11-12 18:07:03.428 [INFO][4316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" Namespace="kube-system" Pod="coredns-76f75df574-pgmhz" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:03.449359 containerd[1544]: 2024-11-12 18:07:03.430 
[INFO][4316] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" Namespace="kube-system" Pod="coredns-76f75df574-pgmhz" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pgmhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--pgmhz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"28a6568b-1b3e-478e-ba5b-d89b95125e3f", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589", Pod:"coredns-76f75df574-pgmhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3474ac64ad0", MAC:"fe:50:ba:3e:4e:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:03.449359 containerd[1544]: 2024-11-12 18:07:03.440 [INFO][4316] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589" Namespace="kube-system" Pod="coredns-76f75df574-pgmhz" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:03.471996 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 18:07:03.474068 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 18:07:03.478404 systemd-networkd[1231]: cali8bb7660fd1a: Link UP Nov 12 18:07:03.479082 systemd-networkd[1231]: cali8bb7660fd1a: Gained carrier Nov 12 18:07:03.480247 containerd[1544]: time="2024-11-12T18:07:03.480037754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:07:03.480247 containerd[1544]: time="2024-11-12T18:07:03.480215554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:07:03.480521 containerd[1544]: time="2024-11-12T18:07:03.480480315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:07:03.481236 containerd[1544]: time="2024-11-12T18:07:03.481175797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.301 [INFO][4356] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--bdfqg-eth0 coredns-76f75df574- kube-system cdff6f1d-4fee-422c-bb63-c5707ab88ef8 875 0 2024-11-12 18:06:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-bdfqg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8bb7660fd1a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" Namespace="kube-system" Pod="coredns-76f75df574-bdfqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bdfqg-" Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.302 [INFO][4356] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" Namespace="kube-system" Pod="coredns-76f75df574-bdfqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.341 [INFO][4398] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" HandleID="k8s-pod-network.7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" Workload="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.355 [INFO][4398] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" HandleID="k8s-pod-network.7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" Workload="localhost-k8s-coredns--76f75df574--bdfqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000133d10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-bdfqg", "timestamp":"2024-11-12 18:07:03.341807727 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.355 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.423 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.423 [INFO][4398] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.426 [INFO][4398] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" host="localhost" Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.432 [INFO][4398] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.440 [INFO][4398] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.443 [INFO][4398] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.447 [INFO][4398] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.447 [INFO][4398] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" host="localhost" Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.449 [INFO][4398] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5 Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.457 [INFO][4398] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" host="localhost" Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.467 [INFO][4398] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" host="localhost" Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.468 [INFO][4398] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" host="localhost" Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.468 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
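One readability note on the coredns WorkloadEndpoint dumps above: ports are printed in Go hex notation, so Port:0x35 is 53 (the dns and dns-tcp entries) and Port:0x23c1 is 9153 (coredns's Prometheus metrics port):

package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // prints "53 9153" -- the coredns dns and metrics ports above
}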
Nov 12 18:07:03.502436 containerd[1544]: 2024-11-12 18:07:03.468 [INFO][4398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" HandleID="k8s-pod-network.7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" Workload="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:03.503168 containerd[1544]: 2024-11-12 18:07:03.475 [INFO][4356] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" Namespace="kube-system" Pod="coredns-76f75df574-bdfqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bdfqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bdfqg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cdff6f1d-4fee-422c-bb63-c5707ab88ef8", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-bdfqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8bb7660fd1a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:03.503168 containerd[1544]: 2024-11-12 18:07:03.476 [INFO][4356] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" Namespace="kube-system" Pod="coredns-76f75df574-bdfqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:03.503168 containerd[1544]: 2024-11-12 18:07:03.476 [INFO][4356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bb7660fd1a ContainerID="7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" Namespace="kube-system" Pod="coredns-76f75df574-bdfqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:03.503168 containerd[1544]: 2024-11-12 18:07:03.478 [INFO][4356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" Namespace="kube-system" Pod="coredns-76f75df574-bdfqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:03.503168 containerd[1544]: 2024-11-12 18:07:03.479 
[INFO][4356] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" Namespace="kube-system" Pod="coredns-76f75df574-bdfqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bdfqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bdfqg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cdff6f1d-4fee-422c-bb63-c5707ab88ef8", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5", Pod:"coredns-76f75df574-bdfqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8bb7660fd1a", MAC:"f2:bb:56:42:67:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:03.503168 containerd[1544]: 2024-11-12 18:07:03.499 [INFO][4356] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5" Namespace="kube-system" Pod="coredns-76f75df574-bdfqg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:03.516644 containerd[1544]: time="2024-11-12T18:07:03.516170775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55594cbfc8-dq7v6,Uid:30f0ebab-e362-4d1a-9134-a19a9dbbe847,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14\"" Nov 12 18:07:03.519931 containerd[1544]: time="2024-11-12T18:07:03.519839945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 18:07:03.523160 containerd[1544]: time="2024-11-12T18:07:03.523119594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd4854968-v99gp,Uid:4735fef9-5e79-40fb-ba5e-7a6cff344df8,Namespace:calico-system,Attempt:1,} returns sandbox id \"d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31\"" Nov 12 18:07:03.524591 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 18:07:03.545650 containerd[1544]: time="2024-11-12T18:07:03.545443696Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-pgmhz,Uid:28a6568b-1b3e-478e-ba5b-d89b95125e3f,Namespace:kube-system,Attempt:1,} returns sandbox id \"3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589\"" Nov 12 18:07:03.545924 containerd[1544]: time="2024-11-12T18:07:03.545354296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:07:03.545924 containerd[1544]: time="2024-11-12T18:07:03.545412416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:07:03.545924 containerd[1544]: time="2024-11-12T18:07:03.545426576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:07:03.546620 kubelet[2722]: E1112 18:07:03.546220 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:07:03.547312 containerd[1544]: time="2024-11-12T18:07:03.546948141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:07:03.549139 containerd[1544]: time="2024-11-12T18:07:03.549107547Z" level=info msg="CreateContainer within sandbox \"3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 18:07:03.570088 containerd[1544]: time="2024-11-12T18:07:03.570046845Z" level=info msg="CreateContainer within sandbox \"3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d3c10b0d2d1d115244adc030d6318887c5fc58648a94e2c375132cfe4c79ad24\"" Nov 12 18:07:03.570485 containerd[1544]: time="2024-11-12T18:07:03.570451526Z" level=info msg="StartContainer for \"d3c10b0d2d1d115244adc030d6318887c5fc58648a94e2c375132cfe4c79ad24\"" Nov 12 18:07:03.576668 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 18:07:03.597536 containerd[1544]: time="2024-11-12T18:07:03.597497122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bdfqg,Uid:cdff6f1d-4fee-422c-bb63-c5707ab88ef8,Namespace:kube-system,Attempt:1,} returns sandbox id \"7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5\"" Nov 12 18:07:03.598331 kubelet[2722]: E1112 18:07:03.598297 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:07:03.600263 containerd[1544]: time="2024-11-12T18:07:03.600225890Z" level=info msg="CreateContainer within sandbox \"7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 18:07:03.627587 containerd[1544]: time="2024-11-12T18:07:03.627458046Z" level=info msg="StartContainer for \"d3c10b0d2d1d115244adc030d6318887c5fc58648a94e2c375132cfe4c79ad24\" returns successfully" Nov 12 18:07:03.630747 containerd[1544]: time="2024-11-12T18:07:03.630687255Z" level=info msg="CreateContainer within sandbox \"7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"33087fa51d6ae44c90e3e9bbcb2cbe3217d240df135a060677ba1a457b7cd3d4\"" Nov 12 18:07:03.631686 containerd[1544]: time="2024-11-12T18:07:03.631246256Z" level=info msg="StartContainer for \"33087fa51d6ae44c90e3e9bbcb2cbe3217d240df135a060677ba1a457b7cd3d4\"" Nov 12 18:07:03.702370 containerd[1544]: time="2024-11-12T18:07:03.702321295Z" level=info msg="StartContainer for \"33087fa51d6ae44c90e3e9bbcb2cbe3217d240df135a060677ba1a457b7cd3d4\" returns successfully" Nov 12 18:07:03.882232 kubelet[2722]: E1112 18:07:03.881795 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:07:03.890909 kubelet[2722]: E1112 18:07:03.890878 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:07:03.899933 kubelet[2722]: I1112 18:07:03.899856 2722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pgmhz" podStartSLOduration=29.899815287 podStartE2EDuration="29.899815287s" podCreationTimestamp="2024-11-12 18:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:07:03.899690447 +0000 UTC m=+42.224673783" watchObservedRunningTime="2024-11-12 18:07:03.899815287 +0000 UTC m=+42.224798623" Nov 12 18:07:03.926652 kubelet[2722]: I1112 18:07:03.926226 2722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-bdfqg" podStartSLOduration=29.926183001 podStartE2EDuration="29.926183001s" podCreationTimestamp="2024-11-12 18:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:07:03.912210322 +0000 UTC m=+42.237193698" watchObservedRunningTime="2024-11-12 18:07:03.926183001 +0000 UTC m=+42.251166337" Nov 12 18:07:04.150107 systemd[1]: run-netns-cni\x2d0d5209af\x2d0246\x2d698a\x2d1364\x2d0d069f29ad13.mount: Deactivated successfully. 
Nov 12 18:07:04.646955 systemd-networkd[1231]: calieb795543e29: Gained IPv6LL Nov 12 18:07:04.898294 kubelet[2722]: E1112 18:07:04.897840 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:07:04.898756 kubelet[2722]: E1112 18:07:04.898677 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:07:04.966968 systemd-networkd[1231]: cali4f356f92e22: Gained IPv6LL Nov 12 18:07:05.180418 containerd[1544]: time="2024-11-12T18:07:05.180297574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:07:05.181267 containerd[1544]: time="2024-11-12T18:07:05.180831816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=39277239" Nov 12 18:07:05.182198 containerd[1544]: time="2024-11-12T18:07:05.182164579Z" level=info msg="ImageCreate event name:\"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:07:05.185368 containerd[1544]: time="2024-11-12T18:07:05.185156427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:07:05.185923 containerd[1544]: time="2024-11-12T18:07:05.185891069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"40646891\" in 1.666005324s" Nov 12 18:07:05.186086 containerd[1544]: time="2024-11-12T18:07:05.185990669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\"" Nov 12 18:07:05.188192 containerd[1544]: time="2024-11-12T18:07:05.187026472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 18:07:05.188192 containerd[1544]: time="2024-11-12T18:07:05.187897514Z" level=info msg="CreateContainer within sandbox \"7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 18:07:05.208137 containerd[1544]: time="2024-11-12T18:07:05.208084408Z" level=info msg="CreateContainer within sandbox \"7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"60d4c94ba9bde736ec32e43c9fac410128e3269e8fe68313c40c8882b3227723\"" Nov 12 18:07:05.208851 containerd[1544]: time="2024-11-12T18:07:05.208812290Z" level=info msg="StartContainer for \"60d4c94ba9bde736ec32e43c9fac410128e3269e8fe68313c40c8882b3227723\"" Nov 12 18:07:05.290997 containerd[1544]: time="2024-11-12T18:07:05.290942229Z" level=info msg="StartContainer for \"60d4c94ba9bde736ec32e43c9fac410128e3269e8fe68313c40c8882b3227723\" returns successfully" Nov 12 18:07:05.350933 systemd-networkd[1231]: cali8bb7660fd1a: Gained IPv6LL Nov 12 18:07:05.415889 
systemd-networkd[1231]: cali3474ac64ad0: Gained IPv6LL Nov 12 18:07:05.759607 containerd[1544]: time="2024-11-12T18:07:05.759334756Z" level=info msg="StopPodSandbox for \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\"" Nov 12 18:07:05.759990 containerd[1544]: time="2024-11-12T18:07:05.759968718Z" level=info msg="StopPodSandbox for \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\"" Nov 12 18:07:05.858782 containerd[1544]: 2024-11-12 18:07:05.811 [INFO][4799] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:07:05.858782 containerd[1544]: 2024-11-12 18:07:05.811 [INFO][4799] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" iface="eth0" netns="/var/run/netns/cni-935e699d-4088-afd0-d3c0-a579cf0143aa" Nov 12 18:07:05.858782 containerd[1544]: 2024-11-12 18:07:05.811 [INFO][4799] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" iface="eth0" netns="/var/run/netns/cni-935e699d-4088-afd0-d3c0-a579cf0143aa" Nov 12 18:07:05.858782 containerd[1544]: 2024-11-12 18:07:05.812 [INFO][4799] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" iface="eth0" netns="/var/run/netns/cni-935e699d-4088-afd0-d3c0-a579cf0143aa" Nov 12 18:07:05.858782 containerd[1544]: 2024-11-12 18:07:05.812 [INFO][4799] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:07:05.858782 containerd[1544]: 2024-11-12 18:07:05.812 [INFO][4799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:07:05.858782 containerd[1544]: 2024-11-12 18:07:05.840 [INFO][4813] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" HandleID="k8s-pod-network.9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Workload="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:05.858782 containerd[1544]: 2024-11-12 18:07:05.840 [INFO][4813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:05.858782 containerd[1544]: 2024-11-12 18:07:05.840 [INFO][4813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:05.858782 containerd[1544]: 2024-11-12 18:07:05.851 [WARNING][4813] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" HandleID="k8s-pod-network.9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Workload="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:05.858782 containerd[1544]: 2024-11-12 18:07:05.851 [INFO][4813] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" HandleID="k8s-pod-network.9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Workload="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:05.858782 containerd[1544]: 2024-11-12 18:07:05.852 [INFO][4813] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
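The WARNING in the teardown above ("Asked to release address but it doesn't exist. Ignoring") is the expected path for a sandbox whose allocation is already gone: release is attempted by handle, falls back to the workload ID, and treats a missing record as success, so repeated or out-of-order deletes stay harmless. A rough sketch of that two-key, ignore-missing pattern -- not Calico's datastore API:

package main

import "fmt"

// Two-key release as in the teardown above: try the handle first,
// fall back to the workload, and treat "not found" as already
// released. Illustrative only.
type ipamStore struct {
	byHandle   map[string]string // handleID -> IP
	byWorkload map[string]string // workload -> IP
}

func (s *ipamStore) release(handleID, workload string) {
	if ip, ok := s.byHandle[handleID]; ok {
		delete(s.byHandle, handleID)
		fmt.Println("released", ip, "via handle")
		return
	}
	// Mirrors: "Asked to release address but it doesn't exist. Ignoring"
	fmt.Println("no allocation for handle", handleID, "- trying workload")
	if ip, ok := s.byWorkload[workload]; ok {
		delete(s.byWorkload, workload)
		fmt.Println("released", ip, "via workload")
		return
	}
	fmt.Println("nothing to release; teardown is idempotent")
}

func main() {
	s := &ipamStore{byHandle: map[string]string{}, byWorkload: map[string]string{}}
	s.release("k8s-pod-network.9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1",
		"localhost-k8s-csi--node--driver--g9kgv-eth0")
}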
Nov 12 18:07:05.858782 containerd[1544]: 2024-11-12 18:07:05.857 [INFO][4799] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:07:05.859523 containerd[1544]: time="2024-11-12T18:07:05.859374662Z" level=info msg="TearDown network for sandbox \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\" successfully" Nov 12 18:07:05.859523 containerd[1544]: time="2024-11-12T18:07:05.859406222Z" level=info msg="StopPodSandbox for \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\" returns successfully" Nov 12 18:07:05.860340 containerd[1544]: time="2024-11-12T18:07:05.860311865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g9kgv,Uid:a282df54-f6aa-450e-a3f8-2feaec5bf123,Namespace:calico-system,Attempt:1,}" Nov 12 18:07:05.861583 systemd[1]: run-netns-cni\x2d935e699d\x2d4088\x2dafd0\x2dd3c0\x2da579cf0143aa.mount: Deactivated successfully. Nov 12 18:07:05.878336 containerd[1544]: 2024-11-12 18:07:05.831 [INFO][4798] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:07:05.878336 containerd[1544]: 2024-11-12 18:07:05.831 [INFO][4798] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" iface="eth0" netns="/var/run/netns/cni-6babf380-8bcb-9cc9-b834-93f4ed3aa85e" Nov 12 18:07:05.878336 containerd[1544]: 2024-11-12 18:07:05.831 [INFO][4798] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" iface="eth0" netns="/var/run/netns/cni-6babf380-8bcb-9cc9-b834-93f4ed3aa85e" Nov 12 18:07:05.878336 containerd[1544]: 2024-11-12 18:07:05.831 [INFO][4798] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" iface="eth0" netns="/var/run/netns/cni-6babf380-8bcb-9cc9-b834-93f4ed3aa85e" Nov 12 18:07:05.878336 containerd[1544]: 2024-11-12 18:07:05.831 [INFO][4798] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:07:05.878336 containerd[1544]: 2024-11-12 18:07:05.831 [INFO][4798] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:07:05.878336 containerd[1544]: 2024-11-12 18:07:05.862 [INFO][4819] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" HandleID="k8s-pod-network.f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Workload="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:05.878336 containerd[1544]: 2024-11-12 18:07:05.862 [INFO][4819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:05.878336 containerd[1544]: 2024-11-12 18:07:05.862 [INFO][4819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:05.878336 containerd[1544]: 2024-11-12 18:07:05.872 [WARNING][4819] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" HandleID="k8s-pod-network.f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Workload="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:05.878336 containerd[1544]: 2024-11-12 18:07:05.872 [INFO][4819] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" HandleID="k8s-pod-network.f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Workload="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:05.878336 containerd[1544]: 2024-11-12 18:07:05.873 [INFO][4819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:05.878336 containerd[1544]: 2024-11-12 18:07:05.876 [INFO][4798] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:07:05.879084 containerd[1544]: time="2024-11-12T18:07:05.878880234Z" level=info msg="TearDown network for sandbox \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\" successfully" Nov 12 18:07:05.879084 containerd[1544]: time="2024-11-12T18:07:05.878909394Z" level=info msg="StopPodSandbox for \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\" returns successfully" Nov 12 18:07:05.879523 containerd[1544]: time="2024-11-12T18:07:05.879496956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55594cbfc8-xtvnb,Uid:4c5a9c64-6667-4267-a885-4e8b234758e4,Namespace:calico-apiserver,Attempt:1,}" Nov 12 18:07:05.901422 kubelet[2722]: E1112 18:07:05.901234 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:07:05.903437 kubelet[2722]: E1112 18:07:05.901814 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 18:07:06.029581 systemd-networkd[1231]: cali9a8178c8a77: Link UP Nov 12 18:07:06.030525 systemd-networkd[1231]: cali9a8178c8a77: Gained carrier Nov 12 18:07:06.043646 kubelet[2722]: I1112 18:07:06.043584 2722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55594cbfc8-dq7v6" podStartSLOduration=22.375609461 podStartE2EDuration="24.04353547s" podCreationTimestamp="2024-11-12 18:06:42 +0000 UTC" firstStartedPulling="2024-11-12 18:07:03.518387741 +0000 UTC m=+41.843371077" lastFinishedPulling="2024-11-12 18:07:05.18631367 +0000 UTC m=+43.511297086" observedRunningTime="2024-11-12 18:07:05.913780287 +0000 UTC m=+44.238763703" watchObservedRunningTime="2024-11-12 18:07:06.04353547 +0000 UTC m=+44.368518766" Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:05.957 [INFO][4830] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--g9kgv-eth0 csi-node-driver- calico-system a282df54-f6aa-450e-a3f8-2feaec5bf123 936 0 2024-11-12 18:06:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-g9kgv eth0 
csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9a8178c8a77 [] []}} ContainerID="140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" Namespace="calico-system" Pod="csi-node-driver-g9kgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--g9kgv-" Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:05.958 [INFO][4830] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" Namespace="calico-system" Pod="csi-node-driver-g9kgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:05.985 [INFO][4859] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" HandleID="k8s-pod-network.140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" Workload="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:05.997 [INFO][4859] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" HandleID="k8s-pod-network.140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" Workload="localhost-k8s-csi--node--driver--g9kgv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000481650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-g9kgv", "timestamp":"2024-11-12 18:07:05.985776559 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:05.997 [INFO][4859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:05.998 [INFO][4859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:05.998 [INFO][4859] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:06.000 [INFO][4859] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" host="localhost" Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:06.004 [INFO][4859] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:06.008 [INFO][4859] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:06.010 [INFO][4859] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:06.012 [INFO][4859] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:06.012 [INFO][4859] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" host="localhost" Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:06.013 [INFO][4859] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085 Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:06.017 [INFO][4859] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" host="localhost" Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:06.022 [INFO][4859] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" host="localhost" Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:06.022 [INFO][4859] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" host="localhost" Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:06.023 [INFO][4859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 18:07:06.051880 containerd[1544]: 2024-11-12 18:07:06.023 [INFO][4859] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" HandleID="k8s-pod-network.140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" Workload="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:06.052776 containerd[1544]: 2024-11-12 18:07:06.025 [INFO][4830] cni-plugin/k8s.go 386: Populated endpoint ContainerID="140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" Namespace="calico-system" Pod="csi-node-driver-g9kgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--g9kgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g9kgv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a282df54-f6aa-450e-a3f8-2feaec5bf123", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-g9kgv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a8178c8a77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:06.052776 containerd[1544]: 2024-11-12 18:07:06.025 [INFO][4830] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" Namespace="calico-system" Pod="csi-node-driver-g9kgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:06.052776 containerd[1544]: 2024-11-12 18:07:06.025 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a8178c8a77 ContainerID="140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" Namespace="calico-system" Pod="csi-node-driver-g9kgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:06.052776 containerd[1544]: 2024-11-12 18:07:06.030 [INFO][4830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" Namespace="calico-system" Pod="csi-node-driver-g9kgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:06.052776 containerd[1544]: 2024-11-12 18:07:06.031 [INFO][4830] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" Namespace="calico-system" Pod="csi-node-driver-g9kgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--g9kgv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g9kgv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a282df54-f6aa-450e-a3f8-2feaec5bf123", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085", Pod:"csi-node-driver-g9kgv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a8178c8a77", MAC:"26:22:9f:49:28:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:06.052776 containerd[1544]: 2024-11-12 18:07:06.044 [INFO][4830] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085" Namespace="calico-system" Pod="csi-node-driver-g9kgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:06.073702 systemd-networkd[1231]: cali77a1de321bc: Link UP Nov 12 18:07:06.074231 systemd-networkd[1231]: cali77a1de321bc: Gained carrier Nov 12 18:07:06.088178 containerd[1544]: time="2024-11-12T18:07:06.088050866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:07:06.088178 containerd[1544]: time="2024-11-12T18:07:06.088114426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:07:06.088178 containerd[1544]: time="2024-11-12T18:07:06.088130626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:07:06.088390 containerd[1544]: time="2024-11-12T18:07:06.088247507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:05.958 [INFO][4840] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0 calico-apiserver-55594cbfc8- calico-apiserver 4c5a9c64-6667-4267-a885-4e8b234758e4 937 0 2024-11-12 18:06:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55594cbfc8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55594cbfc8-xtvnb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali77a1de321bc [] []}} ContainerID="630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-xtvnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-" Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:05.958 [INFO][4840] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-xtvnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:05.989 [INFO][4860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" HandleID="k8s-pod-network.630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" Workload="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.003 [INFO][4860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" HandleID="k8s-pod-network.630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" Workload="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003054c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55594cbfc8-xtvnb", "timestamp":"2024-11-12 18:07:05.989606129 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.004 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.023 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.023 [INFO][4860] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.026 [INFO][4860] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" host="localhost" Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.033 [INFO][4860] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.043 [INFO][4860] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.046 [INFO][4860] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.048 [INFO][4860] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.049 [INFO][4860] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" host="localhost" Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.051 [INFO][4860] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8 Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.058 [INFO][4860] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" host="localhost" Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.067 [INFO][4860] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" host="localhost" Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.067 [INFO][4860] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" host="localhost" Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.067 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 18:07:06.091688 containerd[1544]: 2024-11-12 18:07:06.067 [INFO][4860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" HandleID="k8s-pod-network.630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" Workload="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:06.092225 containerd[1544]: 2024-11-12 18:07:06.070 [INFO][4840] cni-plugin/k8s.go 386: Populated endpoint ContainerID="630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-xtvnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0", GenerateName:"calico-apiserver-55594cbfc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c5a9c64-6667-4267-a885-4e8b234758e4", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55594cbfc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55594cbfc8-xtvnb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77a1de321bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:06.092225 containerd[1544]: 2024-11-12 18:07:06.070 [INFO][4840] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-xtvnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:06.092225 containerd[1544]: 2024-11-12 18:07:06.070 [INFO][4840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77a1de321bc ContainerID="630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-xtvnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:06.092225 containerd[1544]: 2024-11-12 18:07:06.074 [INFO][4840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-xtvnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:06.092225 containerd[1544]: 2024-11-12 18:07:06.074 [INFO][4840] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-xtvnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0", GenerateName:"calico-apiserver-55594cbfc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c5a9c64-6667-4267-a885-4e8b234758e4", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55594cbfc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8", Pod:"calico-apiserver-55594cbfc8-xtvnb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77a1de321bc", MAC:"fe:4f:32:d1:d6:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:06.092225 containerd[1544]: 2024-11-12 18:07:06.085 [INFO][4840] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8" Namespace="calico-apiserver" Pod="calico-apiserver-55594cbfc8-xtvnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:06.118844 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 18:07:06.130937 containerd[1544]: time="2024-11-12T18:07:06.130755697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 18:07:06.131557 containerd[1544]: time="2024-11-12T18:07:06.131368859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 18:07:06.131557 containerd[1544]: time="2024-11-12T18:07:06.131452339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:07:06.131759 containerd[1544]: time="2024-11-12T18:07:06.131675700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 18:07:06.147441 containerd[1544]: time="2024-11-12T18:07:06.147372620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g9kgv,Uid:a282df54-f6aa-450e-a3f8-2feaec5bf123,Namespace:calico-system,Attempt:1,} returns sandbox id \"140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085\"" Nov 12 18:07:06.176948 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 18:07:06.204747 systemd[1]: run-netns-cni\x2d6babf380\x2d8bcb\x2d9cc9\x2db834\x2d93f4ed3aa85e.mount: Deactivated successfully. Nov 12 18:07:06.205423 containerd[1544]: time="2024-11-12T18:07:06.205390571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55594cbfc8-xtvnb,Uid:4c5a9c64-6667-4267-a885-4e8b234758e4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8\"" Nov 12 18:07:06.212892 containerd[1544]: time="2024-11-12T18:07:06.212856551Z" level=info msg="CreateContainer within sandbox \"630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 18:07:06.232078 containerd[1544]: time="2024-11-12T18:07:06.232039081Z" level=info msg="CreateContainer within sandbox \"630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"73062f706bff8115c37cdad126679b553258f8398023ba003545b46d9d5b63da\"" Nov 12 18:07:06.234175 containerd[1544]: time="2024-11-12T18:07:06.234070686Z" level=info msg="StartContainer for \"73062f706bff8115c37cdad126679b553258f8398023ba003545b46d9d5b63da\"" Nov 12 18:07:06.336715 containerd[1544]: time="2024-11-12T18:07:06.336529593Z" level=info msg="StartContainer for \"73062f706bff8115c37cdad126679b553258f8398023ba003545b46d9d5b63da\" returns successfully" Nov 12 18:07:06.646148 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:53220.service - OpenSSH per-connection server daemon (10.0.0.1:53220). Nov 12 18:07:06.697613 sshd[5025]: Accepted publickey for core from 10.0.0.1 port 53220 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:07:06.699970 sshd[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:07:06.705988 systemd-logind[1523]: New session 13 of user core. Nov 12 18:07:06.711060 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 12 18:07:06.908597 kubelet[2722]: I1112 18:07:06.908261 2722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 18:07:06.923553 kubelet[2722]: I1112 18:07:06.923191 2722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55594cbfc8-xtvnb" podStartSLOduration=24.923141479 podStartE2EDuration="24.923141479s" podCreationTimestamp="2024-11-12 18:06:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 18:07:06.923011399 +0000 UTC m=+45.247994735" watchObservedRunningTime="2024-11-12 18:07:06.923141479 +0000 UTC m=+45.248124815" Nov 12 18:07:06.940859 containerd[1544]: time="2024-11-12T18:07:06.940812565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:07:06.941377 containerd[1544]: time="2024-11-12T18:07:06.941339967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=31961371" Nov 12 18:07:06.943902 containerd[1544]: time="2024-11-12T18:07:06.943854253Z" level=info msg="ImageCreate event name:\"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:07:06.948575 containerd[1544]: time="2024-11-12T18:07:06.948383105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:07:06.949504 containerd[1544]: time="2024-11-12T18:07:06.949398268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"33330975\" in 1.762336196s" Nov 12 18:07:06.949504 containerd[1544]: time="2024-11-12T18:07:06.949432468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\"" Nov 12 18:07:06.950980 containerd[1544]: time="2024-11-12T18:07:06.950953992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 18:07:06.958364 containerd[1544]: time="2024-11-12T18:07:06.958241451Z" level=info msg="CreateContainer within sandbox \"d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 18:07:06.959160 sshd[5025]: pam_unix(sshd:session): session closed for user core Nov 12 18:07:06.967400 containerd[1544]: time="2024-11-12T18:07:06.967227514Z" level=info msg="CreateContainer within sandbox \"d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"831775d143d5e98cb9a1bbead0df899372f48a8f43230c72c4558fcc1d250e67\"" Nov 12 18:07:06.968105 containerd[1544]: time="2024-11-12T18:07:06.968062836Z" level=info msg="StartContainer for \"831775d143d5e98cb9a1bbead0df899372f48a8f43230c72c4558fcc1d250e67\"" Nov 12 18:07:06.969412 systemd[1]: Started 
sshd@13-10.0.0.144:22-10.0.0.1:53224.service - OpenSSH per-connection server daemon (10.0.0.1:53224). Nov 12 18:07:06.969914 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:53220.service: Deactivated successfully. Nov 12 18:07:06.971533 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 18:07:06.974167 systemd-logind[1523]: Session 13 logged out. Waiting for processes to exit. Nov 12 18:07:06.975809 systemd-logind[1523]: Removed session 13. Nov 12 18:07:07.007485 sshd[5043]: Accepted publickey for core from 10.0.0.1 port 53224 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:07:07.008385 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:07:07.016941 systemd-logind[1523]: New session 14 of user core. Nov 12 18:07:07.025413 containerd[1544]: time="2024-11-12T18:07:07.025363624Z" level=info msg="StartContainer for \"831775d143d5e98cb9a1bbead0df899372f48a8f43230c72c4558fcc1d250e67\" returns successfully" Nov 12 18:07:07.027144 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 18:07:07.355642 sshd[5043]: pam_unix(sshd:session): session closed for user core Nov 12 18:07:07.366050 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:53232.service - OpenSSH per-connection server daemon (10.0.0.1:53232). Nov 12 18:07:07.366459 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:53224.service: Deactivated successfully. Nov 12 18:07:07.369744 systemd-logind[1523]: Session 14 logged out. Waiting for processes to exit. Nov 12 18:07:07.370834 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 18:07:07.375935 systemd-logind[1523]: Removed session 14. Nov 12 18:07:07.416690 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 53232 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:07:07.418557 sshd[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:07:07.424357 systemd-logind[1523]: New session 15 of user core. Nov 12 18:07:07.433088 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 12 18:07:07.463248 systemd-networkd[1231]: cali77a1de321bc: Gained IPv6LL Nov 12 18:07:07.915516 kubelet[2722]: I1112 18:07:07.915474 2722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 18:07:07.932808 kubelet[2722]: I1112 18:07:07.932751 2722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bd4854968-v99gp" podStartSLOduration=22.507285863 podStartE2EDuration="25.932712734s" podCreationTimestamp="2024-11-12 18:06:42 +0000 UTC" firstStartedPulling="2024-11-12 18:07:03.524392438 +0000 UTC m=+41.849375774" lastFinishedPulling="2024-11-12 18:07:06.949819309 +0000 UTC m=+45.274802645" observedRunningTime="2024-11-12 18:07:07.931620051 +0000 UTC m=+46.256603387" watchObservedRunningTime="2024-11-12 18:07:07.932712734 +0000 UTC m=+46.257696070" Nov 12 18:07:07.976921 systemd-networkd[1231]: cali9a8178c8a77: Gained IPv6LL Nov 12 18:07:08.169657 containerd[1544]: time="2024-11-12T18:07:08.169518888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:07:08.170070 containerd[1544]: time="2024-11-12T18:07:08.170029409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7464731" Nov 12 18:07:08.171511 containerd[1544]: time="2024-11-12T18:07:08.171469373Z" level=info msg="ImageCreate event name:\"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:07:08.173434 containerd[1544]: time="2024-11-12T18:07:08.173393818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:07:08.174458 containerd[1544]: time="2024-11-12T18:07:08.174419940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"8834367\" in 1.223431548s" Nov 12 18:07:08.174491 containerd[1544]: time="2024-11-12T18:07:08.174457780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\"" Nov 12 18:07:08.177346 containerd[1544]: time="2024-11-12T18:07:08.177304307Z" level=info msg="CreateContainer within sandbox \"140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 18:07:08.223992 containerd[1544]: time="2024-11-12T18:07:08.223943584Z" level=info msg="CreateContainer within sandbox \"140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5af170d1e4bdb29f0086ee28951730af115ce1caab621b57b452ac7c243ce33d\"" Nov 12 18:07:08.224999 containerd[1544]: time="2024-11-12T18:07:08.224966826Z" level=info msg="StartContainer for \"5af170d1e4bdb29f0086ee28951730af115ce1caab621b57b452ac7c243ce33d\"" Nov 12 18:07:08.281939 containerd[1544]: time="2024-11-12T18:07:08.281891248Z" level=info msg="StartContainer for \"5af170d1e4bdb29f0086ee28951730af115ce1caab621b57b452ac7c243ce33d\" returns successfully" Nov 12 
18:07:08.282899 containerd[1544]: time="2024-11-12T18:07:08.282865571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 18:07:09.055903 sshd[5092]: pam_unix(sshd:session): session closed for user core Nov 12 18:07:09.081309 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:53248.service - OpenSSH per-connection server daemon (10.0.0.1:53248). Nov 12 18:07:09.081753 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:53232.service: Deactivated successfully. Nov 12 18:07:09.091745 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 18:07:09.096342 systemd-logind[1523]: Session 15 logged out. Waiting for processes to exit. Nov 12 18:07:09.100308 systemd-logind[1523]: Removed session 15. Nov 12 18:07:09.131100 sshd[5192]: Accepted publickey for core from 10.0.0.1 port 53248 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:07:09.133298 sshd[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:07:09.137980 systemd-logind[1523]: New session 16 of user core. Nov 12 18:07:09.146090 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 18:07:09.255839 containerd[1544]: time="2024-11-12T18:07:09.255744503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:07:09.257053 containerd[1544]: time="2024-11-12T18:07:09.256691386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=9883360" Nov 12 18:07:09.258510 containerd[1544]: time="2024-11-12T18:07:09.258113789Z" level=info msg="ImageCreate event name:\"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:07:09.260564 containerd[1544]: time="2024-11-12T18:07:09.260499475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 18:07:09.261996 containerd[1544]: time="2024-11-12T18:07:09.261958358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11252948\" in 979.057947ms" Nov 12 18:07:09.261996 containerd[1544]: time="2024-11-12T18:07:09.261993199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\"" Nov 12 18:07:09.264685 containerd[1544]: time="2024-11-12T18:07:09.264561325Z" level=info msg="CreateContainer within sandbox \"140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 18:07:09.283817 containerd[1544]: time="2024-11-12T18:07:09.283751892Z" level=info msg="CreateContainer within sandbox \"140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1e828b3c0710182f041d59e69b80a42d0b76ab6f148c5f21f0774afe50eceb50\"" Nov 12 
18:07:09.286428 containerd[1544]: time="2024-11-12T18:07:09.285487736Z" level=info msg="StartContainer for \"1e828b3c0710182f041d59e69b80a42d0b76ab6f148c5f21f0774afe50eceb50\"" Nov 12 18:07:09.365115 containerd[1544]: time="2024-11-12T18:07:09.365017450Z" level=info msg="StartContainer for \"1e828b3c0710182f041d59e69b80a42d0b76ab6f148c5f21f0774afe50eceb50\" returns successfully" Nov 12 18:07:09.525586 sshd[5192]: pam_unix(sshd:session): session closed for user core Nov 12 18:07:09.539079 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:53264.service - OpenSSH per-connection server daemon (10.0.0.1:53264). Nov 12 18:07:09.540062 systemd[1]: sshd@15-10.0.0.144:22-10.0.0.1:53248.service: Deactivated successfully. Nov 12 18:07:09.543246 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 18:07:09.544253 systemd-logind[1523]: Session 16 logged out. Waiting for processes to exit. Nov 12 18:07:09.545435 systemd-logind[1523]: Removed session 16. Nov 12 18:07:09.581676 sshd[5246]: Accepted publickey for core from 10.0.0.1 port 53264 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:07:09.584206 sshd[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:07:09.589134 systemd-logind[1523]: New session 17 of user core. Nov 12 18:07:09.600195 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 18:07:09.749949 sshd[5246]: pam_unix(sshd:session): session closed for user core Nov 12 18:07:09.753622 systemd-logind[1523]: Session 17 logged out. Waiting for processes to exit. Nov 12 18:07:09.753801 systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:53264.service: Deactivated successfully. Nov 12 18:07:09.755700 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 18:07:09.756215 systemd-logind[1523]: Removed session 17. Nov 12 18:07:09.837527 kubelet[2722]: I1112 18:07:09.837470 2722 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 18:07:09.839833 kubelet[2722]: I1112 18:07:09.839804 2722 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 18:07:09.939370 kubelet[2722]: I1112 18:07:09.939322 2722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-g9kgv" podStartSLOduration=24.825835958 podStartE2EDuration="27.939278693s" podCreationTimestamp="2024-11-12 18:06:42 +0000 UTC" firstStartedPulling="2024-11-12 18:07:06.148835384 +0000 UTC m=+44.473818720" lastFinishedPulling="2024-11-12 18:07:09.262278119 +0000 UTC m=+47.587261455" observedRunningTime="2024-11-12 18:07:09.938533172 +0000 UTC m=+48.263516508" watchObservedRunningTime="2024-11-12 18:07:09.939278693 +0000 UTC m=+48.264262069" Nov 12 18:07:14.765040 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:39828.service - OpenSSH per-connection server daemon (10.0.0.1:39828). Nov 12 18:07:14.823959 sshd[5272]: Accepted publickey for core from 10.0.0.1 port 39828 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:07:14.825357 sshd[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:07:14.829456 systemd-logind[1523]: New session 18 of user core. Nov 12 18:07:14.835033 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 12 18:07:14.964294 sshd[5272]: pam_unix(sshd:session): session closed for user core Nov 12 18:07:14.966863 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:39828.service: Deactivated successfully. Nov 12 18:07:14.969343 systemd-logind[1523]: Session 18 logged out. Waiting for processes to exit. Nov 12 18:07:14.969662 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 18:07:14.970767 systemd-logind[1523]: Removed session 18. Nov 12 18:07:15.073871 kubelet[2722]: I1112 18:07:15.073699 2722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 18:07:19.987623 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:39830.service - OpenSSH per-connection server daemon (10.0.0.1:39830). Nov 12 18:07:20.025045 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 39830 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:07:20.026559 sshd[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:07:20.031139 systemd-logind[1523]: New session 19 of user core. Nov 12 18:07:20.041094 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 18:07:20.108281 kernel: hrtimer: interrupt took 5413851 ns Nov 12 18:07:20.206177 sshd[5298]: pam_unix(sshd:session): session closed for user core Nov 12 18:07:20.209994 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:39830.service: Deactivated successfully. Nov 12 18:07:20.212447 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 18:07:20.213399 systemd-logind[1523]: Session 19 logged out. Waiting for processes to exit. Nov 12 18:07:20.214194 systemd-logind[1523]: Removed session 19. Nov 12 18:07:21.748448 containerd[1544]: time="2024-11-12T18:07:21.748407614Z" level=info msg="StopPodSandbox for \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\"" Nov 12 18:07:21.817861 containerd[1544]: 2024-11-12 18:07:21.785 [WARNING][5328] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0", GenerateName:"calico-apiserver-55594cbfc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c5a9c64-6667-4267-a885-4e8b234758e4", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55594cbfc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8", Pod:"calico-apiserver-55594cbfc8-xtvnb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77a1de321bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:21.817861 containerd[1544]: 2024-11-12 18:07:21.785 [INFO][5328] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:07:21.817861 containerd[1544]: 2024-11-12 18:07:21.785 [INFO][5328] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" iface="eth0" netns="" Nov 12 18:07:21.817861 containerd[1544]: 2024-11-12 18:07:21.785 [INFO][5328] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:07:21.817861 containerd[1544]: 2024-11-12 18:07:21.785 [INFO][5328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:07:21.817861 containerd[1544]: 2024-11-12 18:07:21.805 [INFO][5337] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" HandleID="k8s-pod-network.f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Workload="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:21.817861 containerd[1544]: 2024-11-12 18:07:21.805 [INFO][5337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:21.817861 containerd[1544]: 2024-11-12 18:07:21.805 [INFO][5337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:21.817861 containerd[1544]: 2024-11-12 18:07:21.813 [WARNING][5337] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" HandleID="k8s-pod-network.f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Workload="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:21.817861 containerd[1544]: 2024-11-12 18:07:21.813 [INFO][5337] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" HandleID="k8s-pod-network.f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Workload="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:21.817861 containerd[1544]: 2024-11-12 18:07:21.814 [INFO][5337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:21.817861 containerd[1544]: 2024-11-12 18:07:21.816 [INFO][5328] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:07:21.817861 containerd[1544]: time="2024-11-12T18:07:21.817726355Z" level=info msg="TearDown network for sandbox \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\" successfully" Nov 12 18:07:21.817861 containerd[1544]: time="2024-11-12T18:07:21.817750355Z" level=info msg="StopPodSandbox for \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\" returns successfully" Nov 12 18:07:21.818755 containerd[1544]: time="2024-11-12T18:07:21.818608597Z" level=info msg="RemovePodSandbox for \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\"" Nov 12 18:07:21.828175 containerd[1544]: time="2024-11-12T18:07:21.828131297Z" level=info msg="Forcibly stopping sandbox \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\"" Nov 12 18:07:21.895899 containerd[1544]: 2024-11-12 18:07:21.864 [WARNING][5359] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0", GenerateName:"calico-apiserver-55594cbfc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c5a9c64-6667-4267-a885-4e8b234758e4", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55594cbfc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"630c0210a786a5155cd6b285b4c7ef8cdaa1b36243b9a9e484e0b1b3f14cf0d8", Pod:"calico-apiserver-55594cbfc8-xtvnb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77a1de321bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:21.895899 containerd[1544]: 2024-11-12 18:07:21.864 [INFO][5359] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:07:21.895899 containerd[1544]: 2024-11-12 18:07:21.864 [INFO][5359] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" iface="eth0" netns="" Nov 12 18:07:21.895899 containerd[1544]: 2024-11-12 18:07:21.864 [INFO][5359] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:07:21.895899 containerd[1544]: 2024-11-12 18:07:21.864 [INFO][5359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:07:21.895899 containerd[1544]: 2024-11-12 18:07:21.883 [INFO][5367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" HandleID="k8s-pod-network.f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Workload="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:21.895899 containerd[1544]: 2024-11-12 18:07:21.883 [INFO][5367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:21.895899 containerd[1544]: 2024-11-12 18:07:21.883 [INFO][5367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:21.895899 containerd[1544]: 2024-11-12 18:07:21.890 [WARNING][5367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" HandleID="k8s-pod-network.f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Workload="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:21.895899 containerd[1544]: 2024-11-12 18:07:21.891 [INFO][5367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" HandleID="k8s-pod-network.f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Workload="localhost-k8s-calico--apiserver--55594cbfc8--xtvnb-eth0" Nov 12 18:07:21.895899 containerd[1544]: 2024-11-12 18:07:21.892 [INFO][5367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:21.895899 containerd[1544]: 2024-11-12 18:07:21.893 [INFO][5359] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb" Nov 12 18:07:21.896304 containerd[1544]: time="2024-11-12T18:07:21.895974755Z" level=info msg="TearDown network for sandbox \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\" successfully" Nov 12 18:07:21.917206 containerd[1544]: time="2024-11-12T18:07:21.917158078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 18:07:21.917297 containerd[1544]: time="2024-11-12T18:07:21.917234239Z" level=info msg="RemovePodSandbox \"f259eb094611118e4e55addf3084f4f827dcb661ddac1a2758959b9aa1a0b1eb\" returns successfully" Nov 12 18:07:21.917804 containerd[1544]: time="2024-11-12T18:07:21.917738760Z" level=info msg="StopPodSandbox for \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\"" Nov 12 18:07:21.991549 containerd[1544]: 2024-11-12 18:07:21.957 [WARNING][5389] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0", GenerateName:"calico-kube-controllers-5bd4854968-", Namespace:"calico-system", SelfLink:"", UID:"4735fef9-5e79-40fb-ba5e-7a6cff344df8", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd4854968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31", Pod:"calico-kube-controllers-5bd4854968-v99gp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb795543e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:21.991549 containerd[1544]: 2024-11-12 18:07:21.957 [INFO][5389] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:07:21.991549 containerd[1544]: 2024-11-12 18:07:21.957 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" iface="eth0" netns="" Nov 12 18:07:21.991549 containerd[1544]: 2024-11-12 18:07:21.957 [INFO][5389] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:07:21.991549 containerd[1544]: 2024-11-12 18:07:21.957 [INFO][5389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:07:21.991549 containerd[1544]: 2024-11-12 18:07:21.979 [INFO][5396] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" HandleID="k8s-pod-network.3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Workload="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:21.991549 containerd[1544]: 2024-11-12 18:07:21.979 [INFO][5396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:21.991549 containerd[1544]: 2024-11-12 18:07:21.979 [INFO][5396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:21.991549 containerd[1544]: 2024-11-12 18:07:21.987 [WARNING][5396] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" HandleID="k8s-pod-network.3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Workload="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:21.991549 containerd[1544]: 2024-11-12 18:07:21.987 [INFO][5396] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" HandleID="k8s-pod-network.3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Workload="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:21.991549 containerd[1544]: 2024-11-12 18:07:21.988 [INFO][5396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:21.991549 containerd[1544]: 2024-11-12 18:07:21.990 [INFO][5389] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:07:21.991549 containerd[1544]: time="2024-11-12T18:07:21.991432510Z" level=info msg="TearDown network for sandbox \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\" successfully" Nov 12 18:07:21.991549 containerd[1544]: time="2024-11-12T18:07:21.991456750Z" level=info msg="StopPodSandbox for \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\" returns successfully" Nov 12 18:07:21.994042 containerd[1544]: time="2024-11-12T18:07:21.992275192Z" level=info msg="RemovePodSandbox for \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\"" Nov 12 18:07:21.994042 containerd[1544]: time="2024-11-12T18:07:21.992305992Z" level=info msg="Forcibly stopping sandbox \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\"" Nov 12 18:07:22.056230 containerd[1544]: 2024-11-12 18:07:22.025 [WARNING][5418] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0", GenerateName:"calico-kube-controllers-5bd4854968-", Namespace:"calico-system", SelfLink:"", UID:"4735fef9-5e79-40fb-ba5e-7a6cff344df8", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd4854968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d576d28916eb9d7ef2eab403ad10e17a7a047958789cc6d7f351ed4ff7e72f31", Pod:"calico-kube-controllers-5bd4854968-v99gp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb795543e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:22.056230 containerd[1544]: 2024-11-12 18:07:22.026 [INFO][5418] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:07:22.056230 containerd[1544]: 2024-11-12 18:07:22.026 [INFO][5418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" iface="eth0" netns="" Nov 12 18:07:22.056230 containerd[1544]: 2024-11-12 18:07:22.026 [INFO][5418] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:07:22.056230 containerd[1544]: 2024-11-12 18:07:22.026 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:07:22.056230 containerd[1544]: 2024-11-12 18:07:22.044 [INFO][5425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" HandleID="k8s-pod-network.3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Workload="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:22.056230 containerd[1544]: 2024-11-12 18:07:22.044 [INFO][5425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:22.056230 containerd[1544]: 2024-11-12 18:07:22.044 [INFO][5425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:22.056230 containerd[1544]: 2024-11-12 18:07:22.052 [WARNING][5425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" HandleID="k8s-pod-network.3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Workload="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:22.056230 containerd[1544]: 2024-11-12 18:07:22.052 [INFO][5425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" HandleID="k8s-pod-network.3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Workload="localhost-k8s-calico--kube--controllers--5bd4854968--v99gp-eth0" Nov 12 18:07:22.056230 containerd[1544]: 2024-11-12 18:07:22.053 [INFO][5425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:22.056230 containerd[1544]: 2024-11-12 18:07:22.054 [INFO][5418] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71" Nov 12 18:07:22.056230 containerd[1544]: time="2024-11-12T18:07:22.056189121Z" level=info msg="TearDown network for sandbox \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\" successfully" Nov 12 18:07:22.059583 containerd[1544]: time="2024-11-12T18:07:22.059551248Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 18:07:22.059739 containerd[1544]: time="2024-11-12T18:07:22.059720648Z" level=info msg="RemovePodSandbox \"3e08a350796cd3c05e59f6b4fce917cb018afabde2e22e66e177933545607f71\" returns successfully" Nov 12 18:07:22.060544 containerd[1544]: time="2024-11-12T18:07:22.060518170Z" level=info msg="StopPodSandbox for \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\"" Nov 12 18:07:22.123686 containerd[1544]: 2024-11-12 18:07:22.092 [WARNING][5448] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g9kgv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a282df54-f6aa-450e-a3f8-2feaec5bf123", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085", Pod:"csi-node-driver-g9kgv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a8178c8a77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:22.123686 containerd[1544]: 2024-11-12 18:07:22.093 [INFO][5448] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:07:22.123686 containerd[1544]: 2024-11-12 18:07:22.093 [INFO][5448] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" iface="eth0" netns="" Nov 12 18:07:22.123686 containerd[1544]: 2024-11-12 18:07:22.093 [INFO][5448] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:07:22.123686 containerd[1544]: 2024-11-12 18:07:22.093 [INFO][5448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:07:22.123686 containerd[1544]: 2024-11-12 18:07:22.111 [INFO][5455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" HandleID="k8s-pod-network.9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Workload="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:22.123686 containerd[1544]: 2024-11-12 18:07:22.111 [INFO][5455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:22.123686 containerd[1544]: 2024-11-12 18:07:22.111 [INFO][5455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:22.123686 containerd[1544]: 2024-11-12 18:07:22.119 [WARNING][5455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" HandleID="k8s-pod-network.9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Workload="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:22.123686 containerd[1544]: 2024-11-12 18:07:22.119 [INFO][5455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" HandleID="k8s-pod-network.9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Workload="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:22.123686 containerd[1544]: 2024-11-12 18:07:22.120 [INFO][5455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:22.123686 containerd[1544]: 2024-11-12 18:07:22.122 [INFO][5448] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:07:22.124617 containerd[1544]: time="2024-11-12T18:07:22.123706098Z" level=info msg="TearDown network for sandbox \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\" successfully" Nov 12 18:07:22.124617 containerd[1544]: time="2024-11-12T18:07:22.123729658Z" level=info msg="StopPodSandbox for \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\" returns successfully" Nov 12 18:07:22.124617 containerd[1544]: time="2024-11-12T18:07:22.124171579Z" level=info msg="RemovePodSandbox for \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\"" Nov 12 18:07:22.124617 containerd[1544]: time="2024-11-12T18:07:22.124214299Z" level=info msg="Forcibly stopping sandbox \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\"" Nov 12 18:07:22.185746 containerd[1544]: 2024-11-12 18:07:22.155 [WARNING][5477] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g9kgv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a282df54-f6aa-450e-a3f8-2feaec5bf123", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"140374c0662123bc731c504ec9cf93f1bea0786138c7befa2f6a623498dfd085", Pod:"csi-node-driver-g9kgv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a8178c8a77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:22.185746 containerd[1544]: 2024-11-12 18:07:22.155 [INFO][5477] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:07:22.185746 containerd[1544]: 2024-11-12 18:07:22.155 [INFO][5477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" iface="eth0" netns="" Nov 12 18:07:22.185746 containerd[1544]: 2024-11-12 18:07:22.155 [INFO][5477] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:07:22.185746 containerd[1544]: 2024-11-12 18:07:22.155 [INFO][5477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:07:22.185746 containerd[1544]: 2024-11-12 18:07:22.172 [INFO][5485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" HandleID="k8s-pod-network.9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Workload="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:22.185746 containerd[1544]: 2024-11-12 18:07:22.172 [INFO][5485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:22.185746 containerd[1544]: 2024-11-12 18:07:22.172 [INFO][5485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:22.185746 containerd[1544]: 2024-11-12 18:07:22.181 [WARNING][5485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" HandleID="k8s-pod-network.9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Workload="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:22.185746 containerd[1544]: 2024-11-12 18:07:22.181 [INFO][5485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" HandleID="k8s-pod-network.9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Workload="localhost-k8s-csi--node--driver--g9kgv-eth0" Nov 12 18:07:22.185746 containerd[1544]: 2024-11-12 18:07:22.182 [INFO][5485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:22.185746 containerd[1544]: 2024-11-12 18:07:22.184 [INFO][5477] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1" Nov 12 18:07:22.186189 containerd[1544]: time="2024-11-12T18:07:22.185782503Z" level=info msg="TearDown network for sandbox \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\" successfully" Nov 12 18:07:22.188437 containerd[1544]: time="2024-11-12T18:07:22.188412748Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 18:07:22.188493 containerd[1544]: time="2024-11-12T18:07:22.188468468Z" level=info msg="RemovePodSandbox \"9cfe419b23a40cc19430d3cd4af04b1afced3d638f264875fd1a1a3728d3b8b1\" returns successfully" Nov 12 18:07:22.189196 containerd[1544]: time="2024-11-12T18:07:22.189174790Z" level=info msg="StopPodSandbox for \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\"" Nov 12 18:07:22.251009 containerd[1544]: 2024-11-12 18:07:22.220 [WARNING][5508] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bdfqg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cdff6f1d-4fee-422c-bb63-c5707ab88ef8", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5", Pod:"coredns-76f75df574-bdfqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8bb7660fd1a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:22.251009 containerd[1544]: 2024-11-12 18:07:22.221 [INFO][5508] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:07:22.251009 containerd[1544]: 2024-11-12 18:07:22.221 [INFO][5508] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" iface="eth0" netns="" Nov 12 18:07:22.251009 containerd[1544]: 2024-11-12 18:07:22.221 [INFO][5508] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:07:22.251009 containerd[1544]: 2024-11-12 18:07:22.221 [INFO][5508] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:07:22.251009 containerd[1544]: 2024-11-12 18:07:22.238 [INFO][5515] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" HandleID="k8s-pod-network.e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Workload="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:22.251009 containerd[1544]: 2024-11-12 18:07:22.238 [INFO][5515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:22.251009 containerd[1544]: 2024-11-12 18:07:22.238 [INFO][5515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 18:07:22.251009 containerd[1544]: 2024-11-12 18:07:22.246 [WARNING][5515] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" HandleID="k8s-pod-network.e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Workload="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:22.251009 containerd[1544]: 2024-11-12 18:07:22.246 [INFO][5515] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" HandleID="k8s-pod-network.e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Workload="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:22.251009 containerd[1544]: 2024-11-12 18:07:22.248 [INFO][5515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:22.251009 containerd[1544]: 2024-11-12 18:07:22.249 [INFO][5508] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:07:22.251394 containerd[1544]: time="2024-11-12T18:07:22.251039795Z" level=info msg="TearDown network for sandbox \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\" successfully" Nov 12 18:07:22.251394 containerd[1544]: time="2024-11-12T18:07:22.251064435Z" level=info msg="StopPodSandbox for \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\" returns successfully" Nov 12 18:07:22.251571 containerd[1544]: time="2024-11-12T18:07:22.251545516Z" level=info msg="RemovePodSandbox for \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\"" Nov 12 18:07:22.251601 containerd[1544]: time="2024-11-12T18:07:22.251581116Z" level=info msg="Forcibly stopping sandbox \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\"" Nov 12 18:07:22.312054 containerd[1544]: 2024-11-12 18:07:22.283 [WARNING][5537] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bdfqg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cdff6f1d-4fee-422c-bb63-c5707ab88ef8", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7733d0950ed3195d9a76723d9941b47d987eb7c1c5aab630796c825bf477bbf5", Pod:"coredns-76f75df574-bdfqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8bb7660fd1a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:22.312054 containerd[1544]: 2024-11-12 18:07:22.283 [INFO][5537] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:07:22.312054 containerd[1544]: 2024-11-12 18:07:22.283 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" iface="eth0" netns="" Nov 12 18:07:22.312054 containerd[1544]: 2024-11-12 18:07:22.283 [INFO][5537] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:07:22.312054 containerd[1544]: 2024-11-12 18:07:22.283 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:07:22.312054 containerd[1544]: 2024-11-12 18:07:22.300 [INFO][5545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" HandleID="k8s-pod-network.e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Workload="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:22.312054 containerd[1544]: 2024-11-12 18:07:22.300 [INFO][5545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:22.312054 containerd[1544]: 2024-11-12 18:07:22.300 [INFO][5545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 18:07:22.312054 containerd[1544]: 2024-11-12 18:07:22.308 [WARNING][5545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" HandleID="k8s-pod-network.e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Workload="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:22.312054 containerd[1544]: 2024-11-12 18:07:22.308 [INFO][5545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" HandleID="k8s-pod-network.e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Workload="localhost-k8s-coredns--76f75df574--bdfqg-eth0" Nov 12 18:07:22.312054 containerd[1544]: 2024-11-12 18:07:22.309 [INFO][5545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:22.312054 containerd[1544]: 2024-11-12 18:07:22.310 [INFO][5537] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094" Nov 12 18:07:22.312054 containerd[1544]: time="2024-11-12T18:07:22.312030838Z" level=info msg="TearDown network for sandbox \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\" successfully" Nov 12 18:07:22.315094 containerd[1544]: time="2024-11-12T18:07:22.315046924Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 18:07:22.315146 containerd[1544]: time="2024-11-12T18:07:22.315107644Z" level=info msg="RemovePodSandbox \"e63d08ddff85ded624098cd4654716a962d9e0996ec5034699bbb902b1c24094\" returns successfully" Nov 12 18:07:22.315536 containerd[1544]: time="2024-11-12T18:07:22.315514405Z" level=info msg="StopPodSandbox for \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\"" Nov 12 18:07:22.375985 containerd[1544]: 2024-11-12 18:07:22.346 [WARNING][5568] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0", GenerateName:"calico-apiserver-55594cbfc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"30f0ebab-e362-4d1a-9134-a19a9dbbe847", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55594cbfc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14", Pod:"calico-apiserver-55594cbfc8-dq7v6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f356f92e22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:22.375985 containerd[1544]: 2024-11-12 18:07:22.346 [INFO][5568] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:07:22.375985 containerd[1544]: 2024-11-12 18:07:22.346 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" iface="eth0" netns="" Nov 12 18:07:22.375985 containerd[1544]: 2024-11-12 18:07:22.346 [INFO][5568] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:07:22.375985 containerd[1544]: 2024-11-12 18:07:22.346 [INFO][5568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:07:22.375985 containerd[1544]: 2024-11-12 18:07:22.363 [INFO][5575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" HandleID="k8s-pod-network.8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Workload="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:22.375985 containerd[1544]: 2024-11-12 18:07:22.363 [INFO][5575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:22.375985 containerd[1544]: 2024-11-12 18:07:22.363 [INFO][5575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:22.375985 containerd[1544]: 2024-11-12 18:07:22.371 [WARNING][5575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" HandleID="k8s-pod-network.8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Workload="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:22.375985 containerd[1544]: 2024-11-12 18:07:22.371 [INFO][5575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" HandleID="k8s-pod-network.8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Workload="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:22.375985 containerd[1544]: 2024-11-12 18:07:22.373 [INFO][5575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:22.375985 containerd[1544]: 2024-11-12 18:07:22.374 [INFO][5568] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:07:22.376475 containerd[1544]: time="2024-11-12T18:07:22.376011527Z" level=info msg="TearDown network for sandbox \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\" successfully" Nov 12 18:07:22.376475 containerd[1544]: time="2024-11-12T18:07:22.376037927Z" level=info msg="StopPodSandbox for \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\" returns successfully" Nov 12 18:07:22.376524 containerd[1544]: time="2024-11-12T18:07:22.376494368Z" level=info msg="RemovePodSandbox for \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\"" Nov 12 18:07:22.376547 containerd[1544]: time="2024-11-12T18:07:22.376522848Z" level=info msg="Forcibly stopping sandbox \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\"" Nov 12 18:07:22.439622 containerd[1544]: 2024-11-12 18:07:22.408 [WARNING][5599] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0", GenerateName:"calico-apiserver-55594cbfc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"30f0ebab-e362-4d1a-9134-a19a9dbbe847", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55594cbfc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f1b45b4af1cee5e35b57c0ac85c0fb98e2a3f25f8e7b34d0183d98f58e22b14", Pod:"calico-apiserver-55594cbfc8-dq7v6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f356f92e22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:22.439622 containerd[1544]: 2024-11-12 18:07:22.408 [INFO][5599] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:07:22.439622 containerd[1544]: 2024-11-12 18:07:22.408 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" iface="eth0" netns="" Nov 12 18:07:22.439622 containerd[1544]: 2024-11-12 18:07:22.408 [INFO][5599] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:07:22.439622 containerd[1544]: 2024-11-12 18:07:22.408 [INFO][5599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:07:22.439622 containerd[1544]: 2024-11-12 18:07:22.427 [INFO][5607] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" HandleID="k8s-pod-network.8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Workload="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:22.439622 containerd[1544]: 2024-11-12 18:07:22.427 [INFO][5607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:22.439622 containerd[1544]: 2024-11-12 18:07:22.427 [INFO][5607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 18:07:22.439622 containerd[1544]: 2024-11-12 18:07:22.435 [WARNING][5607] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" HandleID="k8s-pod-network.8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Workload="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:22.439622 containerd[1544]: 2024-11-12 18:07:22.435 [INFO][5607] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" HandleID="k8s-pod-network.8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Workload="localhost-k8s-calico--apiserver--55594cbfc8--dq7v6-eth0" Nov 12 18:07:22.439622 containerd[1544]: 2024-11-12 18:07:22.436 [INFO][5607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:22.439622 containerd[1544]: 2024-11-12 18:07:22.438 [INFO][5599] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c" Nov 12 18:07:22.440009 containerd[1544]: time="2024-11-12T18:07:22.439648336Z" level=info msg="TearDown network for sandbox \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\" successfully" Nov 12 18:07:22.442115 containerd[1544]: time="2024-11-12T18:07:22.442085141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 18:07:22.442165 containerd[1544]: time="2024-11-12T18:07:22.442150621Z" level=info msg="RemovePodSandbox \"8cd1af823ec2239d982b6ca6b7f058dbfb39b2a1b46a87df61850fa3f8f8162c\" returns successfully" Nov 12 18:07:22.442616 containerd[1544]: time="2024-11-12T18:07:22.442594102Z" level=info msg="StopPodSandbox for \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\"" Nov 12 18:07:22.505107 containerd[1544]: 2024-11-12 18:07:22.474 [WARNING][5630] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--pgmhz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"28a6568b-1b3e-478e-ba5b-d89b95125e3f", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589", Pod:"coredns-76f75df574-pgmhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3474ac64ad0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:22.505107 containerd[1544]: 2024-11-12 18:07:22.474 [INFO][5630] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:07:22.505107 containerd[1544]: 2024-11-12 18:07:22.474 [INFO][5630] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" iface="eth0" netns="" Nov 12 18:07:22.505107 containerd[1544]: 2024-11-12 18:07:22.474 [INFO][5630] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:07:22.505107 containerd[1544]: 2024-11-12 18:07:22.474 [INFO][5630] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:07:22.505107 containerd[1544]: 2024-11-12 18:07:22.492 [INFO][5637] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" HandleID="k8s-pod-network.4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Workload="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:22.505107 containerd[1544]: 2024-11-12 18:07:22.492 [INFO][5637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:22.505107 containerd[1544]: 2024-11-12 18:07:22.492 [INFO][5637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 18:07:22.505107 containerd[1544]: 2024-11-12 18:07:22.501 [WARNING][5637] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" HandleID="k8s-pod-network.4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Workload="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:22.505107 containerd[1544]: 2024-11-12 18:07:22.501 [INFO][5637] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" HandleID="k8s-pod-network.4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Workload="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:22.505107 containerd[1544]: 2024-11-12 18:07:22.502 [INFO][5637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:22.505107 containerd[1544]: 2024-11-12 18:07:22.503 [INFO][5630] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:07:22.505466 containerd[1544]: time="2024-11-12T18:07:22.505149788Z" level=info msg="TearDown network for sandbox \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\" successfully" Nov 12 18:07:22.505466 containerd[1544]: time="2024-11-12T18:07:22.505172748Z" level=info msg="StopPodSandbox for \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\" returns successfully" Nov 12 18:07:22.505603 containerd[1544]: time="2024-11-12T18:07:22.505578229Z" level=info msg="RemovePodSandbox for \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\"" Nov 12 18:07:22.505628 containerd[1544]: time="2024-11-12T18:07:22.505608669Z" level=info msg="Forcibly stopping sandbox \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\"" Nov 12 18:07:22.566148 containerd[1544]: 2024-11-12 18:07:22.536 [WARNING][5659] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--pgmhz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"28a6568b-1b3e-478e-ba5b-d89b95125e3f", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 18, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3fe876a8bf422d8e59e9ba94e5cd43b62d5980667045641c4af9e5dc00a0e589", Pod:"coredns-76f75df574-pgmhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3474ac64ad0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 18:07:22.566148 containerd[1544]: 2024-11-12 18:07:22.536 [INFO][5659] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:07:22.566148 containerd[1544]: 2024-11-12 18:07:22.536 [INFO][5659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" iface="eth0" netns="" Nov 12 18:07:22.566148 containerd[1544]: 2024-11-12 18:07:22.536 [INFO][5659] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:07:22.566148 containerd[1544]: 2024-11-12 18:07:22.536 [INFO][5659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:07:22.566148 containerd[1544]: 2024-11-12 18:07:22.554 [INFO][5667] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" HandleID="k8s-pod-network.4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Workload="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:22.566148 containerd[1544]: 2024-11-12 18:07:22.554 [INFO][5667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 18:07:22.566148 containerd[1544]: 2024-11-12 18:07:22.554 [INFO][5667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 18:07:22.566148 containerd[1544]: 2024-11-12 18:07:22.562 [WARNING][5667] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" HandleID="k8s-pod-network.4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Workload="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:22.566148 containerd[1544]: 2024-11-12 18:07:22.562 [INFO][5667] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" HandleID="k8s-pod-network.4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Workload="localhost-k8s-coredns--76f75df574--pgmhz-eth0" Nov 12 18:07:22.566148 containerd[1544]: 2024-11-12 18:07:22.563 [INFO][5667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 18:07:22.566148 containerd[1544]: 2024-11-12 18:07:22.564 [INFO][5659] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16" Nov 12 18:07:22.566905 containerd[1544]: time="2024-11-12T18:07:22.566118032Z" level=info msg="TearDown network for sandbox \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\" successfully" Nov 12 18:07:22.569947 containerd[1544]: time="2024-11-12T18:07:22.569913839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 18:07:22.569988 containerd[1544]: time="2024-11-12T18:07:22.569974279Z" level=info msg="RemovePodSandbox \"4faa8c2f7c3f193665a09c08d379caca87cdea17b70c5d5332dc209933bc2f16\" returns successfully" Nov 12 18:07:25.219014 systemd[1]: Started sshd@19-10.0.0.144:22-10.0.0.1:54028.service - OpenSSH per-connection server daemon (10.0.0.1:54028). Nov 12 18:07:25.260916 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 54028 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 18:07:25.262668 sshd[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 18:07:25.266533 systemd-logind[1523]: New session 20 of user core. Nov 12 18:07:25.279096 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 18:07:25.435040 sshd[5674]: pam_unix(sshd:session): session closed for user core Nov 12 18:07:25.438217 systemd[1]: sshd@19-10.0.0.144:22-10.0.0.1:54028.service: Deactivated successfully. Nov 12 18:07:25.440113 systemd-logind[1523]: Session 20 logged out. Waiting for processes to exit. Nov 12 18:07:25.440125 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 18:07:25.441516 systemd-logind[1523]: Removed session 20. Nov 12 18:07:26.632656 kubelet[2722]: E1112 18:07:26.632613 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
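Taken together, the 18:07:21-18:07:22 block reads like kubelet's periodic sandbox garbage collection replaying StopPodSandbox/RemovePodSandbox over sandboxes whose network resources are already gone, which is why every pass ends in "returns successfully" and the intervening warnings are noise. When triaging a journal like this it can help to collapse the sweep to one line per sandbox; the sketch below (reading journal text on stdin, with a regular expression tuned to the exact message shape above rather than to any containerd API) is one way to do that.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches entries like: msg="RemovePodSandbox \"<64-hex-id>\" returns successfully"
// as they appear in the journal text, where the inner quotes are escaped.
var removed = regexp.MustCompile(`msg="RemovePodSandbox \\"([0-9a-f]{12})[0-9a-f]*\\" returns successfully"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024) // journal lines run very long
	counts := map[string]int{}
	for sc.Scan() {
		for _, m := range removed.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++ // key on the first 12 hex digits of the sandbox ID
		}
	}
	for id, n := range counts {
		fmt.Printf("%s...: sandbox removed successfully (%d matching entries)\n", id, n)
	}
}

Run over this section it should reduce the sweep to six lines, one per sandbox ID (f259eb094611, 3e08a350796c, 9cfe419b23a4, e63d08ddff85, 8cd1af823ec2, 4faa8c2f7c3f). The closing kubelet error is unrelated to the sweep: the libc resolver only honors the first three nameserver entries in resolv.conf, so kubelet trims the node's list (here to 1.1.1.1 1.0.0.1 8.8.8.8) and logs "Nameserver limits exceeded"; this is typically harmless unless one of the dropped servers was the only one able to resolve a needed zone.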