Jun 25 18:43:11.895743 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jun 25 18:43:11.895763 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Jun 25 17:19:03 -00 2024
Jun 25 18:43:11.895772 kernel: KASLR enabled
Jun 25 18:43:11.895778 kernel: efi: EFI v2.7 by EDK II
Jun 25 18:43:11.895783 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jun 25 18:43:11.895789 kernel: random: crng init done
Jun 25 18:43:11.895803 kernel: ACPI: Early table checksum verification disabled
Jun 25 18:43:11.895809 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jun 25 18:43:11.895816 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jun 25 18:43:11.895823 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:11.895830 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:11.895836 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:11.895842 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:11.895848 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:11.895855 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:11.895863 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:11.895869 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:11.895875 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:11.895881 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jun 25 18:43:11.895888 kernel: NUMA: Failed to initialise from firmware
Jun 25 18:43:11.895894 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jun 25 18:43:11.895901 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jun 25 18:43:11.895907 kernel: Zone ranges:
Jun 25 18:43:11.895913 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Jun 25 18:43:11.895919 kernel:   DMA32    empty
Jun 25 18:43:11.895926 kernel:   Normal   empty
Jun 25 18:43:11.895933 kernel: Movable zone start for each node
Jun 25 18:43:11.895939 kernel: Early memory node ranges
Jun 25 18:43:11.895945 kernel:   node   0: [mem 0x0000000040000000-0x00000000d976ffff]
Jun 25 18:43:11.895952 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jun 25 18:43:11.895958 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jun 25 18:43:11.895964 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jun 25 18:43:11.895970 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jun 25 18:43:11.895977 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jun 25 18:43:11.895983 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jun 25 18:43:11.895989 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jun 25 18:43:11.895995 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jun 25 18:43:11.896003 kernel: psci: probing for conduit method from ACPI.
Jun 25 18:43:11.896009 kernel: psci: PSCIv1.1 detected in firmware.
Jun 25 18:43:11.896015 kernel: psci: Using standard PSCI v0.2 function IDs
Jun 25 18:43:11.896024 kernel: psci: Trusted OS migration not required
Jun 25 18:43:11.896031 kernel: psci: SMC Calling Convention v1.1
Jun 25 18:43:11.896038 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jun 25 18:43:11.896046 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jun 25 18:43:11.896052 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jun 25 18:43:11.896059 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jun 25 18:43:11.896066 kernel: Detected PIPT I-cache on CPU0
Jun 25 18:43:11.896073 kernel: CPU features: detected: GIC system register CPU interface
Jun 25 18:43:11.896079 kernel: CPU features: detected: Hardware dirty bit management
Jun 25 18:43:11.896086 kernel: CPU features: detected: Spectre-v4
Jun 25 18:43:11.896093 kernel: CPU features: detected: Spectre-BHB
Jun 25 18:43:11.896099 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jun 25 18:43:11.896106 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jun 25 18:43:11.896114 kernel: CPU features: detected: ARM erratum 1418040
Jun 25 18:43:11.896121 kernel: alternatives: applying boot alternatives
Jun 25 18:43:11.896128 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f
Jun 25 18:43:11.896135 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 25 18:43:11.896142 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 25 18:43:11.896149 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 25 18:43:11.896156 kernel: Fallback order for Node 0: 0
Jun 25 18:43:11.896162 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Jun 25 18:43:11.896169 kernel: Policy zone: DMA
Jun 25 18:43:11.896176 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 25 18:43:11.896182 kernel: software IO TLB: area num 4.
Jun 25 18:43:11.896190 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jun 25 18:43:11.896197 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Jun 25 18:43:11.896204 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 25 18:43:11.896211 kernel: trace event string verifier disabled
Jun 25 18:43:11.896218 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 25 18:43:11.896225 kernel: rcu: RCU event tracing is enabled.
Jun 25 18:43:11.896232 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 25 18:43:11.896239 kernel: Trampoline variant of Tasks RCU enabled.
Jun 25 18:43:11.896245 kernel: Tracing variant of Tasks RCU enabled.
Jun 25 18:43:11.896252 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 25 18:43:11.896259 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 25 18:43:11.896266 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jun 25 18:43:11.896274 kernel: GICv3: 256 SPIs implemented
Jun 25 18:43:11.896280 kernel: GICv3: 0 Extended SPIs implemented
Jun 25 18:43:11.896287 kernel: Root IRQ handler: gic_handle_irq
Jun 25 18:43:11.896294 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jun 25 18:43:11.896301 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jun 25 18:43:11.896307 kernel: ITS [mem 0x08080000-0x0809ffff]
Jun 25 18:43:11.896314 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Jun 25 18:43:11.896321 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Jun 25 18:43:11.896328 kernel: GICv3: using LPI property table @0x00000000400f0000
Jun 25 18:43:11.896334 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jun 25 18:43:11.896341 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 25 18:43:11.896349 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 25 18:43:11.896356 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jun 25 18:43:11.896362 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jun 25 18:43:11.896369 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jun 25 18:43:11.896376 kernel: arm-pv: using stolen time PV
Jun 25 18:43:11.896383 kernel: Console: colour dummy device 80x25
Jun 25 18:43:11.896390 kernel: ACPI: Core revision 20230628
Jun 25 18:43:11.896397 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jun 25 18:43:11.896404 kernel: pid_max: default: 32768 minimum: 301
Jun 25 18:43:11.896411 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jun 25 18:43:11.896419 kernel: SELinux:  Initializing.
Jun 25 18:43:11.896426 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 18:43:11.896433 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 18:43:11.896440 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:43:11.896447 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:43:11.896454 kernel: rcu: Hierarchical SRCU implementation.
Jun 25 18:43:11.896460 kernel: rcu: Max phase no-delay instances is 400.
Jun 25 18:43:11.896467 kernel: Platform MSI: ITS@0x8080000 domain created
Jun 25 18:43:11.896474 kernel: PCI/MSI: ITS@0x8080000 domain created
Jun 25 18:43:11.896482 kernel: Remapping and enabling EFI services.
Jun 25 18:43:11.896489 kernel: smp: Bringing up secondary CPUs ...
Jun 25 18:43:11.896495 kernel: Detected PIPT I-cache on CPU1
Jun 25 18:43:11.896503 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jun 25 18:43:11.896510 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jun 25 18:43:11.896516 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 25 18:43:11.896523 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jun 25 18:43:11.896530 kernel: Detected PIPT I-cache on CPU2
Jun 25 18:43:11.896537 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jun 25 18:43:11.896544 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jun 25 18:43:11.896552 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 25 18:43:11.896559 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jun 25 18:43:11.896570 kernel: Detected PIPT I-cache on CPU3
Jun 25 18:43:11.896579 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jun 25 18:43:11.896586 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jun 25 18:43:11.896593 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 25 18:43:11.896600 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jun 25 18:43:11.896607 kernel: smp: Brought up 1 node, 4 CPUs
Jun 25 18:43:11.896615 kernel: SMP: Total of 4 processors activated.
Jun 25 18:43:11.896623 kernel: CPU features: detected: 32-bit EL0 Support
Jun 25 18:43:11.896631 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jun 25 18:43:11.896656 kernel: CPU features: detected: Common not Private translations
Jun 25 18:43:11.896664 kernel: CPU features: detected: CRC32 instructions
Jun 25 18:43:11.896671 kernel: CPU features: detected: Enhanced Virtualization Traps
Jun 25 18:43:11.896679 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jun 25 18:43:11.896686 kernel: CPU features: detected: LSE atomic instructions
Jun 25 18:43:11.896693 kernel: CPU features: detected: Privileged Access Never
Jun 25 18:43:11.896702 kernel: CPU features: detected: RAS Extension Support
Jun 25 18:43:11.896710 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jun 25 18:43:11.896717 kernel: CPU: All CPU(s) started at EL1
Jun 25 18:43:11.896724 kernel: alternatives: applying system-wide alternatives
Jun 25 18:43:11.896731 kernel: devtmpfs: initialized
Jun 25 18:43:11.896738 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 25 18:43:11.896746 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 25 18:43:11.896753 kernel: pinctrl core: initialized pinctrl subsystem
Jun 25 18:43:11.896760 kernel: SMBIOS 3.0.0 present.
Jun 25 18:43:11.896769 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jun 25 18:43:11.896776 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 25 18:43:11.896783 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jun 25 18:43:11.896791 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jun 25 18:43:11.896802 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jun 25 18:43:11.896809 kernel: audit: initializing netlink subsys (disabled)
Jun 25 18:43:11.896816 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jun 25 18:43:11.896824 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 25 18:43:11.896831 kernel: cpuidle: using governor menu
Jun 25 18:43:11.896840 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jun 25 18:43:11.896847 kernel: ASID allocator initialised with 32768 entries
Jun 25 18:43:11.896854 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 25 18:43:11.896861 kernel: Serial: AMBA PL011 UART driver
Jun 25 18:43:11.896868 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jun 25 18:43:11.896876 kernel: Modules: 0 pages in range for non-PLT usage
Jun 25 18:43:11.896883 kernel: Modules: 509120 pages in range for PLT usage
Jun 25 18:43:11.896890 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 25 18:43:11.896897 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jun 25 18:43:11.896906 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jun 25 18:43:11.896913 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jun 25 18:43:11.896920 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 25 18:43:11.896928 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jun 25 18:43:11.896935 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jun 25 18:43:11.896942 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jun 25 18:43:11.896949 kernel: ACPI: Added _OSI(Module Device)
Jun 25 18:43:11.896956 kernel: ACPI: Added _OSI(Processor Device)
Jun 25 18:43:11.896964 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 25 18:43:11.896972 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 25 18:43:11.896979 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 25 18:43:11.896986 kernel: ACPI: Interpreter enabled
Jun 25 18:43:11.896994 kernel: ACPI: Using GIC for interrupt routing
Jun 25 18:43:11.897001 kernel: ACPI: MCFG table detected, 1 entries
Jun 25 18:43:11.897008 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jun 25 18:43:11.897015 kernel: printk: console [ttyAMA0] enabled
Jun 25 18:43:11.897023 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 25 18:43:11.897152 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 25 18:43:11.897228 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jun 25 18:43:11.897309 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jun 25 18:43:11.897376 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jun 25 18:43:11.897441 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jun 25 18:43:11.897450 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jun 25 18:43:11.897458 kernel: PCI host bridge to bus 0000:00
Jun 25 18:43:11.897527 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jun 25 18:43:11.897589 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jun 25 18:43:11.897715 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jun 25 18:43:11.897777 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 25 18:43:11.897866 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jun 25 18:43:11.897939 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jun 25 18:43:11.898008 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jun 25 18:43:11.898078 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jun 25 18:43:11.898149 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jun 25 18:43:11.898218 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jun 25 18:43:11.898287 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jun 25 18:43:11.898353 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jun 25 18:43:11.898410 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jun 25 18:43:11.898467 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jun 25 18:43:11.898527 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jun 25 18:43:11.898536 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jun 25 18:43:11.898544 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jun 25 18:43:11.898551 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jun 25 18:43:11.898571 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jun 25 18:43:11.898578 kernel: iommu: Default domain type: Translated
Jun 25 18:43:11.898586 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jun 25 18:43:11.898593 kernel: efivars: Registered efivars operations
Jun 25 18:43:11.898600 kernel: vgaarb: loaded
Jun 25 18:43:11.898609 kernel: clocksource: Switched to clocksource arch_sys_counter
Jun 25 18:43:11.898616 kernel: VFS: Disk quotas dquot_6.6.0
Jun 25 18:43:11.898623 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 25 18:43:11.898631 kernel: pnp: PnP ACPI init
Jun 25 18:43:11.898790 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jun 25 18:43:11.898813 kernel: pnp: PnP ACPI: found 1 devices
Jun 25 18:43:11.898821 kernel: NET: Registered PF_INET protocol family
Jun 25 18:43:11.898828 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 25 18:43:11.898839 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 25 18:43:11.898847 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 25 18:43:11.898854 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 25 18:43:11.898861 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 25 18:43:11.898869 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 25 18:43:11.898876 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 18:43:11.898883 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 18:43:11.898891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 25 18:43:11.898898 kernel: PCI: CLS 0 bytes, default 64
Jun 25 18:43:11.898906 kernel: kvm [1]: HYP mode not available
Jun 25 18:43:11.898914 kernel: Initialise system trusted keyrings
Jun 25 18:43:11.898921 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 25 18:43:11.898928 kernel: Key type asymmetric registered
Jun 25 18:43:11.898935 kernel: Asymmetric key parser 'x509' registered
Jun 25 18:43:11.898943 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 25 18:43:11.898950 kernel: io scheduler mq-deadline registered
Jun 25 18:43:11.898958 kernel: io scheduler kyber registered
Jun 25 18:43:11.898965 kernel: io scheduler bfq registered
Jun 25 18:43:11.898974 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jun 25 18:43:11.898981 kernel: ACPI: button: Power Button [PWRB]
Jun 25 18:43:11.898989 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jun 25 18:43:11.899069 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jun 25 18:43:11.899080 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 25 18:43:11.899087 kernel: thunder_xcv, ver 1.0
Jun 25 18:43:11.899099 kernel: thunder_bgx, ver 1.0
Jun 25 18:43:11.899107 kernel: nicpf, ver 1.0
Jun 25 18:43:11.899114 kernel: nicvf, ver 1.0
Jun 25 18:43:11.899206 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jun 25 18:43:11.899276 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T18:43:11 UTC (1719340991)
Jun 25 18:43:11.899289 kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 25 18:43:11.899299 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jun 25 18:43:11.899309 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jun 25 18:43:11.899316 kernel: watchdog: Hard watchdog permanently disabled
Jun 25 18:43:11.899324 kernel: NET: Registered PF_INET6 protocol family
Jun 25 18:43:11.899331 kernel: Segment Routing with IPv6
Jun 25 18:43:11.899341 kernel: In-situ OAM (IOAM) with IPv6
Jun 25 18:43:11.899348 kernel: NET: Registered PF_PACKET protocol family
Jun 25 18:43:11.899355 kernel: Key type dns_resolver registered
Jun 25 18:43:11.899362 kernel: registered taskstats version 1
Jun 25 18:43:11.899369 kernel: Loading compiled-in X.509 certificates
Jun 25 18:43:11.899377 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 751918e575d02f96b0daadd44b8f442a8c39ecd3'
Jun 25 18:43:11.899384 kernel: Key type .fscrypt registered
Jun 25 18:43:11.899391 kernel: Key type fscrypt-provisioning registered
Jun 25 18:43:11.899398 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 25 18:43:11.899407 kernel: ima: Allocated hash algorithm: sha1
Jun 25 18:43:11.899414 kernel: ima: No architecture policies found
Jun 25 18:43:11.899435 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jun 25 18:43:11.899442 kernel: clk: Disabling unused clocks
Jun 25 18:43:11.899450 kernel: Freeing unused kernel memory: 39040K
Jun 25 18:43:11.899457 kernel: Run /init as init process
Jun 25 18:43:11.899464 kernel:   with arguments:
Jun 25 18:43:11.899471 kernel:     /init
Jun 25 18:43:11.899479 kernel:   with environment:
Jun 25 18:43:11.899488 kernel:     HOME=/
Jun 25 18:43:11.899495 kernel:     TERM=linux
Jun 25 18:43:11.899502 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 25 18:43:11.899512 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 18:43:11.899521 systemd[1]: Detected virtualization kvm.
Jun 25 18:43:11.899529 systemd[1]: Detected architecture arm64.
Jun 25 18:43:11.899537 systemd[1]: Running in initrd.
Jun 25 18:43:11.899546 systemd[1]: No hostname configured, using default hostname.
Jun 25 18:43:11.899553 systemd[1]: Hostname set to .
Jun 25 18:43:11.899562 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 18:43:11.899569 systemd[1]: Queued start job for default target initrd.target.
Jun 25 18:43:11.899577 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:43:11.899585 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:43:11.899594 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 25 18:43:11.899602 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 18:43:11.899611 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 25 18:43:11.899619 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 25 18:43:11.899629 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 25 18:43:11.899643 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 25 18:43:11.899652 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:43:11.899668 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:43:11.899676 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:43:11.899686 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 18:43:11.899694 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:43:11.899702 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:43:11.899710 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:43:11.899718 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:43:11.899726 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 25 18:43:11.899734 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 25 18:43:11.899741 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:43:11.899749 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:43:11.899759 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:43:11.899767 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 18:43:11.899775 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 25 18:43:11.899783 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 18:43:11.899790 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 25 18:43:11.899804 systemd[1]: Starting systemd-fsck-usr.service...
Jun 25 18:43:11.899812 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 18:43:11.899820 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 18:43:11.899830 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:43:11.899838 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 25 18:43:11.899846 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:43:11.899854 systemd[1]: Finished systemd-fsck-usr.service.
Jun 25 18:43:11.899862 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 25 18:43:11.899872 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:43:11.899880 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 18:43:11.899905 systemd-journald[238]: Collecting audit messages is disabled.
Jun 25 18:43:11.899924 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:43:11.899934 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 18:43:11.899943 systemd-journald[238]: Journal started
Jun 25 18:43:11.899961 systemd-journald[238]: Runtime Journal (/run/log/journal/dbf85b52f11048afb6c53e3188982ae1) is 5.9M, max 47.3M, 41.4M free.
Jun 25 18:43:11.884830 systemd-modules-load[239]: Inserted module 'overlay'
Jun 25 18:43:11.902683 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 25 18:43:11.904148 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 18:43:11.905359 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jun 25 18:43:11.906332 kernel: Bridge firewalling registered
Jun 25 18:43:11.906911 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:43:11.910236 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 18:43:11.912101 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 18:43:11.913221 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:43:11.921331 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:43:11.923789 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 25 18:43:11.925884 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:43:11.926954 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:43:11.929872 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 18:43:11.938613 dracut-cmdline[272]: dracut-dracut-053
Jun 25 18:43:11.940894 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f
Jun 25 18:43:11.955190 systemd-resolved[277]: Positive Trust Anchors:
Jun 25 18:43:11.955208 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 18:43:11.955240 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 25 18:43:11.959811 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jun 25 18:43:11.960722 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 18:43:11.962586 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:43:12.004666 kernel: SCSI subsystem initialized
Jun 25 18:43:12.009656 kernel: Loading iSCSI transport class v2.0-870.
Jun 25 18:43:12.016658 kernel: iscsi: registered transport (tcp)
Jun 25 18:43:12.031661 kernel: iscsi: registered transport (qla4xxx)
Jun 25 18:43:12.031677 kernel: QLogic iSCSI HBA Driver
Jun 25 18:43:12.072700 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 25 18:43:12.080825 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 25 18:43:12.095876 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 25 18:43:12.095985 kernel: device-mapper: uevent: version 1.0.3
Jun 25 18:43:12.096011 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 25 18:43:12.144710 kernel: raid6: neonx8   gen() 15665 MB/s
Jun 25 18:43:12.161672 kernel: raid6: neonx4   gen() 15638 MB/s
Jun 25 18:43:12.178658 kernel: raid6: neonx2   gen() 13242 MB/s
Jun 25 18:43:12.195663 kernel: raid6: neonx1   gen() 10467 MB/s
Jun 25 18:43:12.212664 kernel: raid6: int64x8  gen()  6949 MB/s
Jun 25 18:43:12.229662 kernel: raid6: int64x4  gen()  7341 MB/s
Jun 25 18:43:12.246662 kernel: raid6: int64x2  gen()  6127 MB/s
Jun 25 18:43:12.263661 kernel: raid6: int64x1  gen()  5056 MB/s
Jun 25 18:43:12.263688 kernel: raid6: using algorithm neonx8 gen() 15665 MB/s
Jun 25 18:43:12.280670 kernel: raid6: .... xor() 11902 MB/s, rmw enabled
Jun 25 18:43:12.280691 kernel: raid6: using neon recovery algorithm
Jun 25 18:43:12.285834 kernel: xor: measuring software checksum speed
Jun 25 18:43:12.285851 kernel:    8regs           : 19859 MB/sec
Jun 25 18:43:12.286692 kernel:    32regs          : 19711 MB/sec
Jun 25 18:43:12.287857 kernel:    arm64_neon      : 26823 MB/sec
Jun 25 18:43:12.287870 kernel: xor: using function: arm64_neon (26823 MB/sec)
Jun 25 18:43:12.337668 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 25 18:43:12.348702 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 18:43:12.360822 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:43:12.371252 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jun 25 18:43:12.374337 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:43:12.376654 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 25 18:43:12.391296 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jun 25 18:43:12.417723 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 18:43:12.425764 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 18:43:12.464178 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:43:12.469860 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 25 18:43:12.481339 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 25 18:43:12.483032 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 18:43:12.485800 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:43:12.486914 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 18:43:12.493780 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 25 18:43:12.500998 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 18:43:12.516860 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jun 25 18:43:12.522214 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jun 25 18:43:12.522321 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 25 18:43:12.522333 kernel: GPT:9289727 != 19775487
Jun 25 18:43:12.522342 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 25 18:43:12.522351 kernel: GPT:9289727 != 19775487
Jun 25 18:43:12.522360 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 25 18:43:12.522371 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 18:43:12.518865 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 18:43:12.519019 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:43:12.520091 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:43:12.520918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:43:12.521109 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:43:12.528380 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:43:12.540867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:43:12.545688 kernel: BTRFS: device fsid c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (515)
Jun 25 18:43:12.545725 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (514)
Jun 25 18:43:12.552039 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:43:12.556888 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jun 25 18:43:12.561375 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jun 25 18:43:12.567861 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jun 25 18:43:12.568705 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jun 25 18:43:12.573879 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 25 18:43:12.585859 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 25 18:43:12.587468 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:43:12.592087 disk-uuid[549]: Primary Header is updated.
Jun 25 18:43:12.592087 disk-uuid[549]: Secondary Entries is updated.
Jun 25 18:43:12.592087 disk-uuid[549]: Secondary Header is updated.
Jun 25 18:43:12.595664 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 18:43:12.607422 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:43:12.610659 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 18:43:13.608663 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 18:43:13.609065 disk-uuid[550]: The operation has completed successfully.
Jun 25 18:43:13.631929 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 25 18:43:13.632025 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 25 18:43:13.647821 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 25 18:43:13.652623 sh[573]: Success
Jun 25 18:43:13.671675 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jun 25 18:43:13.708172 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 25 18:43:13.709727 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 25 18:43:13.710456 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 25 18:43:13.721679 kernel: BTRFS info (device dm-0): first mount of filesystem c80091a6-4bf3-4ad3-8e1c-e6eb918765f9
Jun 25 18:43:13.721723 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jun 25 18:43:13.721736 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 25 18:43:13.721746 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 25 18:43:13.722649 kernel: BTRFS info (device dm-0): using free space tree
Jun 25 18:43:13.725451 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 25 18:43:13.726758 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 25 18:43:13.736791 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 25 18:43:13.738422 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 25 18:43:13.744850 kernel: BTRFS info (device vda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704
Jun 25 18:43:13.744887 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jun 25 18:43:13.744898 kernel: BTRFS info (device vda6): using free space tree
Jun 25 18:43:13.747691 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 25 18:43:13.753964 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jun 25 18:43:13.756656 kernel: BTRFS info (device vda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704
Jun 25 18:43:13.760735 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 25 18:43:13.769832 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 25 18:43:13.832121 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 25 18:43:13.841785 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 18:43:13.866052 ignition[664]: Ignition 2.19.0
Jun 25 18:43:13.866061 ignition[664]: Stage: fetch-offline
Jun 25 18:43:13.866094 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:43:13.866101 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:43:13.868249 systemd-networkd[764]: lo: Link UP
Jun 25 18:43:13.866185 ignition[664]: parsed url from cmdline: ""
Jun 25 18:43:13.868253 systemd-networkd[764]: lo: Gained carrier
Jun 25 18:43:13.866188 ignition[664]: no config URL provided
Jun 25 18:43:13.869232 systemd-networkd[764]: Enumeration completed
Jun 25 18:43:13.866192 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Jun 25 18:43:13.869657 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 25 18:43:13.866200 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Jun 25 18:43:13.869819 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:43:13.866219 ignition[664]: op(1): [started] loading QEMU firmware config module
Jun 25 18:43:13.869823 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 18:43:13.866223 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jun 25 18:43:13.870575 systemd[1]: Reached target network.target - Network.
Jun 25 18:43:13.874666 ignition[664]: op(1): [finished] loading QEMU firmware config module
Jun 25 18:43:13.870750 systemd-networkd[764]: eth0: Link UP
Jun 25 18:43:13.870754 systemd-networkd[764]: eth0: Gained carrier
Jun 25 18:43:13.870761 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:43:13.895686 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 25 18:43:13.921087 ignition[664]: parsing config with SHA512: f8583f757d22465425d03237116e040281cdcf27a75f64ea6beb4de36144c2f56be7280db85be77875add6f2354393c9c13073bb70018d9c4616302bd266b995
Jun 25 18:43:13.924955 unknown[664]: fetched base config from "system"
Jun 25 18:43:13.924969 unknown[664]: fetched user config from "qemu"
Jun 25 18:43:13.925331 ignition[664]: fetch-offline: fetch-offline passed
Jun 25 18:43:13.927135 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 25 18:43:13.925382 ignition[664]: Ignition finished successfully
Jun 25 18:43:13.928143 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jun 25 18:43:13.934858 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 25 18:43:13.945558 ignition[771]: Ignition 2.19.0
Jun 25 18:43:13.946324 ignition[771]: Stage: kargs
Jun 25 18:43:13.946547 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:43:13.946559 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:43:13.947844 ignition[771]: kargs: kargs passed
Jun 25 18:43:13.947891 ignition[771]: Ignition finished successfully
Jun 25 18:43:13.951299 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 25 18:43:13.959775 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 25 18:43:13.969211 ignition[780]: Ignition 2.19.0
Jun 25 18:43:13.969221 ignition[780]: Stage: disks
Jun 25 18:43:13.969370 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:43:13.972163 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 25 18:43:13.969379 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:43:13.973472 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 25 18:43:13.970160 ignition[780]: disks: disks passed
Jun 25 18:43:13.974709 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 25 18:43:13.970202 ignition[780]: Ignition finished successfully
Jun 25 18:43:13.976372 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 25 18:43:13.977888 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 18:43:13.979113 systemd[1]: Reached target basic.target - Basic System.
Jun 25 18:43:13.987773 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 25 18:43:13.998693 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jun 25 18:43:14.001919 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 25 18:43:14.003992 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 25 18:43:14.048650 kernel: EXT4-fs (vda9): mounted filesystem 91548e21-ce72-437e-94b9-d3fed380163a r/w with ordered data mode. Quota mode: none.
Jun 25 18:43:14.049004 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 25 18:43:14.050166 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 25 18:43:14.064725 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 25 18:43:14.066331 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 25 18:43:14.067588 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 25 18:43:14.067627 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 25 18:43:14.072695 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
Jun 25 18:43:14.067660 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 25 18:43:14.074191 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 25 18:43:14.077786 kernel: BTRFS info (device vda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704
Jun 25 18:43:14.077804 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jun 25 18:43:14.077814 kernel: BTRFS info (device vda6): using free space tree
Jun 25 18:43:14.077824 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 25 18:43:14.078575 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 25 18:43:14.093772 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 25 18:43:14.133622 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jun 25 18:43:14.136729 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jun 25 18:43:14.140011 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jun 25 18:43:14.143338 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 25 18:43:14.211264 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 25 18:43:14.219744 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 25 18:43:14.221131 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 25 18:43:14.226693 kernel: BTRFS info (device vda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704
Jun 25 18:43:14.242510 ignition[912]: INFO : Ignition 2.19.0
Jun 25 18:43:14.242510 ignition[912]: INFO : Stage: mount
Jun 25 18:43:14.243907 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:43:14.243907 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:43:14.243907 ignition[912]: INFO : mount: mount passed
Jun 25 18:43:14.243907 ignition[912]: INFO : Ignition finished successfully
Jun 25 18:43:14.244438 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 25 18:43:14.245600 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 25 18:43:14.256754 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 25 18:43:14.720334 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 25 18:43:14.738804 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 25 18:43:14.743654 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
Jun 25 18:43:14.745159 kernel: BTRFS info (device vda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704
Jun 25 18:43:14.745183 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jun 25 18:43:14.745193 kernel: BTRFS info (device vda6): using free space tree
Jun 25 18:43:14.747652 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 25 18:43:14.748581 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 25 18:43:14.765969 ignition[945]: INFO : Ignition 2.19.0
Jun 25 18:43:14.765969 ignition[945]: INFO : Stage: files
Jun 25 18:43:14.767163 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:43:14.767163 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:43:14.767163 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Jun 25 18:43:14.769874 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 25 18:43:14.769874 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 25 18:43:14.769874 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 25 18:43:14.769874 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 25 18:43:14.774171 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 25 18:43:14.774171 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jun 25 18:43:14.774171 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jun 25 18:43:14.769923 unknown[945]: wrote ssh authorized keys file for user: core
Jun 25 18:43:14.807450 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 25 18:43:14.847644 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jun 25 18:43:14.849190 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Jun 25 18:43:15.008784 systemd-networkd[764]: eth0: Gained IPv6LL
Jun 25 18:43:15.199705 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jun 25 18:43:15.439674 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jun 25 18:43:15.439674 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jun 25 18:43:15.442751 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 25 18:43:15.442751 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 25 18:43:15.442751 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jun 25 18:43:15.442751 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jun 25 18:43:15.442751 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 25 18:43:15.442751 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 25 18:43:15.442751 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jun 25 18:43:15.442751 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jun 25 18:43:15.462700 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jun 25 18:43:15.466226 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jun 25 18:43:15.468751 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jun 25 18:43:15.468751 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jun 25 18:43:15.468751 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jun 25 18:43:15.468751 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 25 18:43:15.468751 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 25 18:43:15.468751 ignition[945]: INFO : files: files passed
Jun 25 18:43:15.468751 ignition[945]: INFO : Ignition finished successfully
Jun 25 18:43:15.469060 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 25 18:43:15.481818 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 25 18:43:15.484088 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 25 18:43:15.485376 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 25 18:43:15.485458 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 25 18:43:15.490266 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jun 25 18:43:15.491693 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:43:15.491693 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:43:15.494515 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:43:15.495268 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 18:43:15.496860 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 25 18:43:15.499249 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 25 18:43:15.520585 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 25 18:43:15.520712 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 25 18:43:15.522448 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 25 18:43:15.524056 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 25 18:43:15.525455 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 25 18:43:15.526143 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 25 18:43:15.540506 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 18:43:15.553804 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 25 18:43:15.562221 systemd[1]: Stopped target network.target - Network.
Jun 25 18:43:15.563016 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:43:15.564425 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:43:15.565938 systemd[1]: Stopped target timers.target - Timer Units.
Jun 25 18:43:15.567329 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 25 18:43:15.567447 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 18:43:15.569254 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 25 18:43:15.570740 systemd[1]: Stopped target basic.target - Basic System.
Jun 25 18:43:15.571986 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 25 18:43:15.573351 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 25 18:43:15.575083 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 25 18:43:15.576622 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 25 18:43:15.578194 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 18:43:15.579549 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 25 18:43:15.581344 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 25 18:43:15.582824 systemd[1]: Stopped target swap.target - Swaps.
Jun 25 18:43:15.584109 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 25 18:43:15.584222 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 18:43:15.586138 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:43:15.587545 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:43:15.588908 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 25 18:43:15.589712 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:43:15.591132 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 25 18:43:15.591235 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 25 18:43:15.593716 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 25 18:43:15.593849 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 25 18:43:15.595273 systemd[1]: Stopped target paths.target - Path Units.
Jun 25 18:43:15.596632 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 25 18:43:15.597751 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:43:15.599073 systemd[1]: Stopped target slices.target - Slice Units.
Jun 25 18:43:15.600375 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 25 18:43:15.602073 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 25 18:43:15.602160 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:43:15.603391 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 25 18:43:15.603478 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:43:15.604831 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 25 18:43:15.604939 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 18:43:15.606382 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 25 18:43:15.606475 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 25 18:43:15.612790 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 25 18:43:15.613689 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 25 18:43:15.613811 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:43:15.616366 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 25 18:43:15.617581 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 25 18:43:15.619036 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 25 18:43:15.620509 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 25 18:43:15.620704 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:43:15.623347 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 25 18:43:15.626047 ignition[999]: INFO : Ignition 2.19.0
Jun 25 18:43:15.626047 ignition[999]: INFO : Stage: umount
Jun 25 18:43:15.626047 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:43:15.626047 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:43:15.623458 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 18:43:15.630244 ignition[999]: INFO : umount: umount passed
Jun 25 18:43:15.630244 ignition[999]: INFO : Ignition finished successfully
Jun 25 18:43:15.629332 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 25 18:43:15.629418 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 25 18:43:15.629731 systemd-networkd[764]: eth0: DHCPv6 lease lost
Jun 25 18:43:15.632088 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 25 18:43:15.632669 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 25 18:43:15.633857 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 25 18:43:15.636294 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 25 18:43:15.636407 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 25 18:43:15.640217 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 25 18:43:15.640311 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 25 18:43:15.643115 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 25 18:43:15.643160 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:43:15.644622 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 25 18:43:15.644690 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 25 18:43:15.646065 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 25 18:43:15.646109 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 25 18:43:15.647405 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 25 18:43:15.647439 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 25 18:43:15.648941 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 25 18:43:15.648984 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 25 18:43:15.658735 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 25 18:43:15.659359 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 25 18:43:15.659407 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 25 18:43:15.660760 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 25 18:43:15.660806 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:43:15.662086 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 25 18:43:15.662125 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:43:15.663763 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 25 18:43:15.663807 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:43:15.665323 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:43:15.680748 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 25 18:43:15.681717 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 25 18:43:15.683646 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 25 18:43:15.683831 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:43:15.686150 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 25 18:43:15.686227 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:43:15.687227 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 25 18:43:15.687259 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:43:15.688535 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 25 18:43:15.688576 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 18:43:15.690905 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 25 18:43:15.690943 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 25 18:43:15.693170 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 18:43:15.693218 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:43:15.707841 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 25 18:43:15.708596 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 25 18:43:15.708655 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:43:15.710335 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:43:15.710370 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:43:15.712247 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 25 18:43:15.712332 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 25 18:43:15.713697 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 25 18:43:15.713771 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 25 18:43:15.715952 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 25 18:43:15.716733 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 25 18:43:15.716796 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 25 18:43:15.719083 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 25 18:43:15.729751 systemd[1]: Switching root.
Jun 25 18:43:15.761551 systemd-journald[238]: Journal stopped
Jun 25 18:43:16.450199 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jun 25 18:43:16.450248 kernel: SELinux: policy capability network_peer_controls=1
Jun 25 18:43:16.450264 kernel: SELinux: policy capability open_perms=1
Jun 25 18:43:16.450274 kernel: SELinux: policy capability extended_socket_class=1
Jun 25 18:43:16.450284 kernel: SELinux: policy capability always_check_network=0
Jun 25 18:43:16.450296 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 25 18:43:16.450306 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 25 18:43:16.450317 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 25 18:43:16.450326 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 25 18:43:16.450336 systemd[1]: Successfully loaded SELinux policy in 31.360ms.
Jun 25 18:43:16.450353 kernel: audit: type=1403 audit(1719340995.900:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 25 18:43:16.450366 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.576ms.
Jun 25 18:43:16.450378 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 18:43:16.450390 systemd[1]: Detected virtualization kvm.
Jun 25 18:43:16.450402 systemd[1]: Detected architecture arm64.
Jun 25 18:43:16.450412 systemd[1]: Detected first boot.
Jun 25 18:43:16.450423 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 18:43:16.450434 zram_generator::config[1043]: No configuration found.
Jun 25 18:43:16.450445 systemd[1]: Populated /etc with preset unit settings.
Jun 25 18:43:16.450456 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 25 18:43:16.450467 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 25 18:43:16.450477 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 25 18:43:16.450490 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 25 18:43:16.450502 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 25 18:43:16.450514 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 25 18:43:16.450525 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 25 18:43:16.450536 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 25 18:43:16.450547 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 25 18:43:16.450558 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 25 18:43:16.450568 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 25 18:43:16.450579 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:43:16.450591 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:43:16.450602 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 25 18:43:16.450613 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 25 18:43:16.450626 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 25 18:43:16.450664 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 18:43:16.450678 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jun 25 18:43:16.450689 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:43:16.450700 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 25 18:43:16.450710 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 25 18:43:16.450723 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 25 18:43:16.450734 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 25 18:43:16.450745 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:43:16.450756 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 18:43:16.450773 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 18:43:16.450787 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:43:16.450797 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 25 18:43:16.450809 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 25 18:43:16.450821 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:43:16.450832 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:43:16.450843 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:43:16.450854 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 25 18:43:16.450864 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 25 18:43:16.450875 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 25 18:43:16.450886 systemd[1]: Mounting media.mount - External Media Directory...
Jun 25 18:43:16.450897 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 25 18:43:16.450907 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 25 18:43:16.450920 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 25 18:43:16.450931 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 25 18:43:16.450942 systemd[1]: Reached target machines.target - Containers.
Jun 25 18:43:16.450953 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 25 18:43:16.450963 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:43:16.450974 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 18:43:16.450985 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 25 18:43:16.450996 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:43:16.451008 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 18:43:16.451020 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:43:16.451031 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 25 18:43:16.451041 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:43:16.451052 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 25 18:43:16.451063 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 25 18:43:16.451074 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 25 18:43:16.451084 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 25 18:43:16.451096 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 25 18:43:16.451107 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 18:43:16.451118 kernel: fuse: init (API version 7.39)
Jun 25 18:43:16.451128 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 18:43:16.451139 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 25 18:43:16.451149 kernel: loop: module loaded
Jun 25 18:43:16.451159 kernel: ACPI: bus type drm_connector registered
Jun 25 18:43:16.451169 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 25 18:43:16.451180 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 18:43:16.451190 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 25 18:43:16.451202 systemd[1]: Stopped verity-setup.service.
Jun 25 18:43:16.451213 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 25 18:43:16.451224 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 25 18:43:16.451235 systemd[1]: Mounted media.mount - External Media Directory.
Jun 25 18:43:16.451248 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 25 18:43:16.451258 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 25 18:43:16.451269 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 25 18:43:16.451296 systemd-journald[1109]: Collecting audit messages is disabled.
Jun 25 18:43:16.451318 systemd-journald[1109]: Journal started
Jun 25 18:43:16.451339 systemd-journald[1109]: Runtime Journal (/run/log/journal/dbf85b52f11048afb6c53e3188982ae1) is 5.9M, max 47.3M, 41.4M free.
Jun 25 18:43:16.277687 systemd[1]: Queued start job for default target multi-user.target.
Jun 25 18:43:16.293007 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 25 18:43:16.293334 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 25 18:43:16.453183 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 25 18:43:16.454717 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 18:43:16.455361 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:43:16.456559 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 25 18:43:16.456829 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 25 18:43:16.458078 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:43:16.458211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:43:16.459439 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 18:43:16.459555 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 18:43:16.460587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:43:16.460836 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:43:16.462048 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 25 18:43:16.462190 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 25 18:43:16.463385 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:43:16.463515 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:43:16.464983 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:43:16.466170 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 25 18:43:16.467501 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 25 18:43:16.479050 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 25 18:43:16.500784 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 25 18:43:16.502745 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 25 18:43:16.503811 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 25 18:43:16.503850 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 25 18:43:16.505576 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jun 25 18:43:16.507632 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 25 18:43:16.509588 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 25 18:43:16.510484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:43:16.511790 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 25 18:43:16.513418 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 25 18:43:16.514344 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 18:43:16.517808 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 25 18:43:16.518820 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 18:43:16.521887 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 18:43:16.523975 systemd-journald[1109]: Time spent on flushing to /var/log/journal/dbf85b52f11048afb6c53e3188982ae1 is 22.152ms for 852 entries.
Jun 25 18:43:16.523975 systemd-journald[1109]: System Journal (/var/log/journal/dbf85b52f11048afb6c53e3188982ae1) is 8.0M, max 195.6M, 187.6M free.
Jun 25 18:43:16.561986 systemd-journald[1109]: Received client request to flush runtime journal.
Jun 25 18:43:16.562033 kernel: loop0: detected capacity change from 0 to 59688
Jun 25 18:43:16.562052 kernel: block loop0: the capability attribute has been deprecated.
Jun 25 18:43:16.526926 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 25 18:43:16.533940 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 25 18:43:16.537452 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:43:16.538592 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 25 18:43:16.539717 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 25 18:43:16.541583 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 25 18:43:16.543905 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 25 18:43:16.552609 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:43:16.557486 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 25 18:43:16.568838 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jun 25 18:43:16.571311 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jun 25 18:43:16.573596 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 25 18:43:16.580656 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 25 18:43:16.589509 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jun 25 18:43:16.598585 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 25 18:43:16.600297 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jun 25 18:43:16.603923 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 25 18:43:16.610812 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 18:43:16.614107 kernel: loop1: detected capacity change from 0 to 113712
Jun 25 18:43:16.629087 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jun 25 18:43:16.629106 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jun 25 18:43:16.633740 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:43:16.655691 kernel: loop2: detected capacity change from 0 to 193208
Jun 25 18:43:16.690666 kernel: loop3: detected capacity change from 0 to 59688
Jun 25 18:43:16.697664 kernel: loop4: detected capacity change from 0 to 113712
Jun 25 18:43:16.703661 kernel: loop5: detected capacity change from 0 to 193208
Jun 25 18:43:16.714927 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jun 25 18:43:16.715328 (sd-merge)[1181]: Merged extensions into '/usr'.
Jun 25 18:43:16.719739 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 25 18:43:16.719869 systemd[1]: Reloading...
Jun 25 18:43:16.779834 zram_generator::config[1209]: No configuration found.
Jun 25 18:43:16.866455 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:43:16.895863 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 25 18:43:16.907057 systemd[1]: Reloading finished in 186 ms.
Jun 25 18:43:16.941733 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 25 18:43:16.943088 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 25 18:43:16.953841 systemd[1]: Starting ensure-sysext.service...
Jun 25 18:43:16.956063 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 18:43:16.967029 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)...
Jun 25 18:43:16.967043 systemd[1]: Reloading...
Jun 25 18:43:16.982315 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 25 18:43:16.982573 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 25 18:43:16.983240 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 25 18:43:16.983451 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Jun 25 18:43:16.983500 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Jun 25 18:43:16.985809 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Jun 25 18:43:16.985822 systemd-tmpfiles[1241]: Skipping /boot
Jun 25 18:43:16.992268 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Jun 25 18:43:16.992286 systemd-tmpfiles[1241]: Skipping /boot
Jun 25 18:43:17.023082 zram_generator::config[1266]: No configuration found.
Jun 25 18:43:17.104479 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:43:17.141541 systemd[1]: Reloading finished in 174 ms.
Jun 25 18:43:17.154272 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 25 18:43:17.172120 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:43:17.179611 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 25 18:43:17.181975 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 25 18:43:17.184182 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 25 18:43:17.186935 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 18:43:17.204895 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:43:17.209915 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 25 18:43:17.211622 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 25 18:43:17.215371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:43:17.217372 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:43:17.222843 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:43:17.226989 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:43:17.228009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:43:17.229400 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 25 18:43:17.236013 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 25 18:43:17.237775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:43:17.237940 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:43:17.239096 systemd-udevd[1313]: Using default interface naming scheme 'v255'.
Jun 25 18:43:17.239205 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:43:17.239359 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:43:17.241073 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:43:17.241195 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:43:17.248845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:43:17.257000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:43:17.259942 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:43:17.263095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:43:17.264934 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:43:17.265617 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:43:17.267355 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 25 18:43:17.269211 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 25 18:43:17.271081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:43:17.271209 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:43:17.275979 augenrules[1336]: No rules
Jun 25 18:43:17.276948 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 25 18:43:17.278182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:43:17.278299 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:43:17.281474 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 25 18:43:17.296226 systemd[1]: Finished ensure-sysext.service.
Jun 25 18:43:17.297601 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 25 18:43:17.299552 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:43:17.299973 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:43:17.308672 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1340)
Jun 25 18:43:17.318177 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jun 25 18:43:17.321924 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:43:17.326944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:43:17.330836 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 18:43:17.333847 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:43:17.335984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:43:17.339049 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 18:43:17.343820 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 25 18:43:17.345816 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 25 18:43:17.346258 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:43:17.346409 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:43:17.347895 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 18:43:17.348381 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 18:43:17.349871 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:43:17.350004 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:43:17.356662 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1361)
Jun 25 18:43:17.363631 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 25 18:43:17.365985 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 25 18:43:17.367385 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 18:43:17.367454 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 18:43:17.373136 systemd-resolved[1306]: Positive Trust Anchors:
Jun 25 18:43:17.375388 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 18:43:17.375425 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 25 18:43:17.384840 systemd-resolved[1306]: Defaulting to hostname 'linux'.
Jun 25 18:43:17.389767 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 25 18:43:17.393815 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 18:43:17.394938 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:43:17.423562 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 25 18:43:17.425727 systemd[1]: Reached target time-set.target - System Time Set.
Jun 25 18:43:17.426642 systemd-networkd[1379]: lo: Link UP
Jun 25 18:43:17.426647 systemd-networkd[1379]: lo: Gained carrier
Jun 25 18:43:17.427335 systemd-networkd[1379]: Enumeration completed
Jun 25 18:43:17.427420 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 25 18:43:17.427997 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:43:17.428004 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 18:43:17.428718 systemd-networkd[1379]: eth0: Link UP
Jun 25 18:43:17.428815 systemd-networkd[1379]: eth0: Gained carrier
Jun 25 18:43:17.428866 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:43:17.429985 systemd[1]: Reached target network.target - Network.
Jun 25 18:43:17.442888 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 25 18:43:17.445474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:43:17.449695 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 25 18:43:17.450327 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection.
Jun 25 18:43:17.451307 systemd-timesyncd[1381]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jun 25 18:43:17.451359 systemd-timesyncd[1381]: Initial clock synchronization to Tue 2024-06-25 18:43:17.211871 UTC.
Jun 25 18:43:17.453853 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jun 25 18:43:17.456265 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jun 25 18:43:17.478792 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 18:43:17.496456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:43:17.509062 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jun 25 18:43:17.510429 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:43:17.511392 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 18:43:17.512511 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 25 18:43:17.513734 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 25 18:43:17.515037 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 25 18:43:17.516151 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 25 18:43:17.517349 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 25 18:43:17.518487 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 25 18:43:17.518525 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:43:17.519390 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:43:17.521217 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 25 18:43:17.523421 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 25 18:43:17.534596 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 25 18:43:17.536546 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jun 25 18:43:17.537884 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 25 18:43:17.538770 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 18:43:17.539487 systemd[1]: Reached target basic.target - Basic System.
Jun 25 18:43:17.540286 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 25 18:43:17.540315 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 25 18:43:17.541202 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 25 18:43:17.543160 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 25 18:43:17.544080 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 18:43:17.546185 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 25 18:43:17.549888 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 25 18:43:17.550827 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 25 18:43:17.551781 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 25 18:43:17.553834 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 25 18:43:17.558058 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 25 18:43:17.560869 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 25 18:43:17.565486 jq[1409]: false
Jun 25 18:43:17.565695 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 25 18:43:17.576101 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 25 18:43:17.576545 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 25 18:43:17.577245 systemd[1]: Starting update-engine.service - Update Engine...
Jun 25 18:43:17.579219 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 25 18:43:17.581065 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jun 25 18:43:17.582546 extend-filesystems[1410]: Found loop3
Jun 25 18:43:17.582546 extend-filesystems[1410]: Found loop4
Jun 25 18:43:17.582546 extend-filesystems[1410]: Found loop5
Jun 25 18:43:17.582546 extend-filesystems[1410]: Found vda
Jun 25 18:43:17.582546 extend-filesystems[1410]: Found vda1
Jun 25 18:43:17.582546 extend-filesystems[1410]: Found vda2
Jun 25 18:43:17.582546 extend-filesystems[1410]: Found vda3
Jun 25 18:43:17.582546 extend-filesystems[1410]: Found usr
Jun 25 18:43:17.582546 extend-filesystems[1410]: Found vda4
Jun 25 18:43:17.582546 extend-filesystems[1410]: Found vda6
Jun 25 18:43:17.582546 extend-filesystems[1410]: Found vda7
Jun 25 18:43:17.582546 extend-filesystems[1410]: Found vda9
Jun 25 18:43:17.582546 extend-filesystems[1410]: Checking size of /dev/vda9
Jun 25 18:43:17.608594 extend-filesystems[1410]: Resized partition /dev/vda9
Jun 25 18:43:17.593885 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 25 18:43:17.586240 dbus-daemon[1408]: [system] SELinux support is enabled
Jun 25 18:43:17.597435 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 25 18:43:17.612605 jq[1425]: true
Jun 25 18:43:17.597595 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 25 18:43:17.597871 systemd[1]: motdgen.service: Deactivated successfully.
Jun 25 18:43:17.598008 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 25 18:43:17.601009 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 25 18:43:17.601338 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 25 18:43:17.618143 extend-filesystems[1433]: resize2fs 1.47.0 (5-Feb-2023)
Jun 25 18:43:17.625093 jq[1434]: true
Jun 25 18:43:17.619551 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 25 18:43:17.619596 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 25 18:43:17.626217 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 25 18:43:17.626260 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 25 18:43:17.631872 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jun 25 18:43:17.630014 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 25 18:43:17.635120 systemd-logind[1417]: Watching system buttons on /dev/input/event0 (Power Button)
Jun 25 18:43:17.635565 systemd-logind[1417]: New seat seat0.
Jun 25 18:43:17.636422 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 25 18:43:17.652710 update_engine[1424]: I0625 18:43:17.651496  1424 main.cc:92] Flatcar Update Engine starting
Jun 25 18:43:17.653998 tar[1430]: linux-arm64/helm
Jun 25 18:43:17.663675 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1357)
Jun 25 18:43:17.666161 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jun 25 18:43:17.666697 systemd[1]: Started update-engine.service - Update Engine.
Jun 25 18:43:17.682563 update_engine[1424]: I0625 18:43:17.666714  1424 update_check_scheduler.cc:74] Next update check in 10m53s
Jun 25 18:43:17.676964 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 25 18:43:17.682929 extend-filesystems[1433]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 25 18:43:17.682929 extend-filesystems[1433]: old_desc_blocks = 1, new_desc_blocks = 1
Jun 25 18:43:17.682929 extend-filesystems[1433]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jun 25 18:43:17.687299 extend-filesystems[1410]: Resized filesystem in /dev/vda9
Jun 25 18:43:17.684556 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 25 18:43:17.684762 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 25 18:43:17.697873 bash[1461]: Updated "/home/core/.ssh/authorized_keys"
Jun 25 18:43:17.699912 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 25 18:43:17.701992 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 25 18:43:17.756470 locksmithd[1462]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 25 18:43:17.865645 containerd[1439]: time="2024-06-25T18:43:17.865543560Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Jun 25 18:43:17.893539 containerd[1439]: time="2024-06-25T18:43:17.893488760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 25 18:43:17.893539 containerd[1439]: time="2024-06-25T18:43:17.893541880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:17.894963 containerd[1439]: time="2024-06-25T18:43:17.894922920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:43:17.894963 containerd[1439]: time="2024-06-25T18:43:17.894963200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:17.895188 containerd[1439]: time="2024-06-25T18:43:17.895164240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:43:17.895188 containerd[1439]: time="2024-06-25T18:43:17.895186320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 25 18:43:17.895278 containerd[1439]: time="2024-06-25T18:43:17.895261120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:17.895332 containerd[1439]: time="2024-06-25T18:43:17.895315520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:43:17.895332 containerd[1439]: time="2024-06-25T18:43:17.895330440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:17.895400 containerd[1439]: time="2024-06-25T18:43:17.895386160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:17.895602 containerd[1439]: time="2024-06-25T18:43:17.895582680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:17.895634 containerd[1439]: time="2024-06-25T18:43:17.895605600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 25 18:43:17.895634 containerd[1439]: time="2024-06-25T18:43:17.895615400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:17.895756 containerd[1439]: time="2024-06-25T18:43:17.895729120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:43:17.895756 containerd[1439]: time="2024-06-25T18:43:17.895746720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 25 18:43:17.895829 containerd[1439]: time="2024-06-25T18:43:17.895810720Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 25 18:43:17.895829 containerd[1439]: time="2024-06-25T18:43:17.895826520Z" level=info msg="metadata content store policy set" policy=shared
Jun 25 18:43:17.900021 containerd[1439]: time="2024-06-25T18:43:17.899992840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 25 18:43:17.900079 containerd[1439]: time="2024-06-25T18:43:17.900030920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 25 18:43:17.900079 containerd[1439]: time="2024-06-25T18:43:17.900043520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 25 18:43:17.900079 containerd[1439]: time="2024-06-25T18:43:17.900071040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 25 18:43:17.900149 containerd[1439]: time="2024-06-25T18:43:17.900084480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 25 18:43:17.900149 containerd[1439]: time="2024-06-25T18:43:17.900094080Z" level=info msg="NRI interface is disabled by configuration."
Jun 25 18:43:17.900149 containerd[1439]: time="2024-06-25T18:43:17.900105240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 25 18:43:17.900248 containerd[1439]: time="2024-06-25T18:43:17.900226120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 25 18:43:17.900281 containerd[1439]: time="2024-06-25T18:43:17.900247240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 25 18:43:17.900281 containerd[1439]: time="2024-06-25T18:43:17.900277400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 25 18:43:17.900316 containerd[1439]: time="2024-06-25T18:43:17.900296080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 25 18:43:17.900316 containerd[1439]: time="2024-06-25T18:43:17.900309440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 25 18:43:17.900355 containerd[1439]: time="2024-06-25T18:43:17.900325080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 25 18:43:17.900355 containerd[1439]: time="2024-06-25T18:43:17.900338360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 25 18:43:17.900405 containerd[1439]: time="2024-06-25T18:43:17.900352800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 25 18:43:17.900405 containerd[1439]: time="2024-06-25T18:43:17.900367280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 25 18:43:17.900405 containerd[1439]: time="2024-06-25T18:43:17.900379640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 25 18:43:17.900405 containerd[1439]: time="2024-06-25T18:43:17.900390560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 25 18:43:17.900469 containerd[1439]: time="2024-06-25T18:43:17.900410360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 25 18:43:17.900775 containerd[1439]: time="2024-06-25T18:43:17.900500200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 25 18:43:17.900893 containerd[1439]: time="2024-06-25T18:43:17.900871480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 25 18:43:17.900919 containerd[1439]: time="2024-06-25T18:43:17.900912240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.900944 containerd[1439]: time="2024-06-25T18:43:17.900926280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 25 18:43:17.900963 containerd[1439]: time="2024-06-25T18:43:17.900948920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 25 18:43:17.901130 containerd[1439]: time="2024-06-25T18:43:17.901064800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.901130 containerd[1439]: time="2024-06-25T18:43:17.901081720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.901130 containerd[1439]: time="2024-06-25T18:43:17.901094760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.901186 containerd[1439]: time="2024-06-25T18:43:17.901105480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.901186 containerd[1439]: time="2024-06-25T18:43:17.901175560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.901224 containerd[1439]: time="2024-06-25T18:43:17.901188040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.901224 containerd[1439]: time="2024-06-25T18:43:17.901199480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.901224 containerd[1439]: time="2024-06-25T18:43:17.901211600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.901279 containerd[1439]: time="2024-06-25T18:43:17.901223800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 25 18:43:17.903449 containerd[1439]: time="2024-06-25T18:43:17.901355560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.903449 containerd[1439]: time="2024-06-25T18:43:17.901391040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.903449 containerd[1439]: time="2024-06-25T18:43:17.901404040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.903449 containerd[1439]: time="2024-06-25T18:43:17.901416040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.903449 containerd[1439]: time="2024-06-25T18:43:17.901428040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.903449 containerd[1439]: time="2024-06-25T18:43:17.901441760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.903449 containerd[1439]: time="2024-06-25T18:43:17.901455760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.903449 containerd[1439]: time="2024-06-25T18:43:17.901466640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.901858800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.901917320Z" level=info msg="Connect containerd service"
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.901941080Z" level=info msg="using legacy CRI server"
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.901947640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.902086440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.902850120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.902891000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.902909000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.902918920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.902930760Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.903310920Z" level=info msg="Start subscribing containerd event"
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.903437200Z" level=info msg="Start recovering state"
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.903370160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.903575760Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.903503880Z" level=info msg="Start event monitor"
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.903609600Z" level=info msg="Start snapshots syncer"
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.903619120Z" level=info msg="Start cni network conf syncer for default"
Jun 25 18:43:17.903654 containerd[1439]: time="2024-06-25T18:43:17.903627000Z" level=info msg="Start streaming server"
Jun 25 18:43:17.905621 containerd[1439]: time="2024-06-25T18:43:17.903762040Z" level=info msg="containerd successfully booted in 0.039162s"
Jun 25 18:43:17.903845 systemd[1]: Started containerd.service - containerd container runtime.
Jun 25 18:43:18.025441 tar[1430]: linux-arm64/LICENSE
Jun 25 18:43:18.025441 tar[1430]: linux-arm64/README.md
Jun 25 18:43:18.037667 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 25 18:43:19.040829 systemd-networkd[1379]: eth0: Gained IPv6LL
Jun 25 18:43:19.046388 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 25 18:43:19.048048 systemd[1]: Reached target network-online.target - Network is Online.
Jun 25 18:43:19.056931 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jun 25 18:43:19.059129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:43:19.060867 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 25 18:43:19.078147 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jun 25 18:43:19.079689 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jun 25 18:43:19.081774 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 25 18:43:19.085465 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 25 18:43:19.111831 sshd_keygen[1428]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 25 18:43:19.129426 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 25 18:43:19.139924 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 25 18:43:19.144702 systemd[1]: issuegen.service: Deactivated successfully.
Jun 25 18:43:19.144872 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 25 18:43:19.147787 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 25 18:43:19.158812 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 25 18:43:19.161489 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 25 18:43:19.163298 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jun 25 18:43:19.164436 systemd[1]: Reached target getty.target - Login Prompts.
Jun 25 18:43:19.533238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:43:19.534550 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 25 18:43:19.536945 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 18:43:19.537157 systemd[1]: Startup finished in 532ms (kernel) + 4.191s (initrd) + 3.675s (userspace) = 8.399s.
Jun 25 18:43:20.027924 kubelet[1521]: E0625 18:43:20.027797    1521 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 18:43:20.030909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 18:43:20.031064 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 18:43:24.627861 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 25 18:43:24.629174 systemd[1]: Started sshd@0-10.0.0.123:22-10.0.0.1:49572.service - OpenSSH per-connection server daemon (10.0.0.1:49572).
Jun 25 18:43:24.708685 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 49572 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A
Jun 25 18:43:24.710414 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:43:24.718181 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 25 18:43:24.732922 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 25 18:43:24.737956 systemd-logind[1417]: New session 1 of user core.
Jun 25 18:43:24.747788 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 25 18:43:24.760894 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 25 18:43:24.763706 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:43:24.853056 systemd[1539]: Queued start job for default target default.target.
Jun 25 18:43:24.863693 systemd[1539]: Created slice app.slice - User Application Slice.
Jun 25 18:43:24.863727 systemd[1539]: Reached target paths.target - Paths.
Jun 25 18:43:24.863739 systemd[1539]: Reached target timers.target - Timers.
Jun 25 18:43:24.865151 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 25 18:43:24.876274 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 25 18:43:24.876391 systemd[1539]: Reached target sockets.target - Sockets.
Jun 25 18:43:24.876408 systemd[1539]: Reached target basic.target - Basic System.
Jun 25 18:43:24.876445 systemd[1539]: Reached target default.target - Main User Target.
Jun 25 18:43:24.876482 systemd[1539]: Startup finished in 106ms.
Jun 25 18:43:24.876692 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 25 18:43:24.878143 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 25 18:43:24.965049 systemd[1]: Started sshd@1-10.0.0.123:22-10.0.0.1:49582.service - OpenSSH per-connection server daemon (10.0.0.1:49582).
Jun 25 18:43:25.002936 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 49582 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A
Jun 25 18:43:25.004184 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:43:25.008454 systemd-logind[1417]: New session 2 of user core.
Jun 25 18:43:25.020856 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 25 18:43:25.072886 sshd[1550]: pam_unix(sshd:session): session closed for user core
Jun 25 18:43:25.087090 systemd[1]: sshd@1-10.0.0.123:22-10.0.0.1:49582.service: Deactivated successfully.
Jun 25 18:43:25.088513 systemd[1]: session-2.scope: Deactivated successfully.
Jun 25 18:43:25.091979 systemd-logind[1417]: Session 2 logged out. Waiting for processes to exit.
Jun 25 18:43:25.093210 systemd[1]: Started sshd@2-10.0.0.123:22-10.0.0.1:49586.service - OpenSSH per-connection server daemon (10.0.0.1:49586).
Jun 25 18:43:25.093986 systemd-logind[1417]: Removed session 2.
Jun 25 18:43:25.131387 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 49586 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A
Jun 25 18:43:25.132492 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:43:25.136340 systemd-logind[1417]: New session 3 of user core.
Jun 25 18:43:25.147772 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 25 18:43:25.194965 sshd[1557]: pam_unix(sshd:session): session closed for user core
Jun 25 18:43:25.206068 systemd[1]: sshd@2-10.0.0.123:22-10.0.0.1:49586.service: Deactivated successfully.
Jun 25 18:43:25.207531 systemd[1]: session-3.scope: Deactivated successfully.
Jun 25 18:43:25.210659 systemd-logind[1417]: Session 3 logged out. Waiting for processes to exit.
Jun 25 18:43:25.211668 systemd[1]: Started sshd@3-10.0.0.123:22-10.0.0.1:49602.service - OpenSSH per-connection server daemon (10.0.0.1:49602).
Jun 25 18:43:25.212313 systemd-logind[1417]: Removed session 3.
Jun 25 18:43:25.249374 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 49602 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A
Jun 25 18:43:25.250492 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:43:25.254462 systemd-logind[1417]: New session 4 of user core.
Jun 25 18:43:25.268355 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 25 18:43:25.319720 sshd[1564]: pam_unix(sshd:session): session closed for user core
Jun 25 18:43:25.335784 systemd[1]: sshd@3-10.0.0.123:22-10.0.0.1:49602.service: Deactivated successfully.
Jun 25 18:43:25.337056 systemd[1]: session-4.scope: Deactivated successfully.
Jun 25 18:43:25.339670 systemd-logind[1417]: Session 4 logged out. Waiting for processes to exit.
Jun 25 18:43:25.340778 systemd[1]: Started sshd@4-10.0.0.123:22-10.0.0.1:49618.service - OpenSSH per-connection server daemon (10.0.0.1:49618).
Jun 25 18:43:25.341462 systemd-logind[1417]: Removed session 4.
Jun 25 18:43:25.377612 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 49618 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A
Jun 25 18:43:25.378778 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:43:25.383084 systemd-logind[1417]: New session 5 of user core.
Jun 25 18:43:25.393788 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 25 18:43:25.449258 sudo[1574]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 25 18:43:25.449505 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:43:25.464280 sudo[1574]: pam_unix(sudo:session): session closed for user root
Jun 25 18:43:25.466081 sshd[1571]: pam_unix(sshd:session): session closed for user core
Jun 25 18:43:25.477952 systemd[1]: sshd@4-10.0.0.123:22-10.0.0.1:49618.service: Deactivated successfully.
Jun 25 18:43:25.480051 systemd[1]: session-5.scope: Deactivated successfully.
Jun 25 18:43:25.481390 systemd-logind[1417]: Session 5 logged out. Waiting for processes to exit.
Jun 25 18:43:25.483373 systemd[1]: Started sshd@5-10.0.0.123:22-10.0.0.1:49630.service - OpenSSH per-connection server daemon (10.0.0.1:49630).
Jun 25 18:43:25.484204 systemd-logind[1417]: Removed session 5.
Jun 25 18:43:25.521075 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 49630 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A
Jun 25 18:43:25.522318 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:43:25.526396 systemd-logind[1417]: New session 6 of user core.
Jun 25 18:43:25.532789 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:43:25.583369 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:43:25.583607 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:43:25.586642 sudo[1583]: pam_unix(sudo:session): session closed for user root Jun 25 18:43:25.590961 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:43:25.591181 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:43:25.606973 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:43:25.608052 auditctl[1586]: No rules Jun 25 18:43:25.608913 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:43:25.609156 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:43:25.610794 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:43:25.632759 augenrules[1604]: No rules Jun 25 18:43:25.634033 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:43:25.635236 sudo[1582]: pam_unix(sudo:session): session closed for user root Jun 25 18:43:25.636825 sshd[1579]: pam_unix(sshd:session): session closed for user core Jun 25 18:43:25.656991 systemd[1]: sshd@5-10.0.0.123:22-10.0.0.1:49630.service: Deactivated successfully. Jun 25 18:43:25.658389 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:43:25.660736 systemd-logind[1417]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:43:25.661825 systemd[1]: Started sshd@6-10.0.0.123:22-10.0.0.1:49634.service - OpenSSH per-connection server daemon (10.0.0.1:49634). Jun 25 18:43:25.662583 systemd-logind[1417]: Removed session 6. 
Jun 25 18:43:25.698965 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 49634 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:43:25.700170 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:25.704011 systemd-logind[1417]: New session 7 of user core. Jun 25 18:43:25.714831 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:43:25.764828 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:43:25.765071 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:43:25.878869 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:43:25.878995 (dockerd)[1625]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:43:26.106259 dockerd[1625]: time="2024-06-25T18:43:26.106133179Z" level=info msg="Starting up" Jun 25 18:43:26.193092 dockerd[1625]: time="2024-06-25T18:43:26.193053330Z" level=info msg="Loading containers: start." Jun 25 18:43:26.282675 kernel: Initializing XFRM netlink socket Jun 25 18:43:26.350182 systemd-networkd[1379]: docker0: Link UP Jun 25 18:43:26.365133 dockerd[1625]: time="2024-06-25T18:43:26.364893093Z" level=info msg="Loading containers: done." Jun 25 18:43:26.420010 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3077083073-merged.mount: Deactivated successfully. 
Jun 25 18:43:26.421998 dockerd[1625]: time="2024-06-25T18:43:26.421937936Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:43:26.422156 dockerd[1625]: time="2024-06-25T18:43:26.422134534Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:43:26.422301 dockerd[1625]: time="2024-06-25T18:43:26.422283863Z" level=info msg="Daemon has completed initialization" Jun 25 18:43:26.448763 dockerd[1625]: time="2024-06-25T18:43:26.448669540Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:43:26.448891 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:43:27.018625 containerd[1439]: time="2024-06-25T18:43:27.018557347Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 18:43:27.629114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1206133906.mount: Deactivated successfully. 
Jun 25 18:43:28.798991 containerd[1439]: time="2024-06-25T18:43:28.798917256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:28.799308 containerd[1439]: time="2024-06-25T18:43:28.799265136Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671540" Jun 25 18:43:28.800576 containerd[1439]: time="2024-06-25T18:43:28.800536728Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:28.803480 containerd[1439]: time="2024-06-25T18:43:28.803440847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:28.804427 containerd[1439]: time="2024-06-25T18:43:28.804388529Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 1.785790817s" Jun 25 18:43:28.804465 containerd[1439]: time="2024-06-25T18:43:28.804424920Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jun 25 18:43:28.824723 containerd[1439]: time="2024-06-25T18:43:28.824689904Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 18:43:30.281304 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jun 25 18:43:30.289839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:30.374293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:30.378911 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:30.424680 containerd[1439]: time="2024-06-25T18:43:30.424060813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:30.426486 containerd[1439]: time="2024-06-25T18:43:30.426453228Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893120" Jun 25 18:43:30.427448 containerd[1439]: time="2024-06-25T18:43:30.427403308Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:30.429857 containerd[1439]: time="2024-06-25T18:43:30.429823515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:30.430965 containerd[1439]: time="2024-06-25T18:43:30.430921776Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 1.606197885s" Jun 25 18:43:30.430965 containerd[1439]: time="2024-06-25T18:43:30.430961137Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns 
image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jun 25 18:43:30.446173 kubelet[1840]: E0625 18:43:30.446065 1840 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:30.449714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:30.449849 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:30.453488 containerd[1439]: time="2024-06-25T18:43:30.453447603Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 18:43:32.804729 containerd[1439]: time="2024-06-25T18:43:32.804681605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:32.805668 containerd[1439]: time="2024-06-25T18:43:32.805528400Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358440" Jun 25 18:43:32.806433 containerd[1439]: time="2024-06-25T18:43:32.806403742Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:32.809663 containerd[1439]: time="2024-06-25T18:43:32.809462324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:32.811954 containerd[1439]: time="2024-06-25T18:43:32.811904444Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id 
\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 2.358417506s" Jun 25 18:43:32.811954 containerd[1439]: time="2024-06-25T18:43:32.811949157Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jun 25 18:43:32.830156 containerd[1439]: time="2024-06-25T18:43:32.830126419Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 18:43:35.190115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2468462576.mount: Deactivated successfully. Jun 25 18:43:35.379046 containerd[1439]: time="2024-06-25T18:43:35.379003006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:35.379462 containerd[1439]: time="2024-06-25T18:43:35.379430241Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772463" Jun 25 18:43:35.380305 containerd[1439]: time="2024-06-25T18:43:35.380258872Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:35.382785 containerd[1439]: time="2024-06-25T18:43:35.382631675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:35.383802 containerd[1439]: time="2024-06-25T18:43:35.383771820Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo 
tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 2.553609273s" Jun 25 18:43:35.383979 containerd[1439]: time="2024-06-25T18:43:35.383885507Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jun 25 18:43:35.402263 containerd[1439]: time="2024-06-25T18:43:35.402237685Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:43:35.851681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1191487349.mount: Deactivated successfully. Jun 25 18:43:35.856613 containerd[1439]: time="2024-06-25T18:43:35.856563253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:35.857100 containerd[1439]: time="2024-06-25T18:43:35.857060910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jun 25 18:43:35.857923 containerd[1439]: time="2024-06-25T18:43:35.857888942Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:35.859916 containerd[1439]: time="2024-06-25T18:43:35.859886828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:35.861475 containerd[1439]: time="2024-06-25T18:43:35.861446153Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 459.17884ms" Jun 25 18:43:35.861512 containerd[1439]: time="2024-06-25T18:43:35.861475781Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 18:43:35.882228 containerd[1439]: time="2024-06-25T18:43:35.882194342Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 18:43:36.473455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1052240148.mount: Deactivated successfully. Jun 25 18:43:38.320594 containerd[1439]: time="2024-06-25T18:43:38.320527084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:38.321464 containerd[1439]: time="2024-06-25T18:43:38.321412047Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jun 25 18:43:38.322100 containerd[1439]: time="2024-06-25T18:43:38.322062378Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:38.325531 containerd[1439]: time="2024-06-25T18:43:38.325490145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:38.327284 containerd[1439]: time="2024-06-25T18:43:38.327244585Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.445014944s" Jun 25 
18:43:38.327325 containerd[1439]: time="2024-06-25T18:43:38.327282466Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jun 25 18:43:38.344599 containerd[1439]: time="2024-06-25T18:43:38.344531155Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 18:43:38.960740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270044462.mount: Deactivated successfully. Jun 25 18:43:39.265771 containerd[1439]: time="2024-06-25T18:43:39.265664114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:39.266807 containerd[1439]: time="2024-06-25T18:43:39.266777333Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464" Jun 25 18:43:39.267525 containerd[1439]: time="2024-06-25T18:43:39.267497825Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:39.269713 containerd[1439]: time="2024-06-25T18:43:39.269680503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:39.270740 containerd[1439]: time="2024-06-25T18:43:39.270697936Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 926.102193ms" Jun 25 18:43:39.270784 containerd[1439]: time="2024-06-25T18:43:39.270739940Z" 
level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jun 25 18:43:40.700167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:43:40.709896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:40.791335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:40.794697 (kubelet)[2031]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:40.831307 kubelet[2031]: E0625 18:43:40.831252 2031 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:40.834221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:40.834364 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:43.252122 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:43.262007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:43.279931 systemd[1]: Reloading requested from client PID 2046 ('systemctl') (unit session-7.scope)... Jun 25 18:43:43.279947 systemd[1]: Reloading... Jun 25 18:43:43.350684 zram_generator::config[2083]: No configuration found. Jun 25 18:43:43.524398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:43:43.577276 systemd[1]: Reloading finished in 297 ms. 
Jun 25 18:43:43.616516 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:43.618852 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:43:43.620790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:43.629935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:43.713201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:43.716872 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:43:43.760881 kubelet[2130]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:43:43.760881 kubelet[2130]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:43:43.760881 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 18:43:43.761181 kubelet[2130]: I0625 18:43:43.760922 2130 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:43:44.618977 kubelet[2130]: I0625 18:43:44.618930 2130 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:43:44.618977 kubelet[2130]: I0625 18:43:44.618962 2130 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:43:44.620966 kubelet[2130]: I0625 18:43:44.620941 2130 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:43:44.642280 kubelet[2130]: I0625 18:43:44.642068 2130 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:43:44.644649 kubelet[2130]: E0625 18:43:44.644615 2130 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:44.654517 kubelet[2130]: W0625 18:43:44.654467 2130 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 18:43:44.655214 kubelet[2130]: I0625 18:43:44.655190 2130 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:43:44.655406 kubelet[2130]: I0625 18:43:44.655392 2130 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:43:44.655590 kubelet[2130]: I0625 18:43:44.655564 2130 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:43:44.655702 kubelet[2130]: I0625 18:43:44.655593 2130 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:43:44.655702 kubelet[2130]: I0625 18:43:44.655602 2130 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:43:44.655797 kubelet[2130]: I0625 
18:43:44.655781 2130 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:43:44.659089 kubelet[2130]: I0625 18:43:44.659046 2130 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:43:44.659089 kubelet[2130]: I0625 18:43:44.659075 2130 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:43:44.659176 kubelet[2130]: I0625 18:43:44.659160 2130 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:43:44.659176 kubelet[2130]: I0625 18:43:44.659177 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:43:44.660708 kubelet[2130]: W0625 18:43:44.660613 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:44.660708 kubelet[2130]: W0625 18:43:44.660623 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:44.660708 kubelet[2130]: E0625 18:43:44.660705 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:44.661536 kubelet[2130]: E0625 18:43:44.660850 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:44.661536 kubelet[2130]: I0625 18:43:44.661156 2130 kuberuntime_manager.go:257] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:43:44.666525 kubelet[2130]: W0625 18:43:44.666293 2130 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 18:43:44.666995 kubelet[2130]: I0625 18:43:44.666963 2130 server.go:1232] "Started kubelet" Jun 25 18:43:44.669267 kubelet[2130]: I0625 18:43:44.668855 2130 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:43:44.669267 kubelet[2130]: I0625 18:43:44.669006 2130 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:43:44.669267 kubelet[2130]: I0625 18:43:44.669103 2130 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:43:44.669267 kubelet[2130]: I0625 18:43:44.668857 2130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:43:44.670178 kubelet[2130]: I0625 18:43:44.669790 2130 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:43:44.670178 kubelet[2130]: I0625 18:43:44.670109 2130 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:43:44.670269 kubelet[2130]: I0625 18:43:44.670204 2130 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:43:44.670269 kubelet[2130]: I0625 18:43:44.670265 2130 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:43:44.670595 kubelet[2130]: W0625 18:43:44.670540 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:44.670595 kubelet[2130]: E0625 18:43:44.670593 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:44.670827 kubelet[2130]: E0625 18:43:44.670809 2130 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="200ms" Jun 25 18:43:44.671741 kubelet[2130]: E0625 18:43:44.671714 2130 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:43:44.671741 kubelet[2130]: E0625 18:43:44.671743 2130 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:43:44.675309 kubelet[2130]: E0625 18:43:44.675061 2130 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc538d6f0776b3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 43, 44, 666941107, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 43, 44, 666941107, time.Local), Count:1, 
Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.123:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.123:6443: connect: connection refused'(may retry after sleeping) Jun 25 18:43:44.684969 kubelet[2130]: I0625 18:43:44.684851 2130 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:43:44.686677 kubelet[2130]: I0625 18:43:44.686552 2130 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:43:44.686677 kubelet[2130]: I0625 18:43:44.686574 2130 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:43:44.686677 kubelet[2130]: I0625 18:43:44.686589 2130 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:43:44.686677 kubelet[2130]: E0625 18:43:44.686666 2130 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:43:44.688423 kubelet[2130]: W0625 18:43:44.688355 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:44.688423 kubelet[2130]: E0625 18:43:44.688414 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:44.694372 kubelet[2130]: I0625 18:43:44.694345 2130 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:43:44.694372 kubelet[2130]: I0625 18:43:44.694364 2130 cpu_manager.go:215] 
"Reconciling" reconcilePeriod="10s" Jun 25 18:43:44.694372 kubelet[2130]: I0625 18:43:44.694381 2130 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:43:44.696758 kubelet[2130]: I0625 18:43:44.696726 2130 policy_none.go:49] "None policy: Start" Jun 25 18:43:44.697884 kubelet[2130]: I0625 18:43:44.697473 2130 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:43:44.697884 kubelet[2130]: I0625 18:43:44.697499 2130 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:43:44.704262 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:43:44.717148 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:43:44.719929 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 18:43:44.735425 kubelet[2130]: I0625 18:43:44.735385 2130 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:43:44.735764 kubelet[2130]: I0625 18:43:44.735729 2130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:43:44.736385 kubelet[2130]: E0625 18:43:44.736340 2130 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 18:43:44.771591 kubelet[2130]: I0625 18:43:44.771550 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:43:44.773050 kubelet[2130]: E0625 18:43:44.772369 2130 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Jun 25 18:43:44.787659 kubelet[2130]: I0625 18:43:44.787532 2130 topology_manager.go:215] "Topology Admit Handler" podUID="d7f405bb80907fe78adde46158295856" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 
25 18:43:44.789429 kubelet[2130]: I0625 18:43:44.789409 2130 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:43:44.791050 kubelet[2130]: I0625 18:43:44.791027 2130 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:43:44.796424 systemd[1]: Created slice kubepods-burstable-podd7f405bb80907fe78adde46158295856.slice - libcontainer container kubepods-burstable-podd7f405bb80907fe78adde46158295856.slice. Jun 25 18:43:44.819510 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice - libcontainer container kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. Jun 25 18:43:44.832527 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice - libcontainer container kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. 
Jun 25 18:43:44.872328 kubelet[2130]: E0625 18:43:44.872217 2130 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="400ms" Jun 25 18:43:44.872495 kubelet[2130]: I0625 18:43:44.872480 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7f405bb80907fe78adde46158295856-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7f405bb80907fe78adde46158295856\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:43:44.872534 kubelet[2130]: I0625 18:43:44.872507 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7f405bb80907fe78adde46158295856-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7f405bb80907fe78adde46158295856\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:43:44.872534 kubelet[2130]: I0625 18:43:44.872531 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7f405bb80907fe78adde46158295856-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d7f405bb80907fe78adde46158295856\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:43:44.872687 kubelet[2130]: I0625 18:43:44.872551 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:43:44.872687 kubelet[2130]: I0625 18:43:44.872582 2130 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:43:44.872687 kubelet[2130]: I0625 18:43:44.872628 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:43:44.872687 kubelet[2130]: I0625 18:43:44.872671 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:43:44.872842 kubelet[2130]: I0625 18:43:44.872746 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:43:44.872842 kubelet[2130]: I0625 18:43:44.872786 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:43:44.977928 kubelet[2130]: I0625 18:43:44.977814 2130 
kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:43:44.978170 kubelet[2130]: E0625 18:43:44.978156 2130 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Jun 25 18:43:45.117174 kubelet[2130]: E0625 18:43:45.117129 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:45.117839 containerd[1439]: time="2024-06-25T18:43:45.117796135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d7f405bb80907fe78adde46158295856,Namespace:kube-system,Attempt:0,}" Jun 25 18:43:45.129721 kubelet[2130]: E0625 18:43:45.129463 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:45.130424 containerd[1439]: time="2024-06-25T18:43:45.130387608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jun 25 18:43:45.135976 kubelet[2130]: E0625 18:43:45.135791 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:45.136173 containerd[1439]: time="2024-06-25T18:43:45.136094203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jun 25 18:43:45.273421 kubelet[2130]: E0625 18:43:45.273390 2130 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="800ms" Jun 25 18:43:45.379781 kubelet[2130]: I0625 18:43:45.379668 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:43:45.380307 kubelet[2130]: E0625 18:43:45.380287 2130 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Jun 25 18:43:45.610955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257189231.mount: Deactivated successfully. Jun 25 18:43:45.619260 containerd[1439]: time="2024-06-25T18:43:45.619165387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:43:45.620541 containerd[1439]: time="2024-06-25T18:43:45.620502339Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:43:45.621361 containerd[1439]: time="2024-06-25T18:43:45.621257444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:43:45.623017 containerd[1439]: time="2024-06-25T18:43:45.622870931Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:43:45.624685 containerd[1439]: time="2024-06-25T18:43:45.624050012Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:43:45.625851 containerd[1439]: time="2024-06-25T18:43:45.625806382Z" level=info msg="ImageCreate event 
name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:43:45.626567 containerd[1439]: time="2024-06-25T18:43:45.626534390Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jun 25 18:43:45.628903 containerd[1439]: time="2024-06-25T18:43:45.628864933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:43:45.631294 containerd[1439]: time="2024-06-25T18:43:45.631195197Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 513.296426ms" Jun 25 18:43:45.632550 containerd[1439]: time="2024-06-25T18:43:45.632454492Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 501.983552ms" Jun 25 18:43:45.633266 containerd[1439]: time="2024-06-25T18:43:45.633054124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 496.891456ms" Jun 25 18:43:45.660305 kubelet[2130]: W0625 
18:43:45.660057 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:45.660305 kubelet[2130]: E0625 18:43:45.660119 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:45.672457 kubelet[2130]: W0625 18:43:45.671808 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:45.672457 kubelet[2130]: E0625 18:43:45.671862 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:45.806454 containerd[1439]: time="2024-06-25T18:43:45.806215840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:43:45.806559 containerd[1439]: time="2024-06-25T18:43:45.806456365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:43:45.806778 containerd[1439]: time="2024-06-25T18:43:45.806519513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:43:45.806778 containerd[1439]: time="2024-06-25T18:43:45.806535380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:43:45.807429 containerd[1439]: time="2024-06-25T18:43:45.807187290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:43:45.807429 containerd[1439]: time="2024-06-25T18:43:45.807230495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:43:45.807429 containerd[1439]: time="2024-06-25T18:43:45.807243284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:43:45.807429 containerd[1439]: time="2024-06-25T18:43:45.807252357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:43:45.809352 containerd[1439]: time="2024-06-25T18:43:45.808496704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:43:45.809352 containerd[1439]: time="2024-06-25T18:43:45.808550340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:43:45.809352 containerd[1439]: time="2024-06-25T18:43:45.808577918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:43:45.809352 containerd[1439]: time="2024-06-25T18:43:45.808591747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:43:45.811064 kubelet[2130]: W0625 18:43:45.811012 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:45.811338 kubelet[2130]: E0625 18:43:45.811069 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jun 25 18:43:45.833233 systemd[1]: Started cri-containerd-4e9b02a505c61bbae6f23e1306e5b0d796ea8c314603a33e8d5b38e1f857f5c6.scope - libcontainer container 4e9b02a505c61bbae6f23e1306e5b0d796ea8c314603a33e8d5b38e1f857f5c6. Jun 25 18:43:45.834261 systemd[1]: Started cri-containerd-a8dd2347cb08cb023e620a341d9edc13d6d93227391c6ef204b7b4242ed7cf18.scope - libcontainer container a8dd2347cb08cb023e620a341d9edc13d6d93227391c6ef204b7b4242ed7cf18. Jun 25 18:43:45.837938 systemd[1]: Started cri-containerd-da57cf14436066e1d6493d4b9e7d42d8e3b983719eeca624fc9bf954ec10b5e3.scope - libcontainer container da57cf14436066e1d6493d4b9e7d42d8e3b983719eeca624fc9bf954ec10b5e3. 
Jun 25 18:43:45.871084 containerd[1439]: time="2024-06-25T18:43:45.870176587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e9b02a505c61bbae6f23e1306e5b0d796ea8c314603a33e8d5b38e1f857f5c6\"" Jun 25 18:43:45.871084 containerd[1439]: time="2024-06-25T18:43:45.870839288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"da57cf14436066e1d6493d4b9e7d42d8e3b983719eeca624fc9bf954ec10b5e3\"" Jun 25 18:43:45.871661 kubelet[2130]: E0625 18:43:45.871622 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:45.871760 kubelet[2130]: E0625 18:43:45.871744 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:45.874343 containerd[1439]: time="2024-06-25T18:43:45.874310143Z" level=info msg="CreateContainer within sandbox \"4e9b02a505c61bbae6f23e1306e5b0d796ea8c314603a33e8d5b38e1f857f5c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:43:45.874630 containerd[1439]: time="2024-06-25T18:43:45.874607901Z" level=info msg="CreateContainer within sandbox \"da57cf14436066e1d6493d4b9e7d42d8e3b983719eeca624fc9bf954ec10b5e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:43:45.877274 containerd[1439]: time="2024-06-25T18:43:45.877249471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d7f405bb80907fe78adde46158295856,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8dd2347cb08cb023e620a341d9edc13d6d93227391c6ef204b7b4242ed7cf18\"" Jun 25 
18:43:45.878485 kubelet[2130]: E0625 18:43:45.878466 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:45.880497 containerd[1439]: time="2024-06-25T18:43:45.880461897Z" level=info msg="CreateContainer within sandbox \"a8dd2347cb08cb023e620a341d9edc13d6d93227391c6ef204b7b4242ed7cf18\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:43:45.903886 containerd[1439]: time="2024-06-25T18:43:45.903798465Z" level=info msg="CreateContainer within sandbox \"4e9b02a505c61bbae6f23e1306e5b0d796ea8c314603a33e8d5b38e1f857f5c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a42038ce1fe5335aee37f8ded9d8575104fec3b09b1c037766024bfa2ab872a1\"" Jun 25 18:43:45.905008 containerd[1439]: time="2024-06-25T18:43:45.904976107Z" level=info msg="CreateContainer within sandbox \"da57cf14436066e1d6493d4b9e7d42d8e3b983719eeca624fc9bf954ec10b5e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e4ac7ffc92267be68848dfa97ccc64a8001f31a295fe4590314afb3df6637a0d\"" Jun 25 18:43:45.907622 containerd[1439]: time="2024-06-25T18:43:45.906677282Z" level=info msg="StartContainer for \"a42038ce1fe5335aee37f8ded9d8575104fec3b09b1c037766024bfa2ab872a1\"" Jun 25 18:43:45.909899 containerd[1439]: time="2024-06-25T18:43:45.909872042Z" level=info msg="StartContainer for \"e4ac7ffc92267be68848dfa97ccc64a8001f31a295fe4590314afb3df6637a0d\"" Jun 25 18:43:45.910626 containerd[1439]: time="2024-06-25T18:43:45.910591017Z" level=info msg="CreateContainer within sandbox \"a8dd2347cb08cb023e620a341d9edc13d6d93227391c6ef204b7b4242ed7cf18\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4a044e252eb2b591863e937018c55119ee6be19b7f2cddfbad02ee5b526ccbfd\"" Jun 25 18:43:45.911177 containerd[1439]: time="2024-06-25T18:43:45.911145566Z" level=info 
msg="StartContainer for \"4a044e252eb2b591863e937018c55119ee6be19b7f2cddfbad02ee5b526ccbfd\"" Jun 25 18:43:45.933906 systemd[1]: Started cri-containerd-e4ac7ffc92267be68848dfa97ccc64a8001f31a295fe4590314afb3df6637a0d.scope - libcontainer container e4ac7ffc92267be68848dfa97ccc64a8001f31a295fe4590314afb3df6637a0d. Jun 25 18:43:45.937972 systemd[1]: Started cri-containerd-4a044e252eb2b591863e937018c55119ee6be19b7f2cddfbad02ee5b526ccbfd.scope - libcontainer container 4a044e252eb2b591863e937018c55119ee6be19b7f2cddfbad02ee5b526ccbfd. Jun 25 18:43:45.939861 systemd[1]: Started cri-containerd-a42038ce1fe5335aee37f8ded9d8575104fec3b09b1c037766024bfa2ab872a1.scope - libcontainer container a42038ce1fe5335aee37f8ded9d8575104fec3b09b1c037766024bfa2ab872a1. Jun 25 18:43:46.006446 containerd[1439]: time="2024-06-25T18:43:46.006316148Z" level=info msg="StartContainer for \"e4ac7ffc92267be68848dfa97ccc64a8001f31a295fe4590314afb3df6637a0d\" returns successfully" Jun 25 18:43:46.006446 containerd[1439]: time="2024-06-25T18:43:46.006361276Z" level=info msg="StartContainer for \"4a044e252eb2b591863e937018c55119ee6be19b7f2cddfbad02ee5b526ccbfd\" returns successfully" Jun 25 18:43:46.006446 containerd[1439]: time="2024-06-25T18:43:46.006417716Z" level=info msg="StartContainer for \"a42038ce1fe5335aee37f8ded9d8575104fec3b09b1c037766024bfa2ab872a1\" returns successfully" Jun 25 18:43:46.077286 kubelet[2130]: E0625 18:43:46.074672 2130 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="1.6s" Jun 25 18:43:46.183018 kubelet[2130]: I0625 18:43:46.182894 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:43:46.183661 kubelet[2130]: E0625 18:43:46.183380 2130 kubelet_node_status.go:92] "Unable to register node with API server" err="Post 
\"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Jun 25 18:43:46.699325 kubelet[2130]: E0625 18:43:46.699304 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:46.701895 kubelet[2130]: E0625 18:43:46.701870 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:46.702194 kubelet[2130]: E0625 18:43:46.702174 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:47.690430 kubelet[2130]: E0625 18:43:47.690390 2130 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 18:43:47.704630 kubelet[2130]: E0625 18:43:47.704587 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:47.785437 kubelet[2130]: I0625 18:43:47.785394 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:43:47.792656 kubelet[2130]: I0625 18:43:47.792612 2130 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 18:43:47.799445 kubelet[2130]: E0625 18:43:47.799419 2130 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:43:47.899854 kubelet[2130]: E0625 18:43:47.899824 2130 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:43:48.000987 kubelet[2130]: E0625 18:43:48.000900 2130 kubelet_node_status.go:458] "Error getting 
the current node from lister" err="node \"localhost\" not found" Jun 25 18:43:48.101464 kubelet[2130]: E0625 18:43:48.101431 2130 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:43:48.202023 kubelet[2130]: E0625 18:43:48.201996 2130 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:43:48.302579 kubelet[2130]: E0625 18:43:48.302499 2130 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:43:48.403111 kubelet[2130]: E0625 18:43:48.403049 2130 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:43:48.503563 kubelet[2130]: E0625 18:43:48.503529 2130 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:43:48.604507 kubelet[2130]: E0625 18:43:48.604383 2130 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:43:48.704608 kubelet[2130]: E0625 18:43:48.704579 2130 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:43:48.805388 kubelet[2130]: E0625 18:43:48.805348 2130 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:43:49.380715 kubelet[2130]: E0625 18:43:49.380688 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:49.662339 kubelet[2130]: I0625 18:43:49.662307 2130 apiserver.go:52] "Watching apiserver" Jun 25 18:43:49.670816 kubelet[2130]: I0625 18:43:49.670783 2130 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:43:49.706404 kubelet[2130]: E0625 
18:43:49.706381 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:50.322835 systemd[1]: Reloading requested from client PID 2410 ('systemctl') (unit session-7.scope)... Jun 25 18:43:50.322849 systemd[1]: Reloading... Jun 25 18:43:50.389728 zram_generator::config[2447]: No configuration found. Jun 25 18:43:50.468976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:43:50.533353 systemd[1]: Reloading finished in 210 ms. Jun 25 18:43:50.565566 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:50.574773 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:43:50.575711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:50.575829 systemd[1]: kubelet.service: Consumed 1.273s CPU time, 117.3M memory peak, 0B memory swap peak. Jun 25 18:43:50.585944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:50.670842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:50.674151 (kubelet)[2489]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:43:50.723257 kubelet[2489]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:43:50.723257 kubelet[2489]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jun 25 18:43:50.723257 kubelet[2489]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:43:50.723257 kubelet[2489]: I0625 18:43:50.722933 2489 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:43:50.728311 kubelet[2489]: I0625 18:43:50.727979 2489 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:43:50.728311 kubelet[2489]: I0625 18:43:50.728002 2489 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:43:50.728311 kubelet[2489]: I0625 18:43:50.728155 2489 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:43:50.729823 kubelet[2489]: I0625 18:43:50.729716 2489 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:43:50.730586 kubelet[2489]: I0625 18:43:50.730570 2489 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:43:50.736687 kubelet[2489]: W0625 18:43:50.735466 2489 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 18:43:50.736687 kubelet[2489]: I0625 18:43:50.736170 2489 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:43:50.736687 kubelet[2489]: I0625 18:43:50.736363 2489 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:43:50.736687 kubelet[2489]: I0625 18:43:50.736502 2489 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:43:50.736687 kubelet[2489]: I0625 18:43:50.736527 2489 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:43:50.736687 kubelet[2489]: I0625 18:43:50.736535 2489 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:43:50.736904 kubelet[2489]: I0625 
18:43:50.736567 2489 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:43:50.736958 kubelet[2489]: I0625 18:43:50.736942 2489 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:43:50.737013 kubelet[2489]: I0625 18:43:50.737004 2489 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:43:50.737100 kubelet[2489]: I0625 18:43:50.737087 2489 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:43:50.737334 kubelet[2489]: I0625 18:43:50.737291 2489 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:43:50.737953 kubelet[2489]: I0625 18:43:50.737926 2489 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:43:50.738761 kubelet[2489]: I0625 18:43:50.738744 2489 server.go:1232] "Started kubelet" Jun 25 18:43:50.739049 kubelet[2489]: I0625 18:43:50.739025 2489 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:43:50.739216 kubelet[2489]: I0625 18:43:50.739182 2489 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:43:50.739264 kubelet[2489]: I0625 18:43:50.739225 2489 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:43:50.739711 kubelet[2489]: E0625 18:43:50.739686 2489 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:43:50.739711 kubelet[2489]: E0625 18:43:50.739714 2489 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:43:50.742466 kubelet[2489]: I0625 18:43:50.742440 2489 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:43:50.747202 kubelet[2489]: I0625 18:43:50.745084 2489 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:43:50.747202 kubelet[2489]: I0625 18:43:50.745249 2489 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:43:50.747202 kubelet[2489]: I0625 18:43:50.745415 2489 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:43:50.747202 kubelet[2489]: I0625 18:43:50.745630 2489 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:43:50.758161 kubelet[2489]: I0625 18:43:50.758144 2489 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:43:50.759057 kubelet[2489]: I0625 18:43:50.759039 2489 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:43:50.759150 kubelet[2489]: I0625 18:43:50.759140 2489 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:43:50.759209 kubelet[2489]: I0625 18:43:50.759200 2489 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:43:50.759303 kubelet[2489]: E0625 18:43:50.759293 2489 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:43:50.810696 kubelet[2489]: I0625 18:43:50.810668 2489 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:43:50.810696 kubelet[2489]: I0625 18:43:50.810691 2489 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:43:50.810836 kubelet[2489]: I0625 18:43:50.810709 2489 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:43:50.810871 kubelet[2489]: I0625 18:43:50.810855 2489 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:43:50.810908 
kubelet[2489]: I0625 18:43:50.810880 2489 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:43:50.810908 kubelet[2489]: I0625 18:43:50.810888 2489 policy_none.go:49] "None policy: Start" Jun 25 18:43:50.811424 kubelet[2489]: I0625 18:43:50.811401 2489 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:43:50.811478 kubelet[2489]: I0625 18:43:50.811431 2489 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:43:50.811566 kubelet[2489]: I0625 18:43:50.811551 2489 state_mem.go:75] "Updated machine memory state" Jun 25 18:43:50.815090 kubelet[2489]: I0625 18:43:50.815073 2489 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:43:50.815399 kubelet[2489]: I0625 18:43:50.815291 2489 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:43:50.850450 kubelet[2489]: I0625 18:43:50.850377 2489 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:43:50.856012 kubelet[2489]: I0625 18:43:50.855978 2489 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jun 25 18:43:50.856090 kubelet[2489]: I0625 18:43:50.856045 2489 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 18:43:50.859497 kubelet[2489]: I0625 18:43:50.859474 2489 topology_manager.go:215] "Topology Admit Handler" podUID="d7f405bb80907fe78adde46158295856" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:43:50.859497 kubelet[2489]: I0625 18:43:50.859594 2489 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:43:50.859497 kubelet[2489]: I0625 18:43:50.859633 2489 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:43:50.865667 
kubelet[2489]: E0625 18:43:50.865619 2489 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 25 18:43:51.046978 kubelet[2489]: I0625 18:43:51.046934 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7f405bb80907fe78adde46158295856-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7f405bb80907fe78adde46158295856\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:43:51.046978 kubelet[2489]: I0625 18:43:51.046987 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7f405bb80907fe78adde46158295856-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7f405bb80907fe78adde46158295856\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:43:51.047093 kubelet[2489]: I0625 18:43:51.047015 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7f405bb80907fe78adde46158295856-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d7f405bb80907fe78adde46158295856\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:43:51.047093 kubelet[2489]: I0625 18:43:51.047037 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:43:51.047093 kubelet[2489]: I0625 18:43:51.047057 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:43:51.047200 kubelet[2489]: I0625 18:43:51.047123 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:43:51.047200 kubelet[2489]: I0625 18:43:51.047158 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:43:51.047200 kubelet[2489]: I0625 18:43:51.047180 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:43:51.047200 kubelet[2489]: I0625 18:43:51.047202 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:43:51.165585 kubelet[2489]: E0625 18:43:51.165558 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:51.166044 kubelet[2489]: E0625 18:43:51.165661 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:51.166044 kubelet[2489]: E0625 18:43:51.165920 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:51.738606 kubelet[2489]: I0625 18:43:51.737933 2489 apiserver.go:52] "Watching apiserver" Jun 25 18:43:51.746868 kubelet[2489]: I0625 18:43:51.746791 2489 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:43:51.791299 kubelet[2489]: E0625 18:43:51.790189 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:51.791880 kubelet[2489]: E0625 18:43:51.791859 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:51.796997 kubelet[2489]: E0625 18:43:51.796956 2489 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 18:43:51.797539 kubelet[2489]: E0625 18:43:51.797443 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:51.846425 kubelet[2489]: I0625 18:43:51.846374 2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.846320188 podCreationTimestamp="2024-06-25 18:43:50 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:43:51.829456267 +0000 UTC m=+1.151426490" watchObservedRunningTime="2024-06-25 18:43:51.846320188 +0000 UTC m=+1.168290411" Jun 25 18:43:51.861432 kubelet[2489]: I0625 18:43:51.861391 2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8612129469999998 podCreationTimestamp="2024-06-25 18:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:43:51.846695371 +0000 UTC m=+1.168665634" watchObservedRunningTime="2024-06-25 18:43:51.861212947 +0000 UTC m=+1.183183170" Jun 25 18:43:51.861565 kubelet[2489]: I0625 18:43:51.861525 2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.861505765 podCreationTimestamp="2024-06-25 18:43:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:43:51.857675209 +0000 UTC m=+1.179645432" watchObservedRunningTime="2024-06-25 18:43:51.861505765 +0000 UTC m=+1.183475988" Jun 25 18:43:52.793680 kubelet[2489]: E0625 18:43:52.791070 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:54.439385 kubelet[2489]: E0625 18:43:54.439297 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:55.603882 sudo[1615]: pam_unix(sudo:session): session closed for user root Jun 25 18:43:55.606742 sshd[1612]: pam_unix(sshd:session): session closed for user core Jun 25 
18:43:55.609173 systemd[1]: sshd@6-10.0.0.123:22-10.0.0.1:49634.service: Deactivated successfully. Jun 25 18:43:55.610821 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:43:55.611671 systemd[1]: session-7.scope: Consumed 6.117s CPU time, 136.5M memory peak, 0B memory swap peak. Jun 25 18:43:55.612722 systemd-logind[1417]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:43:55.613739 systemd-logind[1417]: Removed session 7. Jun 25 18:43:58.588534 kubelet[2489]: E0625 18:43:58.588237 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:58.804733 kubelet[2489]: E0625 18:43:58.803563 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:59.742047 kubelet[2489]: E0625 18:43:59.742008 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:43:59.805375 kubelet[2489]: E0625 18:43:59.805344 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:03.038675 kubelet[2489]: I0625 18:44:03.036940 2489 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:44:03.039494 containerd[1439]: time="2024-06-25T18:44:03.039325346Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 25 18:44:03.039771 kubelet[2489]: I0625 18:44:03.039547 2489 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:44:03.140727 update_engine[1424]: I0625 18:44:03.140678 1424 update_attempter.cc:509] Updating boot flags... Jun 25 18:44:03.170727 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2588) Jun 25 18:44:03.192675 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2586) Jun 25 18:44:03.226743 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2586) Jun 25 18:44:03.483906 kubelet[2489]: I0625 18:44:03.483875 2489 topology_manager.go:215] "Topology Admit Handler" podUID="68c77468-3a7a-46de-82b9-822ff47249f1" podNamespace="kube-system" podName="kube-proxy-9244w" Jun 25 18:44:03.492420 systemd[1]: Created slice kubepods-besteffort-pod68c77468_3a7a_46de_82b9_822ff47249f1.slice - libcontainer container kubepods-besteffort-pod68c77468_3a7a_46de_82b9_822ff47249f1.slice. 
Jun 25 18:44:03.562234 kubelet[2489]: I0625 18:44:03.562213 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hj6j\" (UniqueName: \"kubernetes.io/projected/68c77468-3a7a-46de-82b9-822ff47249f1-kube-api-access-4hj6j\") pod \"kube-proxy-9244w\" (UID: \"68c77468-3a7a-46de-82b9-822ff47249f1\") " pod="kube-system/kube-proxy-9244w" Jun 25 18:44:03.562365 kubelet[2489]: I0625 18:44:03.562248 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/68c77468-3a7a-46de-82b9-822ff47249f1-kube-proxy\") pod \"kube-proxy-9244w\" (UID: \"68c77468-3a7a-46de-82b9-822ff47249f1\") " pod="kube-system/kube-proxy-9244w" Jun 25 18:44:03.562365 kubelet[2489]: I0625 18:44:03.562273 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68c77468-3a7a-46de-82b9-822ff47249f1-xtables-lock\") pod \"kube-proxy-9244w\" (UID: \"68c77468-3a7a-46de-82b9-822ff47249f1\") " pod="kube-system/kube-proxy-9244w" Jun 25 18:44:03.562365 kubelet[2489]: I0625 18:44:03.562311 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68c77468-3a7a-46de-82b9-822ff47249f1-lib-modules\") pod \"kube-proxy-9244w\" (UID: \"68c77468-3a7a-46de-82b9-822ff47249f1\") " pod="kube-system/kube-proxy-9244w" Jun 25 18:44:03.804283 kubelet[2489]: E0625 18:44:03.804194 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:03.805891 containerd[1439]: time="2024-06-25T18:44:03.805830317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9244w,Uid:68c77468-3a7a-46de-82b9-822ff47249f1,Namespace:kube-system,Attempt:0,}" Jun 
25 18:44:03.825332 containerd[1439]: time="2024-06-25T18:44:03.825239465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:03.825332 containerd[1439]: time="2024-06-25T18:44:03.825293707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:03.825332 containerd[1439]: time="2024-06-25T18:44:03.825311827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:03.825484 containerd[1439]: time="2024-06-25T18:44:03.825325468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:03.852798 systemd[1]: Started cri-containerd-a8a31022c6cd1e3a785be6757c5e1d3e55c4e6c9caedd8e299f1d8f92c7dd613.scope - libcontainer container a8a31022c6cd1e3a785be6757c5e1d3e55c4e6c9caedd8e299f1d8f92c7dd613. 
Jun 25 18:44:03.869088 containerd[1439]: time="2024-06-25T18:44:03.869046923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9244w,Uid:68c77468-3a7a-46de-82b9-822ff47249f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8a31022c6cd1e3a785be6757c5e1d3e55c4e6c9caedd8e299f1d8f92c7dd613\"" Jun 25 18:44:03.870419 kubelet[2489]: E0625 18:44:03.870258 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:03.873618 containerd[1439]: time="2024-06-25T18:44:03.873373303Z" level=info msg="CreateContainer within sandbox \"a8a31022c6cd1e3a785be6757c5e1d3e55c4e6c9caedd8e299f1d8f92c7dd613\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:44:03.891459 containerd[1439]: time="2024-06-25T18:44:03.891367725Z" level=info msg="CreateContainer within sandbox \"a8a31022c6cd1e3a785be6757c5e1d3e55c4e6c9caedd8e299f1d8f92c7dd613\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d691ae13f946b1c6385eeee246503d04b58d3e31f4f846dd647999ce9896e4d\"" Jun 25 18:44:03.892019 containerd[1439]: time="2024-06-25T18:44:03.891985025Z" level=info msg="StartContainer for \"6d691ae13f946b1c6385eeee246503d04b58d3e31f4f846dd647999ce9896e4d\"" Jun 25 18:44:03.923971 systemd[1]: Started cri-containerd-6d691ae13f946b1c6385eeee246503d04b58d3e31f4f846dd647999ce9896e4d.scope - libcontainer container 6d691ae13f946b1c6385eeee246503d04b58d3e31f4f846dd647999ce9896e4d. 
Jun 25 18:44:03.951680 kubelet[2489]: I0625 18:44:03.950525 2489 topology_manager.go:215] "Topology Admit Handler" podUID="c3ce35ac-64d5-4063-881d-cd94e65d628a" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-hqc2l" Jun 25 18:44:03.966361 containerd[1439]: time="2024-06-25T18:44:03.966306231Z" level=info msg="StartContainer for \"6d691ae13f946b1c6385eeee246503d04b58d3e31f4f846dd647999ce9896e4d\" returns successfully" Jun 25 18:44:03.967223 systemd[1]: Created slice kubepods-besteffort-podc3ce35ac_64d5_4063_881d_cd94e65d628a.slice - libcontainer container kubepods-besteffort-podc3ce35ac_64d5_4063_881d_cd94e65d628a.slice. Jun 25 18:44:04.065744 kubelet[2489]: I0625 18:44:04.065654 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c3ce35ac-64d5-4063-881d-cd94e65d628a-var-lib-calico\") pod \"tigera-operator-76c4974c85-hqc2l\" (UID: \"c3ce35ac-64d5-4063-881d-cd94e65d628a\") " pod="tigera-operator/tigera-operator-76c4974c85-hqc2l" Jun 25 18:44:04.065744 kubelet[2489]: I0625 18:44:04.065697 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrbwx\" (UniqueName: \"kubernetes.io/projected/c3ce35ac-64d5-4063-881d-cd94e65d628a-kube-api-access-rrbwx\") pod \"tigera-operator-76c4974c85-hqc2l\" (UID: \"c3ce35ac-64d5-4063-881d-cd94e65d628a\") " pod="tigera-operator/tigera-operator-76c4974c85-hqc2l" Jun 25 18:44:04.272582 containerd[1439]: time="2024-06-25T18:44:04.272527196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-hqc2l,Uid:c3ce35ac-64d5-4063-881d-cd94e65d628a,Namespace:tigera-operator,Attempt:0,}" Jun 25 18:44:04.296491 containerd[1439]: time="2024-06-25T18:44:04.296332650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:04.296491 containerd[1439]: time="2024-06-25T18:44:04.296388851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:04.296491 containerd[1439]: time="2024-06-25T18:44:04.296401892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:04.296491 containerd[1439]: time="2024-06-25T18:44:04.296411332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:04.314785 systemd[1]: Started cri-containerd-10289352732cb92a8471badc5b35084826523216535d59497d98fd9fc8e1325d.scope - libcontainer container 10289352732cb92a8471badc5b35084826523216535d59497d98fd9fc8e1325d. Jun 25 18:44:04.339659 containerd[1439]: time="2024-06-25T18:44:04.339528580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-hqc2l,Uid:c3ce35ac-64d5-4063-881d-cd94e65d628a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"10289352732cb92a8471badc5b35084826523216535d59497d98fd9fc8e1325d\"" Jun 25 18:44:04.342439 containerd[1439]: time="2024-06-25T18:44:04.342355987Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 18:44:04.447302 kubelet[2489]: E0625 18:44:04.446221 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:04.677180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3817049491.mount: Deactivated successfully. 
Jun 25 18:44:04.818791 kubelet[2489]: E0625 18:44:04.818764 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:04.824804 kubelet[2489]: I0625 18:44:04.824772 2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9244w" podStartSLOduration=1.824633281 podCreationTimestamp="2024-06-25 18:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:04.824357313 +0000 UTC m=+14.146327536" watchObservedRunningTime="2024-06-25 18:44:04.824633281 +0000 UTC m=+14.146603504" Jun 25 18:44:05.387941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2218142767.mount: Deactivated successfully. Jun 25 18:44:06.715456 containerd[1439]: time="2024-06-25T18:44:06.715412088Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:06.716375 containerd[1439]: time="2024-06-25T18:44:06.716178349Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473646" Jun 25 18:44:06.717670 containerd[1439]: time="2024-06-25T18:44:06.717275140Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:06.719898 containerd[1439]: time="2024-06-25T18:44:06.719668327Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:06.720692 containerd[1439]: time="2024-06-25T18:44:06.720577432Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id 
\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.378137603s" Jun 25 18:44:06.720692 containerd[1439]: time="2024-06-25T18:44:06.720612673Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jun 25 18:44:06.730512 containerd[1439]: time="2024-06-25T18:44:06.730471909Z" level=info msg="CreateContainer within sandbox \"10289352732cb92a8471badc5b35084826523216535d59497d98fd9fc8e1325d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 18:44:06.740997 containerd[1439]: time="2024-06-25T18:44:06.740959362Z" level=info msg="CreateContainer within sandbox \"10289352732cb92a8471badc5b35084826523216535d59497d98fd9fc8e1325d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2a81c7884dd686a27c8425b0ea3e03e201b46ee97eac5187266f3c8802020fe3\"" Jun 25 18:44:06.741320 containerd[1439]: time="2024-06-25T18:44:06.741296971Z" level=info msg="StartContainer for \"2a81c7884dd686a27c8425b0ea3e03e201b46ee97eac5187266f3c8802020fe3\"" Jun 25 18:44:06.772823 systemd[1]: Started cri-containerd-2a81c7884dd686a27c8425b0ea3e03e201b46ee97eac5187266f3c8802020fe3.scope - libcontainer container 2a81c7884dd686a27c8425b0ea3e03e201b46ee97eac5187266f3c8802020fe3. 
Jun 25 18:44:06.791371 containerd[1439]: time="2024-06-25T18:44:06.791331290Z" level=info msg="StartContainer for \"2a81c7884dd686a27c8425b0ea3e03e201b46ee97eac5187266f3c8802020fe3\" returns successfully" Jun 25 18:44:06.831865 kubelet[2489]: I0625 18:44:06.831719 2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-hqc2l" podStartSLOduration=1.45104506 podCreationTimestamp="2024-06-25 18:44:03 +0000 UTC" firstStartedPulling="2024-06-25 18:44:04.340737577 +0000 UTC m=+13.662707800" lastFinishedPulling="2024-06-25 18:44:06.721373255 +0000 UTC m=+16.043343478" observedRunningTime="2024-06-25 18:44:06.830159095 +0000 UTC m=+16.152129358" watchObservedRunningTime="2024-06-25 18:44:06.831680738 +0000 UTC m=+16.153650961" Jun 25 18:44:10.349056 kubelet[2489]: I0625 18:44:10.349009 2489 topology_manager.go:215] "Topology Admit Handler" podUID="566ccb74-c593-452f-ae65-cb3f015791c3" podNamespace="calico-system" podName="calico-typha-6945f8b459-j7mxm" Jun 25 18:44:10.361626 systemd[1]: Created slice kubepods-besteffort-pod566ccb74_c593_452f_ae65_cb3f015791c3.slice - libcontainer container kubepods-besteffort-pod566ccb74_c593_452f_ae65_cb3f015791c3.slice. Jun 25 18:44:10.398762 kubelet[2489]: I0625 18:44:10.398720 2489 topology_manager.go:215] "Topology Admit Handler" podUID="75b52719-14ba-4545-9e43-3a685992b217" podNamespace="calico-system" podName="calico-node-74pgt" Jun 25 18:44:10.407071 systemd[1]: Created slice kubepods-besteffort-pod75b52719_14ba_4545_9e43_3a685992b217.slice - libcontainer container kubepods-besteffort-pod75b52719_14ba_4545_9e43_3a685992b217.slice. 
Jun 25 18:44:10.410334 kubelet[2489]: I0625 18:44:10.410286 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/566ccb74-c593-452f-ae65-cb3f015791c3-tigera-ca-bundle\") pod \"calico-typha-6945f8b459-j7mxm\" (UID: \"566ccb74-c593-452f-ae65-cb3f015791c3\") " pod="calico-system/calico-typha-6945f8b459-j7mxm" Jun 25 18:44:10.410469 kubelet[2489]: I0625 18:44:10.410368 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/566ccb74-c593-452f-ae65-cb3f015791c3-typha-certs\") pod \"calico-typha-6945f8b459-j7mxm\" (UID: \"566ccb74-c593-452f-ae65-cb3f015791c3\") " pod="calico-system/calico-typha-6945f8b459-j7mxm" Jun 25 18:44:10.410469 kubelet[2489]: I0625 18:44:10.410399 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn2hb\" (UniqueName: \"kubernetes.io/projected/566ccb74-c593-452f-ae65-cb3f015791c3-kube-api-access-vn2hb\") pod \"calico-typha-6945f8b459-j7mxm\" (UID: \"566ccb74-c593-452f-ae65-cb3f015791c3\") " pod="calico-system/calico-typha-6945f8b459-j7mxm" Jun 25 18:44:10.512679 kubelet[2489]: I0625 18:44:10.510761 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-policysync\") pod \"calico-node-74pgt\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " pod="calico-system/calico-node-74pgt" Jun 25 18:44:10.512679 kubelet[2489]: I0625 18:44:10.510847 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-flexvol-driver-host\") pod \"calico-node-74pgt\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " 
pod="calico-system/calico-node-74pgt" Jun 25 18:44:10.512679 kubelet[2489]: I0625 18:44:10.511288 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-var-run-calico\") pod \"calico-node-74pgt\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " pod="calico-system/calico-node-74pgt" Jun 25 18:44:10.512679 kubelet[2489]: I0625 18:44:10.510918 2489 topology_manager.go:215] "Topology Admit Handler" podUID="42da1b33-d6af-464d-8bc6-37e59885f0c5" podNamespace="calico-system" podName="csi-node-driver-g4ws7" Jun 25 18:44:10.512679 kubelet[2489]: I0625 18:44:10.511334 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-var-lib-calico\") pod \"calico-node-74pgt\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " pod="calico-system/calico-node-74pgt" Jun 25 18:44:10.512679 kubelet[2489]: I0625 18:44:10.511424 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-cni-net-dir\") pod \"calico-node-74pgt\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " pod="calico-system/calico-node-74pgt" Jun 25 18:44:10.512921 kubelet[2489]: I0625 18:44:10.511464 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/75b52719-14ba-4545-9e43-3a685992b217-node-certs\") pod \"calico-node-74pgt\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " pod="calico-system/calico-node-74pgt" Jun 25 18:44:10.512921 kubelet[2489]: I0625 18:44:10.511517 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-lib-modules\") pod \"calico-node-74pgt\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " pod="calico-system/calico-node-74pgt" Jun 25 18:44:10.512921 kubelet[2489]: I0625 18:44:10.511538 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-cni-log-dir\") pod \"calico-node-74pgt\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " pod="calico-system/calico-node-74pgt" Jun 25 18:44:10.512921 kubelet[2489]: I0625 18:44:10.511558 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8nz2\" (UniqueName: \"kubernetes.io/projected/75b52719-14ba-4545-9e43-3a685992b217-kube-api-access-g8nz2\") pod \"calico-node-74pgt\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " pod="calico-system/calico-node-74pgt" Jun 25 18:44:10.512921 kubelet[2489]: I0625 18:44:10.511578 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75b52719-14ba-4545-9e43-3a685992b217-tigera-ca-bundle\") pod \"calico-node-74pgt\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " pod="calico-system/calico-node-74pgt" Jun 25 18:44:10.513036 kubelet[2489]: E0625 18:44:10.511583 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g4ws7" podUID="42da1b33-d6af-464d-8bc6-37e59885f0c5" Jun 25 18:44:10.513036 kubelet[2489]: I0625 18:44:10.511599 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-cni-bin-dir\") pod \"calico-node-74pgt\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " pod="calico-system/calico-node-74pgt" Jun 25 18:44:10.513036 kubelet[2489]: I0625 18:44:10.511621 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-xtables-lock\") pod \"calico-node-74pgt\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " pod="calico-system/calico-node-74pgt" Jun 25 18:44:10.612335 kubelet[2489]: I0625 18:44:10.612237 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/42da1b33-d6af-464d-8bc6-37e59885f0c5-registration-dir\") pod \"csi-node-driver-g4ws7\" (UID: \"42da1b33-d6af-464d-8bc6-37e59885f0c5\") " pod="calico-system/csi-node-driver-g4ws7" Jun 25 18:44:10.612335 kubelet[2489]: I0625 18:44:10.612280 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/42da1b33-d6af-464d-8bc6-37e59885f0c5-varrun\") pod \"csi-node-driver-g4ws7\" (UID: \"42da1b33-d6af-464d-8bc6-37e59885f0c5\") " pod="calico-system/csi-node-driver-g4ws7" Jun 25 18:44:10.612335 kubelet[2489]: I0625 18:44:10.612302 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42da1b33-d6af-464d-8bc6-37e59885f0c5-kubelet-dir\") pod \"csi-node-driver-g4ws7\" (UID: \"42da1b33-d6af-464d-8bc6-37e59885f0c5\") " pod="calico-system/csi-node-driver-g4ws7" Jun 25 18:44:10.612502 kubelet[2489]: I0625 18:44:10.612422 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/42da1b33-d6af-464d-8bc6-37e59885f0c5-socket-dir\") pod \"csi-node-driver-g4ws7\" (UID: \"42da1b33-d6af-464d-8bc6-37e59885f0c5\") " pod="calico-system/csi-node-driver-g4ws7" Jun 25 18:44:10.612502 kubelet[2489]: I0625 18:44:10.612469 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgv5g\" (UniqueName: \"kubernetes.io/projected/42da1b33-d6af-464d-8bc6-37e59885f0c5-kube-api-access-tgv5g\") pod \"csi-node-driver-g4ws7\" (UID: \"42da1b33-d6af-464d-8bc6-37e59885f0c5\") " pod="calico-system/csi-node-driver-g4ws7" Jun 25 18:44:10.615839 kubelet[2489]: E0625 18:44:10.615802 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.615839 kubelet[2489]: W0625 18:44:10.615821 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.615839 kubelet[2489]: E0625 18:44:10.615850 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.625726 kubelet[2489]: E0625 18:44:10.625040 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.625726 kubelet[2489]: W0625 18:44:10.625060 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.625726 kubelet[2489]: E0625 18:44:10.625079 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:10.666628 kubelet[2489]: E0625 18:44:10.666377 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:10.667430 containerd[1439]: time="2024-06-25T18:44:10.667337765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6945f8b459-j7mxm,Uid:566ccb74-c593-452f-ae65-cb3f015791c3,Namespace:calico-system,Attempt:0,}" Jun 25 18:44:10.690338 containerd[1439]: time="2024-06-25T18:44:10.690231337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:10.690338 containerd[1439]: time="2024-06-25T18:44:10.690307659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:10.690338 containerd[1439]: time="2024-06-25T18:44:10.690328299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:10.690338 containerd[1439]: time="2024-06-25T18:44:10.690346339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:10.709451 kubelet[2489]: E0625 18:44:10.709412 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:10.715073 kubelet[2489]: E0625 18:44:10.715044 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.715073 kubelet[2489]: W0625 18:44:10.715062 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.715171 kubelet[2489]: E0625 18:44:10.715080 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.715295 containerd[1439]: time="2024-06-25T18:44:10.715249718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-74pgt,Uid:75b52719-14ba-4545-9e43-3a685992b217,Namespace:calico-system,Attempt:0,}" Jun 25 18:44:10.719461 kubelet[2489]: E0625 18:44:10.719132 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.719461 kubelet[2489]: W0625 18:44:10.719146 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.719461 kubelet[2489]: E0625 18:44:10.719165 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:10.719461 kubelet[2489]: E0625 18:44:10.719382 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.719461 kubelet[2489]: W0625 18:44:10.719391 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.719461 kubelet[2489]: E0625 18:44:10.719453 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.719212 systemd[1]: Started cri-containerd-dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c.scope - libcontainer container dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c. Jun 25 18:44:10.719840 kubelet[2489]: E0625 18:44:10.719778 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.719840 kubelet[2489]: W0625 18:44:10.719787 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.719840 kubelet[2489]: E0625 18:44:10.719814 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:10.719988 kubelet[2489]: E0625 18:44:10.719958 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.719988 kubelet[2489]: W0625 18:44:10.719974 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.720047 kubelet[2489]: E0625 18:44:10.719991 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.720212 kubelet[2489]: E0625 18:44:10.720193 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.720241 kubelet[2489]: W0625 18:44:10.720210 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.720268 kubelet[2489]: E0625 18:44:10.720242 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:10.720450 kubelet[2489]: E0625 18:44:10.720438 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.720450 kubelet[2489]: W0625 18:44:10.720448 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.720507 kubelet[2489]: E0625 18:44:10.720468 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.720701 kubelet[2489]: E0625 18:44:10.720686 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.720701 kubelet[2489]: W0625 18:44:10.720698 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.720843 kubelet[2489]: E0625 18:44:10.720803 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:10.720898 kubelet[2489]: E0625 18:44:10.720884 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.720898 kubelet[2489]: W0625 18:44:10.720894 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.720951 kubelet[2489]: E0625 18:44:10.720941 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.721655 kubelet[2489]: E0625 18:44:10.721616 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.721655 kubelet[2489]: W0625 18:44:10.721630 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.721750 kubelet[2489]: E0625 18:44:10.721743 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:10.721932 kubelet[2489]: E0625 18:44:10.721896 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.721932 kubelet[2489]: W0625 18:44:10.721908 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.723091 kubelet[2489]: E0625 18:44:10.722924 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.723256 kubelet[2489]: E0625 18:44:10.723201 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.723256 kubelet[2489]: W0625 18:44:10.723219 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.723310 kubelet[2489]: E0625 18:44:10.723280 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:10.725201 kubelet[2489]: E0625 18:44:10.725179 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.725201 kubelet[2489]: W0625 18:44:10.725196 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.725311 kubelet[2489]: E0625 18:44:10.725289 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.729713 kubelet[2489]: E0625 18:44:10.729685 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.729713 kubelet[2489]: W0625 18:44:10.729706 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.729878 kubelet[2489]: E0625 18:44:10.729842 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:10.742884 kubelet[2489]: E0625 18:44:10.742465 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.742884 kubelet[2489]: W0625 18:44:10.742482 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.742884 kubelet[2489]: E0625 18:44:10.742550 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.742884 kubelet[2489]: E0625 18:44:10.742768 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.742884 kubelet[2489]: W0625 18:44:10.742778 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.742884 kubelet[2489]: E0625 18:44:10.742839 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:10.743251 kubelet[2489]: E0625 18:44:10.743155 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.743251 kubelet[2489]: W0625 18:44:10.743167 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.743251 kubelet[2489]: E0625 18:44:10.743236 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.743695 kubelet[2489]: E0625 18:44:10.743557 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.743695 kubelet[2489]: W0625 18:44:10.743569 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.743817 kubelet[2489]: E0625 18:44:10.743803 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.744013 kubelet[2489]: E0625 18:44:10.744000 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.744249 kubelet[2489]: W0625 18:44:10.744233 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.744417 containerd[1439]: time="2024-06-25T18:44:10.743445774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:10.744581 kubelet[2489]: E0625 18:44:10.744543 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.744616 containerd[1439]: time="2024-06-25T18:44:10.744500598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:10.744678 containerd[1439]: time="2024-06-25T18:44:10.744529439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:10.744678 containerd[1439]: time="2024-06-25T18:44:10.744667442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:10.744885 kubelet[2489]: E0625 18:44:10.744791 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.744885 kubelet[2489]: W0625 18:44:10.744804 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.744885 kubelet[2489]: E0625 18:44:10.744843 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:10.745887 kubelet[2489]: E0625 18:44:10.745745 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.745887 kubelet[2489]: W0625 18:44:10.745761 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.745887 kubelet[2489]: E0625 18:44:10.745863 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.746094 kubelet[2489]: E0625 18:44:10.746081 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.746157 kubelet[2489]: W0625 18:44:10.746139 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.746280 kubelet[2489]: E0625 18:44:10.746270 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:10.749779 kubelet[2489]: E0625 18:44:10.749722 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.749779 kubelet[2489]: W0625 18:44:10.749770 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.749924 kubelet[2489]: E0625 18:44:10.749905 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.751006 kubelet[2489]: E0625 18:44:10.750037 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.751006 kubelet[2489]: W0625 18:44:10.750065 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.751006 kubelet[2489]: E0625 18:44:10.750106 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:10.751006 kubelet[2489]: E0625 18:44:10.750317 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.751006 kubelet[2489]: W0625 18:44:10.750326 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.751006 kubelet[2489]: E0625 18:44:10.750338 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:10.768060 systemd[1]: Started cri-containerd-6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e.scope - libcontainer container 6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e. Jun 25 18:44:10.776060 kubelet[2489]: E0625 18:44:10.776034 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:10.776060 kubelet[2489]: W0625 18:44:10.776054 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:10.776193 kubelet[2489]: E0625 18:44:10.776080 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:10.780609 containerd[1439]: time="2024-06-25T18:44:10.779455891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6945f8b459-j7mxm,Uid:566ccb74-c593-452f-ae65-cb3f015791c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\"" Jun 25 18:44:10.780714 kubelet[2489]: E0625 18:44:10.780164 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:10.784894 containerd[1439]: time="2024-06-25T18:44:10.784854016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 18:44:10.798052 containerd[1439]: time="2024-06-25T18:44:10.797949520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-74pgt,Uid:75b52719-14ba-4545-9e43-3a685992b217,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\"" Jun 25 18:44:10.798734 kubelet[2489]: E0625 18:44:10.798712 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:12.759935 kubelet[2489]: E0625 18:44:12.759514 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g4ws7" podUID="42da1b33-d6af-464d-8bc6-37e59885f0c5" Jun 25 18:44:13.557037 containerd[1439]: time="2024-06-25T18:44:13.556979710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:13.557992 containerd[1439]: time="2024-06-25T18:44:13.557774366Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jun 25 18:44:13.558716 containerd[1439]: time="2024-06-25T18:44:13.558681065Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:13.561711 containerd[1439]: time="2024-06-25T18:44:13.561419161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:13.561997 containerd[1439]: time="2024-06-25T18:44:13.561966092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.777073275s" Jun 25 18:44:13.562059 containerd[1439]: time="2024-06-25T18:44:13.561998493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jun 25 18:44:13.563020 containerd[1439]: time="2024-06-25T18:44:13.562932952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:44:13.569941 containerd[1439]: time="2024-06-25T18:44:13.569879054Z" level=info msg="CreateContainer within sandbox \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:44:13.583885 containerd[1439]: time="2024-06-25T18:44:13.583834179Z" level=info msg="CreateContainer within sandbox \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\" for 
&ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\"" Jun 25 18:44:13.585237 containerd[1439]: time="2024-06-25T18:44:13.585209847Z" level=info msg="StartContainer for \"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\"" Jun 25 18:44:13.618818 systemd[1]: Started cri-containerd-011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230.scope - libcontainer container 011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230. Jun 25 18:44:13.666590 containerd[1439]: time="2024-06-25T18:44:13.666544388Z" level=info msg="StartContainer for \"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\" returns successfully" Jun 25 18:44:13.837616 containerd[1439]: time="2024-06-25T18:44:13.837487801Z" level=info msg="StopContainer for \"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\" with timeout 300 (s)" Jun 25 18:44:13.838290 containerd[1439]: time="2024-06-25T18:44:13.838256336Z" level=info msg="Stop container \"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\" with signal terminated" Jun 25 18:44:13.853402 systemd[1]: cri-containerd-011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230.scope: Deactivated successfully. 
Jun 25 18:44:13.856607 kubelet[2489]: I0625 18:44:13.856361 2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6945f8b459-j7mxm" podStartSLOduration=1.076283843 podCreationTimestamp="2024-06-25 18:44:10 +0000 UTC" firstStartedPulling="2024-06-25 18:44:10.782292917 +0000 UTC m=+20.104263140" lastFinishedPulling="2024-06-25 18:44:13.562330899 +0000 UTC m=+22.884301122" observedRunningTime="2024-06-25 18:44:13.851767732 +0000 UTC m=+23.173737915" watchObservedRunningTime="2024-06-25 18:44:13.856321825 +0000 UTC m=+23.178292048" Jun 25 18:44:13.972434 containerd[1439]: time="2024-06-25T18:44:13.972373596Z" level=info msg="shim disconnected" id=011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230 namespace=k8s.io Jun 25 18:44:13.972434 containerd[1439]: time="2024-06-25T18:44:13.972430078Z" level=warning msg="cleaning up after shim disconnected" id=011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230 namespace=k8s.io Jun 25 18:44:13.972434 containerd[1439]: time="2024-06-25T18:44:13.972439198Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:44:13.989973 containerd[1439]: time="2024-06-25T18:44:13.989921475Z" level=info msg="StopContainer for \"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\" returns successfully" Jun 25 18:44:13.991594 containerd[1439]: time="2024-06-25T18:44:13.990574128Z" level=info msg="StopPodSandbox for \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\"" Jun 25 18:44:13.991594 containerd[1439]: time="2024-06-25T18:44:13.990613009Z" level=info msg="Container to stop \"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:44:13.998669 systemd[1]: cri-containerd-dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c.scope: Deactivated successfully. 
Jun 25 18:44:14.025553 containerd[1439]: time="2024-06-25T18:44:14.025493782Z" level=info msg="shim disconnected" id=dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c namespace=k8s.io Jun 25 18:44:14.025553 containerd[1439]: time="2024-06-25T18:44:14.025548583Z" level=warning msg="cleaning up after shim disconnected" id=dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c namespace=k8s.io Jun 25 18:44:14.025553 containerd[1439]: time="2024-06-25T18:44:14.025559223Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:44:14.039397 containerd[1439]: time="2024-06-25T18:44:14.039336773Z" level=info msg="TearDown network for sandbox \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\" successfully" Jun 25 18:44:14.039397 containerd[1439]: time="2024-06-25T18:44:14.039377974Z" level=info msg="StopPodSandbox for \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\" returns successfully" Jun 25 18:44:14.060015 kubelet[2489]: I0625 18:44:14.059540 2489 topology_manager.go:215] "Topology Admit Handler" podUID="649c2742-9e8f-4b08-a31e-e3a16d2fa735" podNamespace="calico-system" podName="calico-typha-5d5d9b9f4d-m8ffx" Jun 25 18:44:14.060015 kubelet[2489]: E0625 18:44:14.059595 2489 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="566ccb74-c593-452f-ae65-cb3f015791c3" containerName="calico-typha" Jun 25 18:44:14.060015 kubelet[2489]: I0625 18:44:14.059624 2489 memory_manager.go:346] "RemoveStaleState removing state" podUID="566ccb74-c593-452f-ae65-cb3f015791c3" containerName="calico-typha" Jun 25 18:44:14.066101 systemd[1]: Created slice kubepods-besteffort-pod649c2742_9e8f_4b08_a31e_e3a16d2fa735.slice - libcontainer container kubepods-besteffort-pod649c2742_9e8f_4b08_a31e_e3a16d2fa735.slice. 
Jun 25 18:44:14.104228 kubelet[2489]: E0625 18:44:14.104141 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.104228 kubelet[2489]: W0625 18:44:14.104165 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.104228 kubelet[2489]: E0625 18:44:14.104188 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.104465 kubelet[2489]: E0625 18:44:14.104433 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.104465 kubelet[2489]: W0625 18:44:14.104462 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.104515 kubelet[2489]: E0625 18:44:14.104476 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.105210 kubelet[2489]: E0625 18:44:14.105189 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.105210 kubelet[2489]: W0625 18:44:14.105208 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.105544 kubelet[2489]: E0625 18:44:14.105412 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.105890 kubelet[2489]: E0625 18:44:14.105869 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.105890 kubelet[2489]: W0625 18:44:14.105889 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.105991 kubelet[2489]: E0625 18:44:14.105904 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.106289 kubelet[2489]: E0625 18:44:14.106271 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.106289 kubelet[2489]: W0625 18:44:14.106287 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.106289 kubelet[2489]: E0625 18:44:14.106300 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.106558 kubelet[2489]: E0625 18:44:14.106507 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.106606 kubelet[2489]: W0625 18:44:14.106560 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.106606 kubelet[2489]: E0625 18:44:14.106603 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.107275 kubelet[2489]: E0625 18:44:14.107257 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.107381 kubelet[2489]: W0625 18:44:14.107275 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.107423 kubelet[2489]: E0625 18:44:14.107388 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.107887 kubelet[2489]: E0625 18:44:14.107860 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.107887 kubelet[2489]: W0625 18:44:14.107881 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.107946 kubelet[2489]: E0625 18:44:14.107895 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.109305 kubelet[2489]: E0625 18:44:14.109288 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.109616 kubelet[2489]: W0625 18:44:14.109304 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.109680 kubelet[2489]: E0625 18:44:14.109621 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.110991 kubelet[2489]: E0625 18:44:14.110879 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.110991 kubelet[2489]: W0625 18:44:14.110896 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.110991 kubelet[2489]: E0625 18:44:14.110916 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.111266 kubelet[2489]: E0625 18:44:14.111112 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.111266 kubelet[2489]: W0625 18:44:14.111123 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.111266 kubelet[2489]: E0625 18:44:14.111136 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.111653 kubelet[2489]: E0625 18:44:14.111606 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.111653 kubelet[2489]: W0625 18:44:14.111619 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.111653 kubelet[2489]: E0625 18:44:14.111632 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.141223 kubelet[2489]: E0625 18:44:14.141199 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.141223 kubelet[2489]: W0625 18:44:14.141217 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.141369 kubelet[2489]: E0625 18:44:14.141237 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.141369 kubelet[2489]: I0625 18:44:14.141274 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/566ccb74-c593-452f-ae65-cb3f015791c3-tigera-ca-bundle\") pod \"566ccb74-c593-452f-ae65-cb3f015791c3\" (UID: \"566ccb74-c593-452f-ae65-cb3f015791c3\") " Jun 25 18:44:14.141481 kubelet[2489]: E0625 18:44:14.141470 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.141481 kubelet[2489]: W0625 18:44:14.141481 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.141481 kubelet[2489]: E0625 18:44:14.141492 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.141571 kubelet[2489]: I0625 18:44:14.141513 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/566ccb74-c593-452f-ae65-cb3f015791c3-typha-certs\") pod \"566ccb74-c593-452f-ae65-cb3f015791c3\" (UID: \"566ccb74-c593-452f-ae65-cb3f015791c3\") " Jun 25 18:44:14.141856 kubelet[2489]: E0625 18:44:14.141741 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.141856 kubelet[2489]: W0625 18:44:14.141755 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.141856 kubelet[2489]: E0625 18:44:14.141767 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.141856 kubelet[2489]: I0625 18:44:14.141788 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn2hb\" (UniqueName: \"kubernetes.io/projected/566ccb74-c593-452f-ae65-cb3f015791c3-kube-api-access-vn2hb\") pod \"566ccb74-c593-452f-ae65-cb3f015791c3\" (UID: \"566ccb74-c593-452f-ae65-cb3f015791c3\") " Jun 25 18:44:14.142306 kubelet[2489]: E0625 18:44:14.142056 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.142306 kubelet[2489]: W0625 18:44:14.142074 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.142306 kubelet[2489]: E0625 18:44:14.142099 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.142552 kubelet[2489]: E0625 18:44:14.142536 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.142552 kubelet[2489]: W0625 18:44:14.142549 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.142621 kubelet[2489]: E0625 18:44:14.142563 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.142621 kubelet[2489]: I0625 18:44:14.142585 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/649c2742-9e8f-4b08-a31e-e3a16d2fa735-tigera-ca-bundle\") pod \"calico-typha-5d5d9b9f4d-m8ffx\" (UID: \"649c2742-9e8f-4b08-a31e-e3a16d2fa735\") " pod="calico-system/calico-typha-5d5d9b9f4d-m8ffx" Jun 25 18:44:14.142776 kubelet[2489]: E0625 18:44:14.142763 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.142776 kubelet[2489]: W0625 18:44:14.142775 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.142824 kubelet[2489]: E0625 18:44:14.142787 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.142824 kubelet[2489]: I0625 18:44:14.142805 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kt9l\" (UniqueName: \"kubernetes.io/projected/649c2742-9e8f-4b08-a31e-e3a16d2fa735-kube-api-access-5kt9l\") pod \"calico-typha-5d5d9b9f4d-m8ffx\" (UID: \"649c2742-9e8f-4b08-a31e-e3a16d2fa735\") " pod="calico-system/calico-typha-5d5d9b9f4d-m8ffx" Jun 25 18:44:14.142958 kubelet[2489]: E0625 18:44:14.142947 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.142981 kubelet[2489]: W0625 18:44:14.142957 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.142981 kubelet[2489]: E0625 18:44:14.142968 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.143034 kubelet[2489]: I0625 18:44:14.142986 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/649c2742-9e8f-4b08-a31e-e3a16d2fa735-typha-certs\") pod \"calico-typha-5d5d9b9f4d-m8ffx\" (UID: \"649c2742-9e8f-4b08-a31e-e3a16d2fa735\") " pod="calico-system/calico-typha-5d5d9b9f4d-m8ffx" Jun 25 18:44:14.143193 kubelet[2489]: E0625 18:44:14.143177 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.143193 kubelet[2489]: W0625 18:44:14.143190 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.143304 kubelet[2489]: E0625 18:44:14.143202 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.143368 kubelet[2489]: E0625 18:44:14.143353 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.143368 kubelet[2489]: W0625 18:44:14.143365 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.143417 kubelet[2489]: E0625 18:44:14.143377 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.143593 kubelet[2489]: E0625 18:44:14.143573 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.143593 kubelet[2489]: W0625 18:44:14.143591 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.143669 kubelet[2489]: E0625 18:44:14.143603 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.144579 kubelet[2489]: E0625 18:44:14.144466 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.144579 kubelet[2489]: W0625 18:44:14.144485 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.144579 kubelet[2489]: E0625 18:44:14.144501 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.144764 kubelet[2489]: I0625 18:44:14.144738 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/566ccb74-c593-452f-ae65-cb3f015791c3-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "566ccb74-c593-452f-ae65-cb3f015791c3" (UID: "566ccb74-c593-452f-ae65-cb3f015791c3"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:44:14.145119 kubelet[2489]: E0625 18:44:14.145100 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.145119 kubelet[2489]: W0625 18:44:14.145118 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.145192 kubelet[2489]: E0625 18:44:14.145133 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.145460 kubelet[2489]: E0625 18:44:14.145335 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.145460 kubelet[2489]: W0625 18:44:14.145350 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.145460 kubelet[2489]: E0625 18:44:14.145361 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.145585 kubelet[2489]: E0625 18:44:14.145507 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.145585 kubelet[2489]: W0625 18:44:14.145516 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.145585 kubelet[2489]: E0625 18:44:14.145528 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.146245 kubelet[2489]: I0625 18:44:14.145853 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/566ccb74-c593-452f-ae65-cb3f015791c3-kube-api-access-vn2hb" (OuterVolumeSpecName: "kube-api-access-vn2hb") pod "566ccb74-c593-452f-ae65-cb3f015791c3" (UID: "566ccb74-c593-452f-ae65-cb3f015791c3"). InnerVolumeSpecName "kube-api-access-vn2hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:44:14.146245 kubelet[2489]: E0625 18:44:14.145914 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.146245 kubelet[2489]: W0625 18:44:14.145927 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.146245 kubelet[2489]: E0625 18:44:14.145942 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.146245 kubelet[2489]: I0625 18:44:14.146203 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/566ccb74-c593-452f-ae65-cb3f015791c3-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "566ccb74-c593-452f-ae65-cb3f015791c3" (UID: "566ccb74-c593-452f-ae65-cb3f015791c3"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:44:14.244101 kubelet[2489]: E0625 18:44:14.244061 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.244624 kubelet[2489]: W0625 18:44:14.244602 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.244748 kubelet[2489]: E0625 18:44:14.244734 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.245049 kubelet[2489]: E0625 18:44:14.245032 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.245208 kubelet[2489]: W0625 18:44:14.245193 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.245289 kubelet[2489]: E0625 18:44:14.245269 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.245474 kubelet[2489]: E0625 18:44:14.245451 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.245474 kubelet[2489]: W0625 18:44:14.245469 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.245540 kubelet[2489]: E0625 18:44:14.245492 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.245674 kubelet[2489]: E0625 18:44:14.245663 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.245674 kubelet[2489]: W0625 18:44:14.245673 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.245736 kubelet[2489]: E0625 18:44:14.245688 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.245836 kubelet[2489]: E0625 18:44:14.245826 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.245836 kubelet[2489]: W0625 18:44:14.245835 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.245882 kubelet[2489]: E0625 18:44:14.245846 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.246045 kubelet[2489]: E0625 18:44:14.246027 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.246045 kubelet[2489]: W0625 18:44:14.246038 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.246108 kubelet[2489]: E0625 18:44:14.246053 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.246312 kubelet[2489]: E0625 18:44:14.246296 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.246312 kubelet[2489]: W0625 18:44:14.246311 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.246511 kubelet[2489]: E0625 18:44:14.246360 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.246511 kubelet[2489]: I0625 18:44:14.246406 2489 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/566ccb74-c593-452f-ae65-cb3f015791c3-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:14.246511 kubelet[2489]: I0625 18:44:14.246421 2489 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/566ccb74-c593-452f-ae65-cb3f015791c3-typha-certs\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:14.246511 kubelet[2489]: I0625 18:44:14.246431 2489 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vn2hb\" (UniqueName: \"kubernetes.io/projected/566ccb74-c593-452f-ae65-cb3f015791c3-kube-api-access-vn2hb\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:14.246835 kubelet[2489]: E0625 18:44:14.246697 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.246835 kubelet[2489]: W0625 18:44:14.246712 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 
18:44:14.246835 kubelet[2489]: E0625 18:44:14.246728 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.247137 kubelet[2489]: E0625 18:44:14.247045 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.247137 kubelet[2489]: W0625 18:44:14.247058 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.247137 kubelet[2489]: E0625 18:44:14.247073 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.247561 kubelet[2489]: E0625 18:44:14.247444 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.247561 kubelet[2489]: W0625 18:44:14.247459 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.247561 kubelet[2489]: E0625 18:44:14.247473 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.247765 kubelet[2489]: E0625 18:44:14.247752 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.247829 kubelet[2489]: W0625 18:44:14.247818 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.247953 kubelet[2489]: E0625 18:44:14.247898 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.248062 kubelet[2489]: E0625 18:44:14.248050 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.248116 kubelet[2489]: W0625 18:44:14.248105 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.248255 kubelet[2489]: E0625 18:44:14.248170 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.248373 kubelet[2489]: E0625 18:44:14.248360 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.248436 kubelet[2489]: W0625 18:44:14.248425 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.248504 kubelet[2489]: E0625 18:44:14.248495 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.248761 kubelet[2489]: E0625 18:44:14.248713 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.249047 kubelet[2489]: W0625 18:44:14.248811 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.249047 kubelet[2489]: E0625 18:44:14.248837 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.249257 kubelet[2489]: E0625 18:44:14.249152 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.249257 kubelet[2489]: W0625 18:44:14.249166 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.249257 kubelet[2489]: E0625 18:44:14.249184 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.249491 kubelet[2489]: E0625 18:44:14.249385 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.249491 kubelet[2489]: W0625 18:44:14.249401 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.249491 kubelet[2489]: E0625 18:44:14.249418 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.249608 kubelet[2489]: E0625 18:44:14.249586 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.249608 kubelet[2489]: W0625 18:44:14.249593 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.249608 kubelet[2489]: E0625 18:44:14.249603 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.258977 kubelet[2489]: E0625 18:44:14.258960 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.258977 kubelet[2489]: W0625 18:44:14.258974 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.259081 kubelet[2489]: E0625 18:44:14.258988 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.368564 kubelet[2489]: E0625 18:44:14.368458 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:14.370002 containerd[1439]: time="2024-06-25T18:44:14.369366404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d5d9b9f4d-m8ffx,Uid:649c2742-9e8f-4b08-a31e-e3a16d2fa735,Namespace:calico-system,Attempt:0,}" Jun 25 18:44:14.396357 containerd[1439]: time="2024-06-25T18:44:14.396258692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:14.396357 containerd[1439]: time="2024-06-25T18:44:14.396318653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:14.396543 containerd[1439]: time="2024-06-25T18:44:14.396333573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:14.396610 containerd[1439]: time="2024-06-25T18:44:14.396526937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:14.419978 systemd[1]: Started cri-containerd-be2d4dfb2f8d6c2122c79a7b70ae28a684dfe368a98714bf2aa338d593c07f10.scope - libcontainer container be2d4dfb2f8d6c2122c79a7b70ae28a684dfe368a98714bf2aa338d593c07f10. 
Jun 25 18:44:14.460525 containerd[1439]: time="2024-06-25T18:44:14.459861579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d5d9b9f4d-m8ffx,Uid:649c2742-9e8f-4b08-a31e-e3a16d2fa735,Namespace:calico-system,Attempt:0,} returns sandbox id \"be2d4dfb2f8d6c2122c79a7b70ae28a684dfe368a98714bf2aa338d593c07f10\"" Jun 25 18:44:14.462357 kubelet[2489]: E0625 18:44:14.462180 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:14.471989 containerd[1439]: time="2024-06-25T18:44:14.471928815Z" level=info msg="CreateContainer within sandbox \"be2d4dfb2f8d6c2122c79a7b70ae28a684dfe368a98714bf2aa338d593c07f10\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:44:14.487678 containerd[1439]: time="2024-06-25T18:44:14.487609683Z" level=info msg="CreateContainer within sandbox \"be2d4dfb2f8d6c2122c79a7b70ae28a684dfe368a98714bf2aa338d593c07f10\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f9e628de715e3ae19039be322f9f9972512764a97b51e7a98ecc9a325465b3f4\"" Jun 25 18:44:14.488699 containerd[1439]: time="2024-06-25T18:44:14.488435579Z" level=info msg="StartContainer for \"f9e628de715e3ae19039be322f9f9972512764a97b51e7a98ecc9a325465b3f4\"" Jun 25 18:44:14.513788 systemd[1]: Started cri-containerd-f9e628de715e3ae19039be322f9f9972512764a97b51e7a98ecc9a325465b3f4.scope - libcontainer container f9e628de715e3ae19039be322f9f9972512764a97b51e7a98ecc9a325465b3f4. Jun 25 18:44:14.553574 containerd[1439]: time="2024-06-25T18:44:14.553474534Z" level=info msg="StartContainer for \"f9e628de715e3ae19039be322f9f9972512764a97b51e7a98ecc9a325465b3f4\" returns successfully" Jun 25 18:44:14.573086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230-rootfs.mount: Deactivated successfully. 
Jun 25 18:44:14.573508 systemd[1]: var-lib-kubelet-pods-566ccb74\x2dc593\x2d452f\x2dae65\x2dcb3f015791c3-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jun 25 18:44:14.573664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c-rootfs.mount: Deactivated successfully. Jun 25 18:44:14.573792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c-shm.mount: Deactivated successfully. Jun 25 18:44:14.573912 systemd[1]: var-lib-kubelet-pods-566ccb74\x2dc593\x2d452f\x2dae65\x2dcb3f015791c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvn2hb.mount: Deactivated successfully. Jun 25 18:44:14.574054 systemd[1]: var-lib-kubelet-pods-566ccb74\x2dc593\x2d452f\x2dae65\x2dcb3f015791c3-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jun 25 18:44:14.761167 kubelet[2489]: E0625 18:44:14.760844 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g4ws7" podUID="42da1b33-d6af-464d-8bc6-37e59885f0c5" Jun 25 18:44:14.767240 systemd[1]: Removed slice kubepods-besteffort-pod566ccb74_c593_452f_ae65_cb3f015791c3.slice - libcontainer container kubepods-besteffort-pod566ccb74_c593_452f_ae65_cb3f015791c3.slice. 
Jun 25 18:44:14.840312 kubelet[2489]: I0625 18:44:14.840149 2489 scope.go:117] "RemoveContainer" containerID="011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230" Jun 25 18:44:14.842848 containerd[1439]: time="2024-06-25T18:44:14.842666244Z" level=info msg="RemoveContainer for \"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\"" Jun 25 18:44:14.845458 kubelet[2489]: E0625 18:44:14.845430 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:14.848907 containerd[1439]: time="2024-06-25T18:44:14.848872366Z" level=info msg="RemoveContainer for \"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\" returns successfully" Jun 25 18:44:14.849899 kubelet[2489]: I0625 18:44:14.849875 2489 scope.go:117] "RemoveContainer" containerID="011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230" Jun 25 18:44:14.850162 containerd[1439]: time="2024-06-25T18:44:14.850091590Z" level=error msg="ContainerStatus for \"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\": not found" Jun 25 18:44:14.850313 kubelet[2489]: E0625 18:44:14.850297 2489 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\": not found" containerID="011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230" Jun 25 18:44:14.850362 kubelet[2489]: I0625 18:44:14.850345 2489 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230"} err="failed to get container status 
\"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\": rpc error: code = NotFound desc = an error occurred when try to find container \"011756e6257f883f68d5cdfdadefe5c9a490e8c20bafc77da5be844bce108230\": not found" Jun 25 18:44:14.873094 kubelet[2489]: I0625 18:44:14.873053 2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5d5d9b9f4d-m8ffx" podStartSLOduration=3.873005399 podCreationTimestamp="2024-06-25 18:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:14.872765154 +0000 UTC m=+24.194735377" watchObservedRunningTime="2024-06-25 18:44:14.873005399 +0000 UTC m=+24.194975622" Jun 25 18:44:14.919271 kubelet[2489]: E0625 18:44:14.919240 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.919271 kubelet[2489]: W0625 18:44:14.919264 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.919426 kubelet[2489]: E0625 18:44:14.919287 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.919827 kubelet[2489]: E0625 18:44:14.919811 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.919827 kubelet[2489]: W0625 18:44:14.919825 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.919905 kubelet[2489]: E0625 18:44:14.919848 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.920332 kubelet[2489]: E0625 18:44:14.920317 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.920332 kubelet[2489]: W0625 18:44:14.920332 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.920404 kubelet[2489]: E0625 18:44:14.920382 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.921186 kubelet[2489]: E0625 18:44:14.921045 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.921186 kubelet[2489]: W0625 18:44:14.921179 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.921186 kubelet[2489]: E0625 18:44:14.921194 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.921734 kubelet[2489]: E0625 18:44:14.921716 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.921734 kubelet[2489]: W0625 18:44:14.921733 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.921814 kubelet[2489]: E0625 18:44:14.921750 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.922123 kubelet[2489]: E0625 18:44:14.922105 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.922123 kubelet[2489]: W0625 18:44:14.922123 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.922191 kubelet[2489]: E0625 18:44:14.922136 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.922571 kubelet[2489]: E0625 18:44:14.922554 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.922571 kubelet[2489]: W0625 18:44:14.922569 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.922649 kubelet[2489]: E0625 18:44:14.922582 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.922939 kubelet[2489]: E0625 18:44:14.922921 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.922939 kubelet[2489]: W0625 18:44:14.922935 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.923037 kubelet[2489]: E0625 18:44:14.922953 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.923312 kubelet[2489]: E0625 18:44:14.923296 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.923312 kubelet[2489]: W0625 18:44:14.923310 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.923384 kubelet[2489]: E0625 18:44:14.923328 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.923673 kubelet[2489]: E0625 18:44:14.923655 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.923673 kubelet[2489]: W0625 18:44:14.923671 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.923785 kubelet[2489]: E0625 18:44:14.923682 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.923993 kubelet[2489]: E0625 18:44:14.923976 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.923993 kubelet[2489]: W0625 18:44:14.923993 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.924062 kubelet[2489]: E0625 18:44:14.924005 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.924553 kubelet[2489]: E0625 18:44:14.924537 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.924553 kubelet[2489]: W0625 18:44:14.924552 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.924625 kubelet[2489]: E0625 18:44:14.924564 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.924848 kubelet[2489]: E0625 18:44:14.924816 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.924959 kubelet[2489]: W0625 18:44:14.924830 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.925000 kubelet[2489]: E0625 18:44:14.924966 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.925296 kubelet[2489]: E0625 18:44:14.925276 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.925296 kubelet[2489]: W0625 18:44:14.925294 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.925370 kubelet[2489]: E0625 18:44:14.925316 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.925742 kubelet[2489]: E0625 18:44:14.925724 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.925742 kubelet[2489]: W0625 18:44:14.925739 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.925871 kubelet[2489]: E0625 18:44:14.925761 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.951891 kubelet[2489]: E0625 18:44:14.951864 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.951891 kubelet[2489]: W0625 18:44:14.951886 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.952039 kubelet[2489]: E0625 18:44:14.951905 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.952155 kubelet[2489]: E0625 18:44:14.952138 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.952155 kubelet[2489]: W0625 18:44:14.952150 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.952210 kubelet[2489]: E0625 18:44:14.952170 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.952413 kubelet[2489]: E0625 18:44:14.952397 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.952413 kubelet[2489]: W0625 18:44:14.952411 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.952468 kubelet[2489]: E0625 18:44:14.952430 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.952730 kubelet[2489]: E0625 18:44:14.952713 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.952730 kubelet[2489]: W0625 18:44:14.952727 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.952795 kubelet[2489]: E0625 18:44:14.952745 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.952967 kubelet[2489]: E0625 18:44:14.952948 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.953003 kubelet[2489]: W0625 18:44:14.952967 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.953003 kubelet[2489]: E0625 18:44:14.952990 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.953411 kubelet[2489]: E0625 18:44:14.953396 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.953451 kubelet[2489]: W0625 18:44:14.953411 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.953451 kubelet[2489]: E0625 18:44:14.953430 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.953714 kubelet[2489]: E0625 18:44:14.953696 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.953760 kubelet[2489]: W0625 18:44:14.953713 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.953760 kubelet[2489]: E0625 18:44:14.953749 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.953933 kubelet[2489]: E0625 18:44:14.953918 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.953971 kubelet[2489]: W0625 18:44:14.953937 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.953996 kubelet[2489]: E0625 18:44:14.953968 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.954171 kubelet[2489]: E0625 18:44:14.954156 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.954171 kubelet[2489]: W0625 18:44:14.954170 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.954280 kubelet[2489]: E0625 18:44:14.954265 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.954423 kubelet[2489]: E0625 18:44:14.954407 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.954462 kubelet[2489]: W0625 18:44:14.954427 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.954462 kubelet[2489]: E0625 18:44:14.954444 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.957827 kubelet[2489]: E0625 18:44:14.957788 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.958044 kubelet[2489]: W0625 18:44:14.957807 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.958044 kubelet[2489]: E0625 18:44:14.957933 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.958602 kubelet[2489]: E0625 18:44:14.958456 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.958602 kubelet[2489]: W0625 18:44:14.958475 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.958602 kubelet[2489]: E0625 18:44:14.958489 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.958862 kubelet[2489]: E0625 18:44:14.958847 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.958939 kubelet[2489]: W0625 18:44:14.958926 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.959401 kubelet[2489]: E0625 18:44:14.959376 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.960117 kubelet[2489]: E0625 18:44:14.959996 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.960117 kubelet[2489]: W0625 18:44:14.960012 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.960117 kubelet[2489]: E0625 18:44:14.960045 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.960510 kubelet[2489]: E0625 18:44:14.960385 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.960510 kubelet[2489]: W0625 18:44:14.960399 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.960510 kubelet[2489]: E0625 18:44:14.960425 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.960644 kubelet[2489]: E0625 18:44:14.960593 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.960644 kubelet[2489]: W0625 18:44:14.960611 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.960644 kubelet[2489]: E0625 18:44:14.960630 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.960848 kubelet[2489]: E0625 18:44:14.960835 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.960848 kubelet[2489]: W0625 18:44:14.960846 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.960917 kubelet[2489]: E0625 18:44:14.960858 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:14.961240 kubelet[2489]: E0625 18:44:14.961222 2489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:14.961240 kubelet[2489]: W0625 18:44:14.961236 2489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:14.961320 kubelet[2489]: E0625 18:44:14.961248 2489 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:14.979829 containerd[1439]: time="2024-06-25T18:44:14.979742892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jun 25 18:44:14.979829 containerd[1439]: time="2024-06-25T18:44:14.979769732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:14.980604 containerd[1439]: time="2024-06-25T18:44:14.980562948Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:14.985721 containerd[1439]: time="2024-06-25T18:44:14.985686608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:14.987349 containerd[1439]: time="2024-06-25T18:44:14.987302560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.424329368s" Jun 25 18:44:14.987420 containerd[1439]: time="2024-06-25T18:44:14.987353281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jun 25 18:44:14.989923 containerd[1439]: time="2024-06-25T18:44:14.989377721Z" level=info msg="CreateContainer within sandbox \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:44:15.000416 containerd[1439]: time="2024-06-25T18:44:15.000349936Z" level=info msg="CreateContainer within sandbox \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6183e40eba0bb2922deee1855a900a56635cd716c5b7af0360c5bcb0afeb8e61\"" Jun 25 18:44:15.001845 containerd[1439]: time="2024-06-25T18:44:15.000966428Z" level=info msg="StartContainer for \"6183e40eba0bb2922deee1855a900a56635cd716c5b7af0360c5bcb0afeb8e61\"" Jun 25 18:44:15.034861 systemd[1]: Started cri-containerd-6183e40eba0bb2922deee1855a900a56635cd716c5b7af0360c5bcb0afeb8e61.scope - libcontainer container 6183e40eba0bb2922deee1855a900a56635cd716c5b7af0360c5bcb0afeb8e61. Jun 25 18:44:15.066760 containerd[1439]: time="2024-06-25T18:44:15.066708147Z" level=info msg="StartContainer for \"6183e40eba0bb2922deee1855a900a56635cd716c5b7af0360c5bcb0afeb8e61\" returns successfully" Jun 25 18:44:15.103697 systemd[1]: cri-containerd-6183e40eba0bb2922deee1855a900a56635cd716c5b7af0360c5bcb0afeb8e61.scope: Deactivated successfully. Jun 25 18:44:15.151734 containerd[1439]: time="2024-06-25T18:44:15.151298220Z" level=info msg="shim disconnected" id=6183e40eba0bb2922deee1855a900a56635cd716c5b7af0360c5bcb0afeb8e61 namespace=k8s.io Jun 25 18:44:15.151734 containerd[1439]: time="2024-06-25T18:44:15.151347981Z" level=warning msg="cleaning up after shim disconnected" id=6183e40eba0bb2922deee1855a900a56635cd716c5b7af0360c5bcb0afeb8e61 namespace=k8s.io Jun 25 18:44:15.151734 containerd[1439]: time="2024-06-25T18:44:15.151358301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:44:15.566881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6183e40eba0bb2922deee1855a900a56635cd716c5b7af0360c5bcb0afeb8e61-rootfs.mount: Deactivated successfully. 
Jun 25 18:44:15.850541 containerd[1439]: time="2024-06-25T18:44:15.849452370Z" level=info msg="StopPodSandbox for \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\"" Jun 25 18:44:15.850541 containerd[1439]: time="2024-06-25T18:44:15.849525331Z" level=info msg="Container to stop \"6183e40eba0bb2922deee1855a900a56635cd716c5b7af0360c5bcb0afeb8e61\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:44:15.851578 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e-shm.mount: Deactivated successfully. Jun 25 18:44:15.858188 systemd[1]: cri-containerd-6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e.scope: Deactivated successfully. Jun 25 18:44:15.885931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e-rootfs.mount: Deactivated successfully. Jun 25 18:44:15.890436 containerd[1439]: time="2024-06-25T18:44:15.890238378Z" level=info msg="shim disconnected" id=6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e namespace=k8s.io Jun 25 18:44:15.890436 containerd[1439]: time="2024-06-25T18:44:15.890290099Z" level=warning msg="cleaning up after shim disconnected" id=6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e namespace=k8s.io Jun 25 18:44:15.890436 containerd[1439]: time="2024-06-25T18:44:15.890298579Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:44:15.901425 containerd[1439]: time="2024-06-25T18:44:15.901345027Z" level=info msg="TearDown network for sandbox \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\" successfully" Jun 25 18:44:15.901425 containerd[1439]: time="2024-06-25T18:44:15.901380548Z" level=info msg="StopPodSandbox for \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\" returns successfully" Jun 25 18:44:15.962126 kubelet[2489]: I0625 18:44:15.962077 2489 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-cni-log-dir\") pod \"75b52719-14ba-4545-9e43-3a685992b217\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " Jun 25 18:44:15.962126 kubelet[2489]: I0625 18:44:15.962127 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75b52719-14ba-4545-9e43-3a685992b217-tigera-ca-bundle\") pod \"75b52719-14ba-4545-9e43-3a685992b217\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " Jun 25 18:44:15.962467 kubelet[2489]: I0625 18:44:15.962130 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "75b52719-14ba-4545-9e43-3a685992b217" (UID: "75b52719-14ba-4545-9e43-3a685992b217"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:44:15.962467 kubelet[2489]: I0625 18:44:15.962147 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-policysync\") pod \"75b52719-14ba-4545-9e43-3a685992b217\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " Jun 25 18:44:15.962467 kubelet[2489]: I0625 18:44:15.962165 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-flexvol-driver-host\") pod \"75b52719-14ba-4545-9e43-3a685992b217\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " Jun 25 18:44:15.962467 kubelet[2489]: I0625 18:44:15.962185 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-xtables-lock\") pod \"75b52719-14ba-4545-9e43-3a685992b217\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " Jun 25 18:44:15.962467 kubelet[2489]: I0625 18:44:15.962206 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-cni-net-dir\") pod \"75b52719-14ba-4545-9e43-3a685992b217\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " Jun 25 18:44:15.962467 kubelet[2489]: I0625 18:44:15.962229 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-var-lib-calico\") pod \"75b52719-14ba-4545-9e43-3a685992b217\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " Jun 25 18:44:15.962607 kubelet[2489]: I0625 18:44:15.962248 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-cni-bin-dir\") pod \"75b52719-14ba-4545-9e43-3a685992b217\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " Jun 25 18:44:15.962607 kubelet[2489]: I0625 18:44:15.962266 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-var-run-calico\") pod \"75b52719-14ba-4545-9e43-3a685992b217\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " Jun 25 18:44:15.962607 kubelet[2489]: I0625 18:44:15.962288 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/75b52719-14ba-4545-9e43-3a685992b217-node-certs\") pod \"75b52719-14ba-4545-9e43-3a685992b217\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " Jun 25 18:44:15.962607 kubelet[2489]: I0625 18:44:15.962309 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8nz2\" (UniqueName: \"kubernetes.io/projected/75b52719-14ba-4545-9e43-3a685992b217-kube-api-access-g8nz2\") pod \"75b52719-14ba-4545-9e43-3a685992b217\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " Jun 25 18:44:15.962607 kubelet[2489]: I0625 18:44:15.962327 2489 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-lib-modules\") pod \"75b52719-14ba-4545-9e43-3a685992b217\" (UID: \"75b52719-14ba-4545-9e43-3a685992b217\") " Jun 25 18:44:15.962607 kubelet[2489]: I0625 18:44:15.962367 2489 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:15.962769 kubelet[2489]: I0625 18:44:15.962395 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "75b52719-14ba-4545-9e43-3a685992b217" (UID: "75b52719-14ba-4545-9e43-3a685992b217"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:44:15.962769 kubelet[2489]: I0625 18:44:15.962417 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-policysync" (OuterVolumeSpecName: "policysync") pod "75b52719-14ba-4545-9e43-3a685992b217" (UID: "75b52719-14ba-4545-9e43-3a685992b217"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:44:15.962769 kubelet[2489]: I0625 18:44:15.962433 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "75b52719-14ba-4545-9e43-3a685992b217" (UID: "75b52719-14ba-4545-9e43-3a685992b217"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:44:15.962769 kubelet[2489]: I0625 18:44:15.962450 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "75b52719-14ba-4545-9e43-3a685992b217" (UID: "75b52719-14ba-4545-9e43-3a685992b217"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:44:15.962769 kubelet[2489]: I0625 18:44:15.962464 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "75b52719-14ba-4545-9e43-3a685992b217" (UID: "75b52719-14ba-4545-9e43-3a685992b217"). InnerVolumeSpecName "cni-net-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:44:15.962955 kubelet[2489]: I0625 18:44:15.962473 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b52719-14ba-4545-9e43-3a685992b217-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "75b52719-14ba-4545-9e43-3a685992b217" (UID: "75b52719-14ba-4545-9e43-3a685992b217"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:44:15.962955 kubelet[2489]: I0625 18:44:15.962478 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "75b52719-14ba-4545-9e43-3a685992b217" (UID: "75b52719-14ba-4545-9e43-3a685992b217"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:44:15.962955 kubelet[2489]: I0625 18:44:15.962489 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "75b52719-14ba-4545-9e43-3a685992b217" (UID: "75b52719-14ba-4545-9e43-3a685992b217"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:44:15.962955 kubelet[2489]: I0625 18:44:15.962719 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "75b52719-14ba-4545-9e43-3a685992b217" (UID: "75b52719-14ba-4545-9e43-3a685992b217"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:44:15.965730 systemd[1]: var-lib-kubelet-pods-75b52719\x2d14ba\x2d4545\x2d9e43\x2d3a685992b217-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Jun 25 18:44:15.965838 kubelet[2489]: I0625 18:44:15.965751 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b52719-14ba-4545-9e43-3a685992b217-node-certs" (OuterVolumeSpecName: "node-certs") pod "75b52719-14ba-4545-9e43-3a685992b217" (UID: "75b52719-14ba-4545-9e43-3a685992b217"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:44:15.966840 kubelet[2489]: I0625 18:44:15.966632 2489 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b52719-14ba-4545-9e43-3a685992b217-kube-api-access-g8nz2" (OuterVolumeSpecName: "kube-api-access-g8nz2") pod "75b52719-14ba-4545-9e43-3a685992b217" (UID: "75b52719-14ba-4545-9e43-3a685992b217"). InnerVolumeSpecName "kube-api-access-g8nz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:44:15.968144 systemd[1]: var-lib-kubelet-pods-75b52719\x2d14ba\x2d4545\x2d9e43\x2d3a685992b217-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg8nz2.mount: Deactivated successfully. 
Jun 25 18:44:16.062679 kubelet[2489]: I0625 18:44:16.062500 2489 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75b52719-14ba-4545-9e43-3a685992b217-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:16.062679 kubelet[2489]: I0625 18:44:16.062533 2489 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:16.062679 kubelet[2489]: I0625 18:44:16.062544 2489 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:16.062679 kubelet[2489]: I0625 18:44:16.062553 2489 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-policysync\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:16.062679 kubelet[2489]: I0625 18:44:16.062563 2489 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:16.062679 kubelet[2489]: I0625 18:44:16.062572 2489 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:16.062679 kubelet[2489]: I0625 18:44:16.062581 2489 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:16.062679 kubelet[2489]: I0625 18:44:16.062591 2489 reconciler_common.go:300] "Volume detached for 
volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:16.062962 kubelet[2489]: I0625 18:44:16.062601 2489 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g8nz2\" (UniqueName: \"kubernetes.io/projected/75b52719-14ba-4545-9e43-3a685992b217-kube-api-access-g8nz2\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:16.062962 kubelet[2489]: I0625 18:44:16.062612 2489 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/75b52719-14ba-4545-9e43-3a685992b217-node-certs\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:16.062962 kubelet[2489]: I0625 18:44:16.062623 2489 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75b52719-14ba-4545-9e43-3a685992b217-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 25 18:44:16.761305 kubelet[2489]: E0625 18:44:16.760989 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g4ws7" podUID="42da1b33-d6af-464d-8bc6-37e59885f0c5" Jun 25 18:44:16.763240 kubelet[2489]: I0625 18:44:16.763202 2489 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="566ccb74-c593-452f-ae65-cb3f015791c3" path="/var/lib/kubelet/pods/566ccb74-c593-452f-ae65-cb3f015791c3/volumes" Jun 25 18:44:16.767764 systemd[1]: Removed slice kubepods-besteffort-pod75b52719_14ba_4545_9e43_3a685992b217.slice - libcontainer container kubepods-besteffort-pod75b52719_14ba_4545_9e43_3a685992b217.slice. 
Jun 25 18:44:16.853131 kubelet[2489]: I0625 18:44:16.852839 2489 scope.go:117] "RemoveContainer" containerID="6183e40eba0bb2922deee1855a900a56635cd716c5b7af0360c5bcb0afeb8e61" Jun 25 18:44:16.854909 containerd[1439]: time="2024-06-25T18:44:16.854725368Z" level=info msg="RemoveContainer for \"6183e40eba0bb2922deee1855a900a56635cd716c5b7af0360c5bcb0afeb8e61\"" Jun 25 18:44:16.859417 containerd[1439]: time="2024-06-25T18:44:16.859090727Z" level=info msg="RemoveContainer for \"6183e40eba0bb2922deee1855a900a56635cd716c5b7af0360c5bcb0afeb8e61\" returns successfully" Jun 25 18:44:16.893829 kubelet[2489]: I0625 18:44:16.893796 2489 topology_manager.go:215] "Topology Admit Handler" podUID="fb0bdbd7-9537-4aab-993a-d717cf069922" podNamespace="calico-system" podName="calico-node-q2hcf" Jun 25 18:44:16.894123 kubelet[2489]: E0625 18:44:16.893847 2489 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75b52719-14ba-4545-9e43-3a685992b217" containerName="flexvol-driver" Jun 25 18:44:16.894123 kubelet[2489]: I0625 18:44:16.893871 2489 memory_manager.go:346] "RemoveStaleState removing state" podUID="75b52719-14ba-4545-9e43-3a685992b217" containerName="flexvol-driver" Jun 25 18:44:16.904050 systemd[1]: Created slice kubepods-besteffort-podfb0bdbd7_9537_4aab_993a_d717cf069922.slice - libcontainer container kubepods-besteffort-podfb0bdbd7_9537_4aab_993a_d717cf069922.slice. 
Jun 25 18:44:16.968344 kubelet[2489]: I0625 18:44:16.968311 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fb0bdbd7-9537-4aab-993a-d717cf069922-cni-bin-dir\") pod \"calico-node-q2hcf\" (UID: \"fb0bdbd7-9537-4aab-993a-d717cf069922\") " pod="calico-system/calico-node-q2hcf" Jun 25 18:44:16.968711 kubelet[2489]: I0625 18:44:16.968358 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fb0bdbd7-9537-4aab-993a-d717cf069922-var-lib-calico\") pod \"calico-node-q2hcf\" (UID: \"fb0bdbd7-9537-4aab-993a-d717cf069922\") " pod="calico-system/calico-node-q2hcf" Jun 25 18:44:16.968711 kubelet[2489]: I0625 18:44:16.968381 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb0bdbd7-9537-4aab-993a-d717cf069922-xtables-lock\") pod \"calico-node-q2hcf\" (UID: \"fb0bdbd7-9537-4aab-993a-d717cf069922\") " pod="calico-system/calico-node-q2hcf" Jun 25 18:44:16.968711 kubelet[2489]: I0625 18:44:16.968400 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb0bdbd7-9537-4aab-993a-d717cf069922-tigera-ca-bundle\") pod \"calico-node-q2hcf\" (UID: \"fb0bdbd7-9537-4aab-993a-d717cf069922\") " pod="calico-system/calico-node-q2hcf" Jun 25 18:44:16.968711 kubelet[2489]: I0625 18:44:16.968422 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fb0bdbd7-9537-4aab-993a-d717cf069922-node-certs\") pod \"calico-node-q2hcf\" (UID: \"fb0bdbd7-9537-4aab-993a-d717cf069922\") " pod="calico-system/calico-node-q2hcf" Jun 25 18:44:16.968711 kubelet[2489]: I0625 18:44:16.968440 2489 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fb0bdbd7-9537-4aab-993a-d717cf069922-flexvol-driver-host\") pod \"calico-node-q2hcf\" (UID: \"fb0bdbd7-9537-4aab-993a-d717cf069922\") " pod="calico-system/calico-node-q2hcf" Jun 25 18:44:16.968842 kubelet[2489]: I0625 18:44:16.968483 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fb0bdbd7-9537-4aab-993a-d717cf069922-policysync\") pod \"calico-node-q2hcf\" (UID: \"fb0bdbd7-9537-4aab-993a-d717cf069922\") " pod="calico-system/calico-node-q2hcf" Jun 25 18:44:16.968842 kubelet[2489]: I0625 18:44:16.968504 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb0bdbd7-9537-4aab-993a-d717cf069922-lib-modules\") pod \"calico-node-q2hcf\" (UID: \"fb0bdbd7-9537-4aab-993a-d717cf069922\") " pod="calico-system/calico-node-q2hcf" Jun 25 18:44:16.968842 kubelet[2489]: I0625 18:44:16.968523 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fb0bdbd7-9537-4aab-993a-d717cf069922-cni-net-dir\") pod \"calico-node-q2hcf\" (UID: \"fb0bdbd7-9537-4aab-993a-d717cf069922\") " pod="calico-system/calico-node-q2hcf" Jun 25 18:44:16.968842 kubelet[2489]: I0625 18:44:16.968543 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fb0bdbd7-9537-4aab-993a-d717cf069922-cni-log-dir\") pod \"calico-node-q2hcf\" (UID: \"fb0bdbd7-9537-4aab-993a-d717cf069922\") " pod="calico-system/calico-node-q2hcf" Jun 25 18:44:16.968842 kubelet[2489]: I0625 18:44:16.968562 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-9dj4q\" (UniqueName: \"kubernetes.io/projected/fb0bdbd7-9537-4aab-993a-d717cf069922-kube-api-access-9dj4q\") pod \"calico-node-q2hcf\" (UID: \"fb0bdbd7-9537-4aab-993a-d717cf069922\") " pod="calico-system/calico-node-q2hcf" Jun 25 18:44:16.968962 kubelet[2489]: I0625 18:44:16.968581 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fb0bdbd7-9537-4aab-993a-d717cf069922-var-run-calico\") pod \"calico-node-q2hcf\" (UID: \"fb0bdbd7-9537-4aab-993a-d717cf069922\") " pod="calico-system/calico-node-q2hcf" Jun 25 18:44:17.207940 kubelet[2489]: E0625 18:44:17.207704 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:17.208997 containerd[1439]: time="2024-06-25T18:44:17.208957844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q2hcf,Uid:fb0bdbd7-9537-4aab-993a-d717cf069922,Namespace:calico-system,Attempt:0,}" Jun 25 18:44:17.229256 containerd[1439]: time="2024-06-25T18:44:17.229125435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:17.229528 containerd[1439]: time="2024-06-25T18:44:17.229464281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:17.229619 containerd[1439]: time="2024-06-25T18:44:17.229517202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:17.229804 containerd[1439]: time="2024-06-25T18:44:17.229607724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:17.250817 systemd[1]: Started cri-containerd-384cc7443c09b6f410b5a6f3b1c249076880c99151df79388bcb40d7790dcabe.scope - libcontainer container 384cc7443c09b6f410b5a6f3b1c249076880c99151df79388bcb40d7790dcabe. Jun 25 18:44:17.270180 containerd[1439]: time="2024-06-25T18:44:17.270134910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q2hcf,Uid:fb0bdbd7-9537-4aab-993a-d717cf069922,Namespace:calico-system,Attempt:0,} returns sandbox id \"384cc7443c09b6f410b5a6f3b1c249076880c99151df79388bcb40d7790dcabe\"" Jun 25 18:44:17.270923 kubelet[2489]: E0625 18:44:17.270902 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:17.273917 containerd[1439]: time="2024-06-25T18:44:17.273884535Z" level=info msg="CreateContainer within sandbox \"384cc7443c09b6f410b5a6f3b1c249076880c99151df79388bcb40d7790dcabe\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:44:17.284599 containerd[1439]: time="2024-06-25T18:44:17.284499881Z" level=info msg="CreateContainer within sandbox \"384cc7443c09b6f410b5a6f3b1c249076880c99151df79388bcb40d7790dcabe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"62969f23eb11b811b6d8434a8b55ddb153e39693a33e6f166cef5afc637b6f31\"" Jun 25 18:44:17.285784 containerd[1439]: time="2024-06-25T18:44:17.285677581Z" level=info msg="StartContainer for \"62969f23eb11b811b6d8434a8b55ddb153e39693a33e6f166cef5afc637b6f31\"" Jun 25 18:44:17.307885 systemd[1]: Started cri-containerd-62969f23eb11b811b6d8434a8b55ddb153e39693a33e6f166cef5afc637b6f31.scope - libcontainer container 62969f23eb11b811b6d8434a8b55ddb153e39693a33e6f166cef5afc637b6f31. 
Jun 25 18:44:17.333759 containerd[1439]: time="2024-06-25T18:44:17.333603017Z" level=info msg="StartContainer for \"62969f23eb11b811b6d8434a8b55ddb153e39693a33e6f166cef5afc637b6f31\" returns successfully" Jun 25 18:44:17.344671 systemd[1]: cri-containerd-62969f23eb11b811b6d8434a8b55ddb153e39693a33e6f166cef5afc637b6f31.scope: Deactivated successfully. Jun 25 18:44:17.374478 containerd[1439]: time="2024-06-25T18:44:17.374391528Z" level=info msg="shim disconnected" id=62969f23eb11b811b6d8434a8b55ddb153e39693a33e6f166cef5afc637b6f31 namespace=k8s.io Jun 25 18:44:17.374478 containerd[1439]: time="2024-06-25T18:44:17.374444129Z" level=warning msg="cleaning up after shim disconnected" id=62969f23eb11b811b6d8434a8b55ddb153e39693a33e6f166cef5afc637b6f31 namespace=k8s.io Jun 25 18:44:17.374478 containerd[1439]: time="2024-06-25T18:44:17.374452849Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:44:17.856609 kubelet[2489]: E0625 18:44:17.856574 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:17.858564 containerd[1439]: time="2024-06-25T18:44:17.858458767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:44:18.760711 kubelet[2489]: E0625 18:44:18.759894 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g4ws7" podUID="42da1b33-d6af-464d-8bc6-37e59885f0c5" Jun 25 18:44:18.762413 kubelet[2489]: I0625 18:44:18.762382 2489 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="75b52719-14ba-4545-9e43-3a685992b217" path="/var/lib/kubelet/pods/75b52719-14ba-4545-9e43-3a685992b217/volumes" Jun 25 18:44:20.760137 kubelet[2489]: E0625 18:44:20.759898 2489 
pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g4ws7" podUID="42da1b33-d6af-464d-8bc6-37e59885f0c5" Jun 25 18:44:22.760921 kubelet[2489]: E0625 18:44:22.760571 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g4ws7" podUID="42da1b33-d6af-464d-8bc6-37e59885f0c5" Jun 25 18:44:23.300024 containerd[1439]: time="2024-06-25T18:44:23.299977923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:23.300601 containerd[1439]: time="2024-06-25T18:44:23.300452649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jun 25 18:44:23.301493 containerd[1439]: time="2024-06-25T18:44:23.301259461Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:23.303817 containerd[1439]: time="2024-06-25T18:44:23.303787297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:23.304609 containerd[1439]: time="2024-06-25T18:44:23.304585228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 5.446088701s" Jun 25 18:44:23.304696 containerd[1439]: time="2024-06-25T18:44:23.304611748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jun 25 18:44:23.307318 containerd[1439]: time="2024-06-25T18:44:23.307285986Z" level=info msg="CreateContainer within sandbox \"384cc7443c09b6f410b5a6f3b1c249076880c99151df79388bcb40d7790dcabe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:44:23.319138 containerd[1439]: time="2024-06-25T18:44:23.319109154Z" level=info msg="CreateContainer within sandbox \"384cc7443c09b6f410b5a6f3b1c249076880c99151df79388bcb40d7790dcabe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"29ee01781f6f728a9c75f53a02a2bdcdda6bb73fd22a0319955b0bc6545b32dc\"" Jun 25 18:44:23.320293 containerd[1439]: time="2024-06-25T18:44:23.319564880Z" level=info msg="StartContainer for \"29ee01781f6f728a9c75f53a02a2bdcdda6bb73fd22a0319955b0bc6545b32dc\"" Jun 25 18:44:23.353789 systemd[1]: Started cri-containerd-29ee01781f6f728a9c75f53a02a2bdcdda6bb73fd22a0319955b0bc6545b32dc.scope - libcontainer container 29ee01781f6f728a9c75f53a02a2bdcdda6bb73fd22a0319955b0bc6545b32dc. 
Jun 25 18:44:23.376714 containerd[1439]: time="2024-06-25T18:44:23.376615688Z" level=info msg="StartContainer for \"29ee01781f6f728a9c75f53a02a2bdcdda6bb73fd22a0319955b0bc6545b32dc\" returns successfully" Jun 25 18:44:23.808151 containerd[1439]: time="2024-06-25T18:44:23.808108802Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:44:23.810149 systemd[1]: cri-containerd-29ee01781f6f728a9c75f53a02a2bdcdda6bb73fd22a0319955b0bc6545b32dc.scope: Deactivated successfully. Jun 25 18:44:23.826501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29ee01781f6f728a9c75f53a02a2bdcdda6bb73fd22a0319955b0bc6545b32dc-rootfs.mount: Deactivated successfully. Jun 25 18:44:23.830323 kubelet[2489]: I0625 18:44:23.830149 2489 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 18:44:23.848286 kubelet[2489]: I0625 18:44:23.848246 2489 topology_manager.go:215] "Topology Admit Handler" podUID="320816e0-861f-4b3b-bda1-d52532a4e96c" podNamespace="kube-system" podName="coredns-5dd5756b68-cgvbt" Jun 25 18:44:23.850141 kubelet[2489]: I0625 18:44:23.850097 2489 topology_manager.go:215] "Topology Admit Handler" podUID="9373fe30-4e66-4a3f-b97d-c1995dacde38" podNamespace="calico-system" podName="calico-kube-controllers-f6744788b-kzz4x" Jun 25 18:44:23.857696 systemd[1]: Created slice kubepods-burstable-pod320816e0_861f_4b3b_bda1_d52532a4e96c.slice - libcontainer container kubepods-burstable-pod320816e0_861f_4b3b_bda1_d52532a4e96c.slice. 
Jun 25 18:44:23.862090 kubelet[2489]: I0625 18:44:23.861732 2489 topology_manager.go:215] "Topology Admit Handler" podUID="a41210ea-56a1-4e5b-869d-62550a89978d" podNamespace="kube-system" podName="coredns-5dd5756b68-lwd6d" Jun 25 18:44:23.869066 systemd[1]: Created slice kubepods-besteffort-pod9373fe30_4e66_4a3f_b97d_c1995dacde38.slice - libcontainer container kubepods-besteffort-pod9373fe30_4e66_4a3f_b97d_c1995dacde38.slice. Jun 25 18:44:23.869769 kubelet[2489]: E0625 18:44:23.869752 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:23.871345 systemd[1]: Created slice kubepods-burstable-poda41210ea_56a1_4e5b_869d_62550a89978d.slice - libcontainer container kubepods-burstable-poda41210ea_56a1_4e5b_869d_62550a89978d.slice. Jun 25 18:44:23.949015 kubelet[2489]: I0625 18:44:23.948982 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/320816e0-861f-4b3b-bda1-d52532a4e96c-config-volume\") pod \"coredns-5dd5756b68-cgvbt\" (UID: \"320816e0-861f-4b3b-bda1-d52532a4e96c\") " pod="kube-system/coredns-5dd5756b68-cgvbt" Jun 25 18:44:23.949015 kubelet[2489]: I0625 18:44:23.949021 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a41210ea-56a1-4e5b-869d-62550a89978d-config-volume\") pod \"coredns-5dd5756b68-lwd6d\" (UID: \"a41210ea-56a1-4e5b-869d-62550a89978d\") " pod="kube-system/coredns-5dd5756b68-lwd6d" Jun 25 18:44:23.949164 kubelet[2489]: I0625 18:44:23.949044 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2qxk\" (UniqueName: \"kubernetes.io/projected/320816e0-861f-4b3b-bda1-d52532a4e96c-kube-api-access-j2qxk\") pod \"coredns-5dd5756b68-cgvbt\" (UID: 
\"320816e0-861f-4b3b-bda1-d52532a4e96c\") " pod="kube-system/coredns-5dd5756b68-cgvbt" Jun 25 18:44:23.949164 kubelet[2489]: I0625 18:44:23.949066 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9373fe30-4e66-4a3f-b97d-c1995dacde38-tigera-ca-bundle\") pod \"calico-kube-controllers-f6744788b-kzz4x\" (UID: \"9373fe30-4e66-4a3f-b97d-c1995dacde38\") " pod="calico-system/calico-kube-controllers-f6744788b-kzz4x" Jun 25 18:44:23.949164 kubelet[2489]: I0625 18:44:23.949088 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqzl8\" (UniqueName: \"kubernetes.io/projected/9373fe30-4e66-4a3f-b97d-c1995dacde38-kube-api-access-hqzl8\") pod \"calico-kube-controllers-f6744788b-kzz4x\" (UID: \"9373fe30-4e66-4a3f-b97d-c1995dacde38\") " pod="calico-system/calico-kube-controllers-f6744788b-kzz4x" Jun 25 18:44:23.949164 kubelet[2489]: I0625 18:44:23.949119 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rbnl\" (UniqueName: \"kubernetes.io/projected/a41210ea-56a1-4e5b-869d-62550a89978d-kube-api-access-5rbnl\") pod \"coredns-5dd5756b68-lwd6d\" (UID: \"a41210ea-56a1-4e5b-869d-62550a89978d\") " pod="kube-system/coredns-5dd5756b68-lwd6d" Jun 25 18:44:23.955322 containerd[1439]: time="2024-06-25T18:44:23.955262727Z" level=info msg="shim disconnected" id=29ee01781f6f728a9c75f53a02a2bdcdda6bb73fd22a0319955b0bc6545b32dc namespace=k8s.io Jun 25 18:44:23.955322 containerd[1439]: time="2024-06-25T18:44:23.955320568Z" level=warning msg="cleaning up after shim disconnected" id=29ee01781f6f728a9c75f53a02a2bdcdda6bb73fd22a0319955b0bc6545b32dc namespace=k8s.io Jun 25 18:44:23.955464 containerd[1439]: time="2024-06-25T18:44:23.955329128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:44:24.161215 kubelet[2489]: E0625 18:44:24.161176 2489 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:24.162824 containerd[1439]: time="2024-06-25T18:44:24.162788918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cgvbt,Uid:320816e0-861f-4b3b-bda1-d52532a4e96c,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:24.177367 kubelet[2489]: E0625 18:44:24.175877 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:24.177923 containerd[1439]: time="2024-06-25T18:44:24.177614241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lwd6d,Uid:a41210ea-56a1-4e5b-869d-62550a89978d,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:24.181289 containerd[1439]: time="2024-06-25T18:44:24.178736457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6744788b-kzz4x,Uid:9373fe30-4e66-4a3f-b97d-c1995dacde38,Namespace:calico-system,Attempt:0,}" Jun 25 18:44:24.460044 containerd[1439]: time="2024-06-25T18:44:24.459936479Z" level=error msg="Failed to destroy network for sandbox \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.461086 containerd[1439]: time="2024-06-25T18:44:24.460944933Z" level=error msg="encountered an error cleaning up failed sandbox \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.461086 
containerd[1439]: time="2024-06-25T18:44:24.461000614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lwd6d,Uid:a41210ea-56a1-4e5b-869d-62550a89978d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.461620 kubelet[2489]: E0625 18:44:24.461592 2489 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.461692 kubelet[2489]: E0625 18:44:24.461679 2489 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-lwd6d" Jun 25 18:44:24.461721 kubelet[2489]: E0625 18:44:24.461701 2489 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-lwd6d" Jun 25 18:44:24.461748 containerd[1439]: 
time="2024-06-25T18:44:24.461694864Z" level=error msg="Failed to destroy network for sandbox \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.461779 kubelet[2489]: E0625 18:44:24.461769 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-lwd6d_kube-system(a41210ea-56a1-4e5b-869d-62550a89978d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-lwd6d_kube-system(a41210ea-56a1-4e5b-869d-62550a89978d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-lwd6d" podUID="a41210ea-56a1-4e5b-869d-62550a89978d" Jun 25 18:44:24.462265 containerd[1439]: time="2024-06-25T18:44:24.462079149Z" level=error msg="encountered an error cleaning up failed sandbox \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.462265 containerd[1439]: time="2024-06-25T18:44:24.462139510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6744788b-kzz4x,Uid:9373fe30-4e66-4a3f-b97d-c1995dacde38,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.462940 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540-shm.mount: Deactivated successfully. Jun 25 18:44:24.463720 kubelet[2489]: E0625 18:44:24.463703 2489 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.463766 kubelet[2489]: E0625 18:44:24.463748 2489 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f6744788b-kzz4x" Jun 25 18:44:24.463800 containerd[1439]: time="2024-06-25T18:44:24.463740092Z" level=error msg="Failed to destroy network for sandbox \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.463829 kubelet[2489]: E0625 18:44:24.463770 2489 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f6744788b-kzz4x" Jun 25 18:44:24.463829 kubelet[2489]: E0625 18:44:24.463808 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f6744788b-kzz4x_calico-system(9373fe30-4e66-4a3f-b97d-c1995dacde38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f6744788b-kzz4x_calico-system(9373fe30-4e66-4a3f-b97d-c1995dacde38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f6744788b-kzz4x" podUID="9373fe30-4e66-4a3f-b97d-c1995dacde38" Jun 25 18:44:24.464219 containerd[1439]: time="2024-06-25T18:44:24.464188298Z" level=error msg="encountered an error cleaning up failed sandbox \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.464260 containerd[1439]: time="2024-06-25T18:44:24.464235578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cgvbt,Uid:320816e0-861f-4b3b-bda1-d52532a4e96c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jun 25 18:44:24.464880 kubelet[2489]: E0625 18:44:24.464860 2489 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.464943 kubelet[2489]: E0625 18:44:24.464929 2489 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-cgvbt" Jun 25 18:44:24.464981 kubelet[2489]: E0625 18:44:24.464954 2489 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-cgvbt" Jun 25 18:44:24.465009 kubelet[2489]: E0625 18:44:24.464993 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-cgvbt_kube-system(320816e0-861f-4b3b-bda1-d52532a4e96c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-cgvbt_kube-system(320816e0-861f-4b3b-bda1-d52532a4e96c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-cgvbt" podUID="320816e0-861f-4b3b-bda1-d52532a4e96c" Jun 25 18:44:24.466041 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773-shm.mount: Deactivated successfully. Jun 25 18:44:24.466145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b-shm.mount: Deactivated successfully. Jun 25 18:44:24.764796 systemd[1]: Created slice kubepods-besteffort-pod42da1b33_d6af_464d_8bc6_37e59885f0c5.slice - libcontainer container kubepods-besteffort-pod42da1b33_d6af_464d_8bc6_37e59885f0c5.slice. Jun 25 18:44:24.766634 containerd[1439]: time="2024-06-25T18:44:24.766599012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g4ws7,Uid:42da1b33-d6af-464d-8bc6-37e59885f0c5,Namespace:calico-system,Attempt:0,}" Jun 25 18:44:24.810226 containerd[1439]: time="2024-06-25T18:44:24.810176291Z" level=error msg="Failed to destroy network for sandbox \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.810622 containerd[1439]: time="2024-06-25T18:44:24.810597656Z" level=error msg="encountered an error cleaning up failed sandbox \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jun 25 18:44:24.810763 containerd[1439]: time="2024-06-25T18:44:24.810740218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g4ws7,Uid:42da1b33-d6af-464d-8bc6-37e59885f0c5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.811042 kubelet[2489]: E0625 18:44:24.811016 2489 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.811105 kubelet[2489]: E0625 18:44:24.811066 2489 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g4ws7" Jun 25 18:44:24.811105 kubelet[2489]: E0625 18:44:24.811087 2489 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g4ws7" Jun 
25 18:44:24.811172 kubelet[2489]: E0625 18:44:24.811140 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g4ws7_calico-system(42da1b33-d6af-464d-8bc6-37e59885f0c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g4ws7_calico-system(42da1b33-d6af-464d-8bc6-37e59885f0c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g4ws7" podUID="42da1b33-d6af-464d-8bc6-37e59885f0c5" Jun 25 18:44:24.872515 kubelet[2489]: I0625 18:44:24.872441 2489 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:24.873701 containerd[1439]: time="2024-06-25T18:44:24.873207956Z" level=info msg="StopPodSandbox for \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\"" Jun 25 18:44:24.873701 containerd[1439]: time="2024-06-25T18:44:24.873694843Z" level=info msg="Ensure that sandbox d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540 in task-service has been cleanup successfully" Jun 25 18:44:24.877328 kubelet[2489]: E0625 18:44:24.876804 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:24.878103 containerd[1439]: time="2024-06-25T18:44:24.878082783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 18:44:24.879050 kubelet[2489]: I0625 18:44:24.878979 2489 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:24.880463 containerd[1439]: time="2024-06-25T18:44:24.880436776Z" level=info msg="StopPodSandbox for \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\"" Jun 25 18:44:24.880838 containerd[1439]: time="2024-06-25T18:44:24.880606778Z" level=info msg="Ensure that sandbox efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9 in task-service has been cleanup successfully" Jun 25 18:44:24.887406 kubelet[2489]: I0625 18:44:24.886891 2489 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:24.887482 containerd[1439]: time="2024-06-25T18:44:24.887418632Z" level=info msg="StopPodSandbox for \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\"" Jun 25 18:44:24.887657 containerd[1439]: time="2024-06-25T18:44:24.887607834Z" level=info msg="Ensure that sandbox bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773 in task-service has been cleanup successfully" Jun 25 18:44:24.890649 kubelet[2489]: I0625 18:44:24.890369 2489 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:24.890907 containerd[1439]: time="2024-06-25T18:44:24.890747877Z" level=info msg="StopPodSandbox for \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\"" Jun 25 18:44:24.893303 containerd[1439]: time="2024-06-25T18:44:24.893265712Z" level=info msg="Ensure that sandbox 800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b in task-service has been cleanup successfully" Jun 25 18:44:24.914245 containerd[1439]: time="2024-06-25T18:44:24.914197679Z" level=error msg="StopPodSandbox for \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\" failed" error="failed to destroy network for sandbox 
\"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.914713 kubelet[2489]: E0625 18:44:24.914568 2489 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:24.914713 kubelet[2489]: E0625 18:44:24.914610 2489 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540"} Jun 25 18:44:24.914713 kubelet[2489]: E0625 18:44:24.914666 2489 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a41210ea-56a1-4e5b-869d-62550a89978d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:44:24.914713 kubelet[2489]: E0625 18:44:24.914694 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a41210ea-56a1-4e5b-869d-62550a89978d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-lwd6d" podUID="a41210ea-56a1-4e5b-869d-62550a89978d" Jun 25 18:44:24.919617 containerd[1439]: time="2024-06-25T18:44:24.919582793Z" level=error msg="StopPodSandbox for \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\" failed" error="failed to destroy network for sandbox \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.919874 kubelet[2489]: E0625 18:44:24.919857 2489 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:24.919983 kubelet[2489]: E0625 18:44:24.919971 2489 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9"} Jun 25 18:44:24.920061 kubelet[2489]: E0625 18:44:24.920053 2489 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42da1b33-d6af-464d-8bc6-37e59885f0c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Jun 25 18:44:24.920170 kubelet[2489]: E0625 18:44:24.920160 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42da1b33-d6af-464d-8bc6-37e59885f0c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g4ws7" podUID="42da1b33-d6af-464d-8bc6-37e59885f0c5" Jun 25 18:44:24.924758 containerd[1439]: time="2024-06-25T18:44:24.924727464Z" level=error msg="StopPodSandbox for \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\" failed" error="failed to destroy network for sandbox \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.925085 kubelet[2489]: E0625 18:44:24.924985 2489 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:24.925085 kubelet[2489]: E0625 18:44:24.925015 2489 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773"} Jun 25 18:44:24.925085 kubelet[2489]: E0625 18:44:24.925046 
2489 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9373fe30-4e66-4a3f-b97d-c1995dacde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:44:24.925085 kubelet[2489]: E0625 18:44:24.925070 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9373fe30-4e66-4a3f-b97d-c1995dacde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f6744788b-kzz4x" podUID="9373fe30-4e66-4a3f-b97d-c1995dacde38" Jun 25 18:44:24.928375 containerd[1439]: time="2024-06-25T18:44:24.928333914Z" level=error msg="StopPodSandbox for \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\" failed" error="failed to destroy network for sandbox \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:24.928525 kubelet[2489]: E0625 18:44:24.928500 2489 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:24.928577 kubelet[2489]: E0625 18:44:24.928529 2489 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b"} Jun 25 18:44:24.928577 kubelet[2489]: E0625 18:44:24.928557 2489 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"320816e0-861f-4b3b-bda1-d52532a4e96c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:44:24.928680 kubelet[2489]: E0625 18:44:24.928584 2489 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"320816e0-861f-4b3b-bda1-d52532a4e96c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-cgvbt" podUID="320816e0-861f-4b3b-bda1-d52532a4e96c" Jun 25 18:44:25.247223 systemd[1]: Started sshd@7-10.0.0.123:22-10.0.0.1:42804.service - OpenSSH per-connection server daemon (10.0.0.1:42804). 
Jun 25 18:44:25.289238 sshd[3849]: Accepted publickey for core from 10.0.0.1 port 42804 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:44:25.290591 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:25.293913 systemd-logind[1417]: New session 8 of user core. Jun 25 18:44:25.303781 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:44:25.321726 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9-shm.mount: Deactivated successfully. Jun 25 18:44:25.410874 sshd[3849]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:25.413034 systemd[1]: sshd@7-10.0.0.123:22-10.0.0.1:42804.service: Deactivated successfully. Jun 25 18:44:25.415249 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:44:25.416089 systemd-logind[1417]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:44:25.416885 systemd-logind[1417]: Removed session 8. Jun 25 18:44:27.482044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2495411296.mount: Deactivated successfully. 
Jun 25 18:44:27.705446 containerd[1439]: time="2024-06-25T18:44:27.704950825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:27.706008 containerd[1439]: time="2024-06-25T18:44:27.705963477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jun 25 18:44:27.712162 containerd[1439]: time="2024-06-25T18:44:27.712061834Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:27.712850 containerd[1439]: time="2024-06-25T18:44:27.712734603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 2.834504898s" Jun 25 18:44:27.712850 containerd[1439]: time="2024-06-25T18:44:27.712768203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jun 25 18:44:27.713619 containerd[1439]: time="2024-06-25T18:44:27.713255929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:27.728775 containerd[1439]: time="2024-06-25T18:44:27.728741604Z" level=info msg="CreateContainer within sandbox \"384cc7443c09b6f410b5a6f3b1c249076880c99151df79388bcb40d7790dcabe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 18:44:27.765966 containerd[1439]: time="2024-06-25T18:44:27.764681257Z" level=info 
msg="CreateContainer within sandbox \"384cc7443c09b6f410b5a6f3b1c249076880c99151df79388bcb40d7790dcabe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c5b1c8639340f3987a96cbbb3b38507fa3838f5898d88cf3fb4d39996215d1d3\"" Jun 25 18:44:27.765966 containerd[1439]: time="2024-06-25T18:44:27.765311745Z" level=info msg="StartContainer for \"c5b1c8639340f3987a96cbbb3b38507fa3838f5898d88cf3fb4d39996215d1d3\"" Jun 25 18:44:27.815141 systemd[1]: Started cri-containerd-c5b1c8639340f3987a96cbbb3b38507fa3838f5898d88cf3fb4d39996215d1d3.scope - libcontainer container c5b1c8639340f3987a96cbbb3b38507fa3838f5898d88cf3fb4d39996215d1d3. Jun 25 18:44:27.898623 containerd[1439]: time="2024-06-25T18:44:27.898582904Z" level=info msg="StartContainer for \"c5b1c8639340f3987a96cbbb3b38507fa3838f5898d88cf3fb4d39996215d1d3\" returns successfully" Jun 25 18:44:27.903436 kubelet[2489]: E0625 18:44:27.903405 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:27.924367 kubelet[2489]: I0625 18:44:27.924198 2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-q2hcf" podStartSLOduration=2.069320782 podCreationTimestamp="2024-06-25 18:44:16 +0000 UTC" firstStartedPulling="2024-06-25 18:44:17.858245523 +0000 UTC m=+27.180215706" lastFinishedPulling="2024-06-25 18:44:27.713065927 +0000 UTC m=+37.035036150" observedRunningTime="2024-06-25 18:44:27.923515418 +0000 UTC m=+37.245485641" watchObservedRunningTime="2024-06-25 18:44:27.924141226 +0000 UTC m=+37.246111449" Jun 25 18:44:28.037441 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 18:44:28.044491 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 25 18:44:28.905661 kubelet[2489]: E0625 18:44:28.905592 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:30.427693 systemd[1]: Started sshd@8-10.0.0.123:22-10.0.0.1:55564.service - OpenSSH per-connection server daemon (10.0.0.1:55564). Jun 25 18:44:30.475988 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 55564 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:44:30.479490 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:30.484812 systemd-logind[1417]: New session 9 of user core. Jun 25 18:44:30.490810 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:44:30.623944 sshd[4090]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:30.627291 systemd[1]: sshd@8-10.0.0.123:22-10.0.0.1:55564.service: Deactivated successfully. Jun 25 18:44:30.629036 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:44:30.631351 systemd-logind[1417]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:44:30.633300 systemd-logind[1417]: Removed session 9. 
Jun 25 18:44:31.202420 kubelet[2489]: I0625 18:44:31.202373 2489 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:44:31.203024 kubelet[2489]: E0625 18:44:31.202988 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:31.753766 systemd-networkd[1379]: vxlan.calico: Link UP Jun 25 18:44:31.753785 systemd-networkd[1379]: vxlan.calico: Gained carrier Jun 25 18:44:31.911851 kubelet[2489]: E0625 18:44:31.911819 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:33.024894 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL Jun 25 18:44:35.642142 systemd[1]: Started sshd@9-10.0.0.123:22-10.0.0.1:55574.service - OpenSSH per-connection server daemon (10.0.0.1:55574). Jun 25 18:44:35.695492 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 55574 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:44:35.697063 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:35.702141 systemd-logind[1417]: New session 10 of user core. Jun 25 18:44:35.712131 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:44:35.760536 containerd[1439]: time="2024-06-25T18:44:35.760492533Z" level=info msg="StopPodSandbox for \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\"" Jun 25 18:44:35.849751 sshd[4263]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:35.859194 systemd[1]: sshd@9-10.0.0.123:22-10.0.0.1:55574.service: Deactivated successfully. Jun 25 18:44:35.862203 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:44:35.863870 systemd-logind[1417]: Session 10 logged out. Waiting for processes to exit. 
Jun 25 18:44:35.871468 systemd[1]: Started sshd@10-10.0.0.123:22-10.0.0.1:55590.service - OpenSSH per-connection server daemon (10.0.0.1:55590). Jun 25 18:44:35.873934 systemd-logind[1417]: Removed session 10. Jun 25 18:44:35.906018 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 55590 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:44:35.907435 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:35.912003 systemd-logind[1417]: New session 11 of user core. Jun 25 18:44:35.918799 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:44:35.934504 containerd[1439]: 2024-06-25 18:44:35.826 [INFO][4289] k8s.go 608: Cleaning up netns ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:35.934504 containerd[1439]: 2024-06-25 18:44:35.826 [INFO][4289] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" iface="eth0" netns="/var/run/netns/cni-26730f42-8fc2-e0b7-3735-1f8fe65d5c98" Jun 25 18:44:35.934504 containerd[1439]: 2024-06-25 18:44:35.826 [INFO][4289] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" iface="eth0" netns="/var/run/netns/cni-26730f42-8fc2-e0b7-3735-1f8fe65d5c98" Jun 25 18:44:35.934504 containerd[1439]: 2024-06-25 18:44:35.827 [INFO][4289] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" iface="eth0" netns="/var/run/netns/cni-26730f42-8fc2-e0b7-3735-1f8fe65d5c98" Jun 25 18:44:35.934504 containerd[1439]: 2024-06-25 18:44:35.827 [INFO][4289] k8s.go 615: Releasing IP address(es) ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:35.934504 containerd[1439]: 2024-06-25 18:44:35.827 [INFO][4289] utils.go 188: Calico CNI releasing IP address ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:35.934504 containerd[1439]: 2024-06-25 18:44:35.919 [INFO][4306] ipam_plugin.go 411: Releasing address using handleID ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" HandleID="k8s-pod-network.800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Workload="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:35.934504 containerd[1439]: 2024-06-25 18:44:35.919 [INFO][4306] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:35.934504 containerd[1439]: 2024-06-25 18:44:35.919 [INFO][4306] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:44:35.934504 containerd[1439]: 2024-06-25 18:44:35.929 [WARNING][4306] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" HandleID="k8s-pod-network.800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Workload="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:35.934504 containerd[1439]: 2024-06-25 18:44:35.929 [INFO][4306] ipam_plugin.go 439: Releasing address using workloadID ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" HandleID="k8s-pod-network.800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Workload="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:35.934504 containerd[1439]: 2024-06-25 18:44:35.931 [INFO][4306] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:35.934504 containerd[1439]: 2024-06-25 18:44:35.932 [INFO][4289] k8s.go 621: Teardown processing complete. ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:35.936553 systemd[1]: run-netns-cni\x2d26730f42\x2d8fc2\x2de0b7\x2d3735\x2d1f8fe65d5c98.mount: Deactivated successfully. 
Jun 25 18:44:35.936670 containerd[1439]: time="2024-06-25T18:44:35.936564972Z" level=info msg="TearDown network for sandbox \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\" successfully" Jun 25 18:44:35.936670 containerd[1439]: time="2024-06-25T18:44:35.936596652Z" level=info msg="StopPodSandbox for \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\" returns successfully" Jun 25 18:44:35.936978 kubelet[2489]: E0625 18:44:35.936948 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:35.937383 containerd[1439]: time="2024-06-25T18:44:35.937341180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cgvbt,Uid:320816e0-861f-4b3b-bda1-d52532a4e96c,Namespace:kube-system,Attempt:1,}" Jun 25 18:44:36.075692 systemd-networkd[1379]: cali56f4cce878a: Link UP Jun 25 18:44:36.076498 systemd-networkd[1379]: cali56f4cce878a: Gained carrier Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:35.984 [INFO][4321] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--cgvbt-eth0 coredns-5dd5756b68- kube-system 320816e0-861f-4b3b-bda1-d52532a4e96c 871 0 2024-06-25 18:44:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-cgvbt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali56f4cce878a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" Namespace="kube-system" Pod="coredns-5dd5756b68-cgvbt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--cgvbt-" Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:35.984 [INFO][4321] 
k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" Namespace="kube-system" Pod="coredns-5dd5756b68-cgvbt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.016 [INFO][4340] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" HandleID="k8s-pod-network.ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" Workload="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.030 [INFO][4340] ipam_plugin.go 264: Auto assigning IP ContainerID="ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" HandleID="k8s-pod-network.ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" Workload="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000129f20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-cgvbt", "timestamp":"2024-06-25 18:44:36.016744766 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.030 [INFO][4340] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.030 [INFO][4340] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.030 [INFO][4340] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.032 [INFO][4340] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" host="localhost" Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.039 [INFO][4340] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.044 [INFO][4340] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.046 [INFO][4340] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.051 [INFO][4340] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.051 [INFO][4340] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" host="localhost" Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.057 [INFO][4340] ipam.go 1685: Creating new handle: k8s-pod-network.ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874 Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.061 [INFO][4340] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" host="localhost" Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.067 [INFO][4340] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" 
host="localhost" Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.067 [INFO][4340] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" host="localhost" Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.067 [INFO][4340] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:36.088502 containerd[1439]: 2024-06-25 18:44:36.068 [INFO][4340] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" HandleID="k8s-pod-network.ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" Workload="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:36.089114 containerd[1439]: 2024-06-25 18:44:36.072 [INFO][4321] k8s.go 386: Populated endpoint ContainerID="ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" Namespace="kube-system" Pod="coredns-5dd5756b68-cgvbt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--cgvbt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"320816e0-861f-4b3b-bda1-d52532a4e96c", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-cgvbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56f4cce878a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:36.089114 containerd[1439]: 2024-06-25 18:44:36.072 [INFO][4321] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" Namespace="kube-system" Pod="coredns-5dd5756b68-cgvbt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:36.089114 containerd[1439]: 2024-06-25 18:44:36.072 [INFO][4321] dataplane_linux.go 68: Setting the host side veth name to cali56f4cce878a ContainerID="ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" Namespace="kube-system" Pod="coredns-5dd5756b68-cgvbt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:36.089114 containerd[1439]: 2024-06-25 18:44:36.074 [INFO][4321] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" Namespace="kube-system" Pod="coredns-5dd5756b68-cgvbt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:36.089114 containerd[1439]: 2024-06-25 18:44:36.074 [INFO][4321] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" Namespace="kube-system" Pod="coredns-5dd5756b68-cgvbt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--cgvbt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"320816e0-861f-4b3b-bda1-d52532a4e96c", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874", Pod:"coredns-5dd5756b68-cgvbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56f4cce878a", MAC:"a2:dd:ab:c7:a7:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:36.089114 containerd[1439]: 2024-06-25 18:44:36.082 [INFO][4321] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874" Namespace="kube-system" Pod="coredns-5dd5756b68-cgvbt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:36.124795 containerd[1439]: time="2024-06-25T18:44:36.123959785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:36.124795 containerd[1439]: time="2024-06-25T18:44:36.124019586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:36.124795 containerd[1439]: time="2024-06-25T18:44:36.124043226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:36.124795 containerd[1439]: time="2024-06-25T18:44:36.124061586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:36.161790 systemd[1]: Started cri-containerd-ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874.scope - libcontainer container ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874. 
Jun 25 18:44:36.176906 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:44:36.193420 containerd[1439]: time="2024-06-25T18:44:36.193382376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cgvbt,Uid:320816e0-861f-4b3b-bda1-d52532a4e96c,Namespace:kube-system,Attempt:1,} returns sandbox id \"ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874\"" Jun 25 18:44:36.194504 kubelet[2489]: E0625 18:44:36.194481 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:36.198103 containerd[1439]: time="2024-06-25T18:44:36.198065504Z" level=info msg="CreateContainer within sandbox \"ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:44:36.211084 sshd[4314]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:36.215208 containerd[1439]: time="2024-06-25T18:44:36.215046398Z" level=info msg="CreateContainer within sandbox \"ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5be64fb8471c431997eb88824448a1ffadbc00079a6700110943c7751f2a313\"" Jun 25 18:44:36.215921 containerd[1439]: time="2024-06-25T18:44:36.215887567Z" level=info msg="StartContainer for \"e5be64fb8471c431997eb88824448a1ffadbc00079a6700110943c7751f2a313\"" Jun 25 18:44:36.219428 systemd[1]: sshd@10-10.0.0.123:22-10.0.0.1:55590.service: Deactivated successfully. Jun 25 18:44:36.222099 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:44:36.225221 systemd-logind[1417]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:44:36.235538 systemd[1]: Started sshd@11-10.0.0.123:22-10.0.0.1:55592.service - OpenSSH per-connection server daemon (10.0.0.1:55592). 
Jun 25 18:44:36.239234 systemd-logind[1417]: Removed session 11. Jun 25 18:44:36.252785 systemd[1]: Started cri-containerd-e5be64fb8471c431997eb88824448a1ffadbc00079a6700110943c7751f2a313.scope - libcontainer container e5be64fb8471c431997eb88824448a1ffadbc00079a6700110943c7751f2a313. Jun 25 18:44:36.274938 sshd[4412]: Accepted publickey for core from 10.0.0.1 port 55592 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:44:36.276248 sshd[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:36.278804 containerd[1439]: time="2024-06-25T18:44:36.278154645Z" level=info msg="StartContainer for \"e5be64fb8471c431997eb88824448a1ffadbc00079a6700110943c7751f2a313\" returns successfully" Jun 25 18:44:36.284132 systemd-logind[1417]: New session 12 of user core. Jun 25 18:44:36.299800 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:44:36.418340 sshd[4412]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:36.421731 systemd[1]: sshd@11-10.0.0.123:22-10.0.0.1:55592.service: Deactivated successfully. Jun 25 18:44:36.423420 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:44:36.424011 systemd-logind[1417]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:44:36.424791 systemd-logind[1417]: Removed session 12. 
Jun 25 18:44:36.924343 kubelet[2489]: E0625 18:44:36.924308 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:36.950495 kubelet[2489]: I0625 18:44:36.950449 2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-cgvbt" podStartSLOduration=33.950398253 podCreationTimestamp="2024-06-25 18:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:36.934936495 +0000 UTC m=+46.256906838" watchObservedRunningTime="2024-06-25 18:44:36.950398253 +0000 UTC m=+46.272368476" Jun 25 18:44:37.888908 systemd-networkd[1379]: cali56f4cce878a: Gained IPv6LL Jun 25 18:44:37.925533 kubelet[2489]: E0625 18:44:37.925506 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:38.927824 kubelet[2489]: E0625 18:44:38.927692 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:39.760426 containerd[1439]: time="2024-06-25T18:44:39.760354739Z" level=info msg="StopPodSandbox for \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\"" Jun 25 18:44:39.760807 containerd[1439]: time="2024-06-25T18:44:39.760365940Z" level=info msg="StopPodSandbox for \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\"" Jun 25 18:44:39.761014 containerd[1439]: time="2024-06-25T18:44:39.760365940Z" level=info msg="StopPodSandbox for \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\"" Jun 25 18:44:39.861910 containerd[1439]: 2024-06-25 18:44:39.816 [INFO][4523] k8s.go 608: Cleaning up netns 
ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:39.861910 containerd[1439]: 2024-06-25 18:44:39.817 [INFO][4523] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" iface="eth0" netns="/var/run/netns/cni-a9050451-a8a8-7a31-5b21-8912e217234c" Jun 25 18:44:39.861910 containerd[1439]: 2024-06-25 18:44:39.817 [INFO][4523] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" iface="eth0" netns="/var/run/netns/cni-a9050451-a8a8-7a31-5b21-8912e217234c" Jun 25 18:44:39.861910 containerd[1439]: 2024-06-25 18:44:39.817 [INFO][4523] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" iface="eth0" netns="/var/run/netns/cni-a9050451-a8a8-7a31-5b21-8912e217234c" Jun 25 18:44:39.861910 containerd[1439]: 2024-06-25 18:44:39.818 [INFO][4523] k8s.go 615: Releasing IP address(es) ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:39.861910 containerd[1439]: 2024-06-25 18:44:39.818 [INFO][4523] utils.go 188: Calico CNI releasing IP address ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:39.861910 containerd[1439]: 2024-06-25 18:44:39.848 [INFO][4543] ipam_plugin.go 411: Releasing address using handleID ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" HandleID="k8s-pod-network.d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Workload="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:39.861910 containerd[1439]: 2024-06-25 18:44:39.848 [INFO][4543] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:39.861910 containerd[1439]: 2024-06-25 18:44:39.848 [INFO][4543] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:44:39.861910 containerd[1439]: 2024-06-25 18:44:39.857 [WARNING][4543] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" HandleID="k8s-pod-network.d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Workload="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:39.861910 containerd[1439]: 2024-06-25 18:44:39.857 [INFO][4543] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" HandleID="k8s-pod-network.d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Workload="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:39.861910 containerd[1439]: 2024-06-25 18:44:39.858 [INFO][4543] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:39.861910 containerd[1439]: 2024-06-25 18:44:39.860 [INFO][4523] k8s.go 621: Teardown processing complete. ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:39.863137 containerd[1439]: time="2024-06-25T18:44:39.862924857Z" level=info msg="TearDown network for sandbox \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\" successfully" Jun 25 18:44:39.863541 containerd[1439]: time="2024-06-25T18:44:39.863307940Z" level=info msg="StopPodSandbox for \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\" returns successfully" Jun 25 18:44:39.864927 kubelet[2489]: E0625 18:44:39.864886 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:39.865352 systemd[1]: run-netns-cni\x2da9050451\x2da8a8\x2d7a31\x2d5b21\x2d8912e217234c.mount: Deactivated successfully. 
Jun 25 18:44:39.865728 containerd[1439]: time="2024-06-25T18:44:39.865537602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lwd6d,Uid:a41210ea-56a1-4e5b-869d-62550a89978d,Namespace:kube-system,Attempt:1,}" Jun 25 18:44:39.876817 containerd[1439]: 2024-06-25 18:44:39.820 [INFO][4524] k8s.go 608: Cleaning up netns ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:39.876817 containerd[1439]: 2024-06-25 18:44:39.820 [INFO][4524] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" iface="eth0" netns="/var/run/netns/cni-caeea87e-12fc-dfff-9042-dcd4ee38fec3" Jun 25 18:44:39.876817 containerd[1439]: 2024-06-25 18:44:39.821 [INFO][4524] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" iface="eth0" netns="/var/run/netns/cni-caeea87e-12fc-dfff-9042-dcd4ee38fec3" Jun 25 18:44:39.876817 containerd[1439]: 2024-06-25 18:44:39.821 [INFO][4524] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" iface="eth0" netns="/var/run/netns/cni-caeea87e-12fc-dfff-9042-dcd4ee38fec3" Jun 25 18:44:39.876817 containerd[1439]: 2024-06-25 18:44:39.821 [INFO][4524] k8s.go 615: Releasing IP address(es) ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:39.876817 containerd[1439]: 2024-06-25 18:44:39.821 [INFO][4524] utils.go 188: Calico CNI releasing IP address ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:39.876817 containerd[1439]: 2024-06-25 18:44:39.848 [INFO][4552] ipam_plugin.go 411: Releasing address using handleID ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" HandleID="k8s-pod-network.efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Workload="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:39.876817 containerd[1439]: 2024-06-25 18:44:39.849 [INFO][4552] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:39.876817 containerd[1439]: 2024-06-25 18:44:39.858 [INFO][4552] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:44:39.876817 containerd[1439]: 2024-06-25 18:44:39.870 [WARNING][4552] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" HandleID="k8s-pod-network.efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Workload="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:39.876817 containerd[1439]: 2024-06-25 18:44:39.870 [INFO][4552] ipam_plugin.go 439: Releasing address using workloadID ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" HandleID="k8s-pod-network.efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Workload="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:39.876817 containerd[1439]: 2024-06-25 18:44:39.871 [INFO][4552] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:39.876817 containerd[1439]: 2024-06-25 18:44:39.874 [INFO][4524] k8s.go 621: Teardown processing complete. ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:39.879748 containerd[1439]: time="2024-06-25T18:44:39.877693840Z" level=info msg="TearDown network for sandbox \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\" successfully" Jun 25 18:44:39.879748 containerd[1439]: time="2024-06-25T18:44:39.877726040Z" level=info msg="StopPodSandbox for \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\" returns successfully" Jun 25 18:44:39.881439 containerd[1439]: time="2024-06-25T18:44:39.880982272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g4ws7,Uid:42da1b33-d6af-464d-8bc6-37e59885f0c5,Namespace:calico-system,Attempt:1,}" Jun 25 18:44:39.884616 systemd[1]: run-netns-cni\x2dcaeea87e\x2d12fc\x2ddfff\x2d9042\x2ddcd4ee38fec3.mount: Deactivated successfully. 
Jun 25 18:44:39.895957 containerd[1439]: 2024-06-25 18:44:39.817 [INFO][4509] k8s.go 608: Cleaning up netns ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:39.895957 containerd[1439]: 2024-06-25 18:44:39.817 [INFO][4509] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" iface="eth0" netns="/var/run/netns/cni-08a235d9-8c2e-83e8-32f4-22dbefc4637f" Jun 25 18:44:39.895957 containerd[1439]: 2024-06-25 18:44:39.817 [INFO][4509] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" iface="eth0" netns="/var/run/netns/cni-08a235d9-8c2e-83e8-32f4-22dbefc4637f" Jun 25 18:44:39.895957 containerd[1439]: 2024-06-25 18:44:39.817 [INFO][4509] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" iface="eth0" netns="/var/run/netns/cni-08a235d9-8c2e-83e8-32f4-22dbefc4637f" Jun 25 18:44:39.895957 containerd[1439]: 2024-06-25 18:44:39.817 [INFO][4509] k8s.go 615: Releasing IP address(es) ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:39.895957 containerd[1439]: 2024-06-25 18:44:39.817 [INFO][4509] utils.go 188: Calico CNI releasing IP address ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:39.895957 containerd[1439]: 2024-06-25 18:44:39.852 [INFO][4542] ipam_plugin.go 411: Releasing address using handleID ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" HandleID="k8s-pod-network.bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Workload="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:39.895957 containerd[1439]: 2024-06-25 18:44:39.852 [INFO][4542] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 18:44:39.895957 containerd[1439]: 2024-06-25 18:44:39.871 [INFO][4542] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:44:39.895957 containerd[1439]: 2024-06-25 18:44:39.884 [WARNING][4542] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" HandleID="k8s-pod-network.bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Workload="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:39.895957 containerd[1439]: 2024-06-25 18:44:39.884 [INFO][4542] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" HandleID="k8s-pod-network.bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Workload="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:39.895957 containerd[1439]: 2024-06-25 18:44:39.886 [INFO][4542] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:39.895957 containerd[1439]: 2024-06-25 18:44:39.890 [INFO][4509] k8s.go 621: Teardown processing complete. 
ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:39.896562 containerd[1439]: time="2024-06-25T18:44:39.896463383Z" level=info msg="TearDown network for sandbox \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\" successfully" Jun 25 18:44:39.896562 containerd[1439]: time="2024-06-25T18:44:39.896504983Z" level=info msg="StopPodSandbox for \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\" returns successfully" Jun 25 18:44:39.898096 containerd[1439]: time="2024-06-25T18:44:39.898066518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6744788b-kzz4x,Uid:9373fe30-4e66-4a3f-b97d-c1995dacde38,Namespace:calico-system,Attempt:1,}" Jun 25 18:44:40.012330 systemd-networkd[1379]: cali2ea737bfd34: Link UP Jun 25 18:44:40.013038 systemd-networkd[1379]: cali2ea737bfd34: Gained carrier Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.920 [INFO][4566] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--lwd6d-eth0 coredns-5dd5756b68- kube-system a41210ea-56a1-4e5b-869d-62550a89978d 926 0 2024-06-25 18:44:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-lwd6d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ea737bfd34 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" Namespace="kube-system" Pod="coredns-5dd5756b68-lwd6d" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--lwd6d-" Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.921 [INFO][4566] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" 
Namespace="kube-system" Pod="coredns-5dd5756b68-lwd6d" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.956 [INFO][4607] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" HandleID="k8s-pod-network.b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" Workload="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.972 [INFO][4607] ipam_plugin.go 264: Auto assigning IP ContainerID="b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" HandleID="k8s-pod-network.b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" Workload="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000306c60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-lwd6d", "timestamp":"2024-06-25 18:44:39.956592927 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.972 [INFO][4607] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.972 [INFO][4607] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.972 [INFO][4607] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.974 [INFO][4607] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" host="localhost" Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.984 [INFO][4607] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.988 [INFO][4607] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.990 [INFO][4607] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.993 [INFO][4607] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.993 [INFO][4607] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" host="localhost" Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:39.998 [INFO][4607] ipam.go 1685: Creating new handle: k8s-pod-network.b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9 Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:40.001 [INFO][4607] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" host="localhost" Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:40.007 [INFO][4607] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" 
host="localhost" Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:40.007 [INFO][4607] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" host="localhost" Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:40.007 [INFO][4607] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:40.026088 containerd[1439]: 2024-06-25 18:44:40.007 [INFO][4607] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" HandleID="k8s-pod-network.b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" Workload="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:40.027906 containerd[1439]: 2024-06-25 18:44:40.009 [INFO][4566] k8s.go 386: Populated endpoint ContainerID="b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" Namespace="kube-system" Pod="coredns-5dd5756b68-lwd6d" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--lwd6d-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a41210ea-56a1-4e5b-869d-62550a89978d", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-lwd6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ea737bfd34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:40.027906 containerd[1439]: 2024-06-25 18:44:40.010 [INFO][4566] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" Namespace="kube-system" Pod="coredns-5dd5756b68-lwd6d" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:40.027906 containerd[1439]: 2024-06-25 18:44:40.010 [INFO][4566] dataplane_linux.go 68: Setting the host side veth name to cali2ea737bfd34 ContainerID="b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" Namespace="kube-system" Pod="coredns-5dd5756b68-lwd6d" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:40.027906 containerd[1439]: 2024-06-25 18:44:40.013 [INFO][4566] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" Namespace="kube-system" Pod="coredns-5dd5756b68-lwd6d" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:40.027906 containerd[1439]: 2024-06-25 18:44:40.014 [INFO][4566] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" Namespace="kube-system" Pod="coredns-5dd5756b68-lwd6d" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--lwd6d-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a41210ea-56a1-4e5b-869d-62550a89978d", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9", Pod:"coredns-5dd5756b68-lwd6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ea737bfd34", MAC:"ba:da:42:09:c7:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:40.027906 containerd[1439]: 2024-06-25 18:44:40.021 [INFO][4566] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9" Namespace="kube-system" Pod="coredns-5dd5756b68-lwd6d" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:40.046697 systemd-networkd[1379]: cali73ad0c87038: Link UP Jun 25 18:44:40.048713 systemd-networkd[1379]: cali73ad0c87038: Gained carrier Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:39.950 [INFO][4598] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0 calico-kube-controllers-f6744788b- calico-system 9373fe30-4e66-4a3f-b97d-c1995dacde38 927 0 2024-06-25 18:44:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f6744788b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-f6744788b-kzz4x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali73ad0c87038 [] []}} ContainerID="82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" Namespace="calico-system" Pod="calico-kube-controllers-f6744788b-kzz4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-" Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:39.950 [INFO][4598] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" Namespace="calico-system" Pod="calico-kube-controllers-f6744788b-kzz4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:39.991 [INFO][4623] ipam_plugin.go 224: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" HandleID="k8s-pod-network.82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" Workload="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.007 [INFO][4623] ipam_plugin.go 264: Auto assigning IP ContainerID="82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" HandleID="k8s-pod-network.82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" Workload="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000372470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-f6744788b-kzz4x", "timestamp":"2024-06-25 18:44:39.991487626 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.007 [INFO][4623] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.007 [INFO][4623] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.007 [INFO][4623] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.009 [INFO][4623] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" host="localhost" Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.014 [INFO][4623] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.022 [INFO][4623] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.025 [INFO][4623] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.027 [INFO][4623] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.027 [INFO][4623] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" host="localhost" Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.030 [INFO][4623] ipam.go 1685: Creating new handle: k8s-pod-network.82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163 Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.034 [INFO][4623] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" host="localhost" Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.042 [INFO][4623] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" 
host="localhost" Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.042 [INFO][4623] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" host="localhost" Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.042 [INFO][4623] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:40.068878 containerd[1439]: 2024-06-25 18:44:40.042 [INFO][4623] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" HandleID="k8s-pod-network.82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" Workload="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:40.069470 containerd[1439]: 2024-06-25 18:44:40.044 [INFO][4598] k8s.go 386: Populated endpoint ContainerID="82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" Namespace="calico-system" Pod="calico-kube-controllers-f6744788b-kzz4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0", GenerateName:"calico-kube-controllers-f6744788b-", Namespace:"calico-system", SelfLink:"", UID:"9373fe30-4e66-4a3f-b97d-c1995dacde38", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f6744788b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-f6744788b-kzz4x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73ad0c87038", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:40.069470 containerd[1439]: 2024-06-25 18:44:40.044 [INFO][4598] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" Namespace="calico-system" Pod="calico-kube-controllers-f6744788b-kzz4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:40.069470 containerd[1439]: 2024-06-25 18:44:40.044 [INFO][4598] dataplane_linux.go 68: Setting the host side veth name to cali73ad0c87038 ContainerID="82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" Namespace="calico-system" Pod="calico-kube-controllers-f6744788b-kzz4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:40.069470 containerd[1439]: 2024-06-25 18:44:40.046 [INFO][4598] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" Namespace="calico-system" Pod="calico-kube-controllers-f6744788b-kzz4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:40.069470 containerd[1439]: 2024-06-25 18:44:40.049 [INFO][4598] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" Namespace="calico-system" 
Pod="calico-kube-controllers-f6744788b-kzz4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0", GenerateName:"calico-kube-controllers-f6744788b-", Namespace:"calico-system", SelfLink:"", UID:"9373fe30-4e66-4a3f-b97d-c1995dacde38", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f6744788b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163", Pod:"calico-kube-controllers-f6744788b-kzz4x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73ad0c87038", MAC:"a6:d0:25:56:81:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:40.069470 containerd[1439]: 2024-06-25 18:44:40.060 [INFO][4598] k8s.go 500: Wrote updated endpoint to datastore ContainerID="82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163" Namespace="calico-system" Pod="calico-kube-controllers-f6744788b-kzz4x" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:40.082680 containerd[1439]: time="2024-06-25T18:44:40.080422959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:40.082680 containerd[1439]: time="2024-06-25T18:44:40.080473079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:40.082680 containerd[1439]: time="2024-06-25T18:44:40.080485999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:40.082680 containerd[1439]: time="2024-06-25T18:44:40.080495239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:40.091236 systemd-networkd[1379]: cali5c573c78a9f: Link UP Jun 25 18:44:40.091510 systemd-networkd[1379]: cali5c573c78a9f: Gained carrier Jun 25 18:44:40.103903 systemd[1]: Started cri-containerd-b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9.scope - libcontainer container b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9. 
Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:39.933 [INFO][4578] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--g4ws7-eth0 csi-node-driver- calico-system 42da1b33-d6af-464d-8bc6-37e59885f0c5 928 0 2024-06-25 18:44:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-g4ws7 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali5c573c78a9f [] []}} ContainerID="2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" Namespace="calico-system" Pod="csi-node-driver-g4ws7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g4ws7-" Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:39.933 [INFO][4578] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" Namespace="calico-system" Pod="csi-node-driver-g4ws7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:39.962 [INFO][4613] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" HandleID="k8s-pod-network.2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" Workload="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:39.972 [INFO][4613] ipam_plugin.go 264: Auto assigning IP ContainerID="2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" HandleID="k8s-pod-network.2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" Workload="localhost-k8s-csi--node--driver--g4ws7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0x400062b320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-g4ws7", "timestamp":"2024-06-25 18:44:39.962349543 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:39.972 [INFO][4613] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:40.042 [INFO][4613] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:40.042 [INFO][4613] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:40.044 [INFO][4613] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" host="localhost" Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:40.051 [INFO][4613] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:40.055 [INFO][4613] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:40.059 [INFO][4613] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:40.064 [INFO][4613] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:40.064 [INFO][4613] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" host="localhost" Jun 25 18:44:40.104890 
containerd[1439]: 2024-06-25 18:44:40.072 [INFO][4613] ipam.go 1685: Creating new handle: k8s-pod-network.2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668 Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:40.077 [INFO][4613] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" host="localhost" Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:40.084 [INFO][4613] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" host="localhost" Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:40.084 [INFO][4613] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" host="localhost" Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:40.084 [INFO][4613] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:44:40.104890 containerd[1439]: 2024-06-25 18:44:40.084 [INFO][4613] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" HandleID="k8s-pod-network.2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" Workload="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:40.106511 containerd[1439]: 2024-06-25 18:44:40.088 [INFO][4578] k8s.go 386: Populated endpoint ContainerID="2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" Namespace="calico-system" Pod="csi-node-driver-g4ws7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g4ws7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g4ws7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"42da1b33-d6af-464d-8bc6-37e59885f0c5", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-g4ws7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"cali5c573c78a9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:40.106511 containerd[1439]: 2024-06-25 18:44:40.088 [INFO][4578] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" Namespace="calico-system" Pod="csi-node-driver-g4ws7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:40.106511 containerd[1439]: 2024-06-25 18:44:40.088 [INFO][4578] dataplane_linux.go 68: Setting the host side veth name to cali5c573c78a9f ContainerID="2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" Namespace="calico-system" Pod="csi-node-driver-g4ws7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:40.106511 containerd[1439]: 2024-06-25 18:44:40.091 [INFO][4578] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" Namespace="calico-system" Pod="csi-node-driver-g4ws7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:40.106511 containerd[1439]: 2024-06-25 18:44:40.091 [INFO][4578] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" Namespace="calico-system" Pod="csi-node-driver-g4ws7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g4ws7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g4ws7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"42da1b33-d6af-464d-8bc6-37e59885f0c5", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668", Pod:"csi-node-driver-g4ws7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5c573c78a9f", MAC:"0e:0c:15:e6:b0:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:40.106511 containerd[1439]: 2024-06-25 18:44:40.100 [INFO][4578] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668" Namespace="calico-system" Pod="csi-node-driver-g4ws7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:40.106511 containerd[1439]: time="2024-06-25T18:44:40.105883962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:40.106511 containerd[1439]: time="2024-06-25T18:44:40.105936283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:40.106511 containerd[1439]: time="2024-06-25T18:44:40.105959803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:40.106511 containerd[1439]: time="2024-06-25T18:44:40.105976483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:40.121418 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:44:40.132015 containerd[1439]: time="2024-06-25T18:44:40.131845131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:40.132015 containerd[1439]: time="2024-06-25T18:44:40.131904251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:40.132015 containerd[1439]: time="2024-06-25T18:44:40.131922091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:40.132015 containerd[1439]: time="2024-06-25T18:44:40.131935532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:40.135815 systemd[1]: Started cri-containerd-82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163.scope - libcontainer container 82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163. 
Jun 25 18:44:40.151779 containerd[1439]: time="2024-06-25T18:44:40.150346188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lwd6d,Uid:a41210ea-56a1-4e5b-869d-62550a89978d,Namespace:kube-system,Attempt:1,} returns sandbox id \"b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9\"" Jun 25 18:44:40.151867 kubelet[2489]: E0625 18:44:40.151463 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:40.156442 containerd[1439]: time="2024-06-25T18:44:40.154230945Z" level=info msg="CreateContainer within sandbox \"b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:44:40.155803 systemd[1]: Started cri-containerd-2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668.scope - libcontainer container 2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668. 
Jun 25 18:44:40.164492 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:44:40.168839 containerd[1439]: time="2024-06-25T18:44:40.168802604Z" level=info msg="CreateContainer within sandbox \"b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ddbdae2254c7b4c53d3ffb08b89f1da9ef16a4f822af14002b6ad78e6c15ecf\"" Jun 25 18:44:40.169952 containerd[1439]: time="2024-06-25T18:44:40.169923695Z" level=info msg="StartContainer for \"6ddbdae2254c7b4c53d3ffb08b89f1da9ef16a4f822af14002b6ad78e6c15ecf\"" Jun 25 18:44:40.182405 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:44:40.195515 containerd[1439]: time="2024-06-25T18:44:40.195085416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6744788b-kzz4x,Uid:9373fe30-4e66-4a3f-b97d-c1995dacde38,Namespace:calico-system,Attempt:1,} returns sandbox id \"82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163\"" Jun 25 18:44:40.199339 containerd[1439]: time="2024-06-25T18:44:40.197382598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 18:44:40.200887 containerd[1439]: time="2024-06-25T18:44:40.200855871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g4ws7,Uid:42da1b33-d6af-464d-8bc6-37e59885f0c5,Namespace:calico-system,Attempt:1,} returns sandbox id \"2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668\"" Jun 25 18:44:40.204861 systemd[1]: Started cri-containerd-6ddbdae2254c7b4c53d3ffb08b89f1da9ef16a4f822af14002b6ad78e6c15ecf.scope - libcontainer container 6ddbdae2254c7b4c53d3ffb08b89f1da9ef16a4f822af14002b6ad78e6c15ecf. 
Jun 25 18:44:40.229484 containerd[1439]: time="2024-06-25T18:44:40.229439744Z" level=info msg="StartContainer for \"6ddbdae2254c7b4c53d3ffb08b89f1da9ef16a4f822af14002b6ad78e6c15ecf\" returns successfully" Jun 25 18:44:40.868305 systemd[1]: run-netns-cni\x2d08a235d9\x2d8c2e\x2d83e8\x2d32f4\x2d22dbefc4637f.mount: Deactivated successfully. Jun 25 18:44:40.931739 kubelet[2489]: E0625 18:44:40.931576 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:40.942746 kubelet[2489]: I0625 18:44:40.941831 2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lwd6d" podStartSLOduration=37.94179688 podCreationTimestamp="2024-06-25 18:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:40.941010913 +0000 UTC m=+50.262981096" watchObservedRunningTime="2024-06-25 18:44:40.94179688 +0000 UTC m=+50.263767103" Jun 25 18:44:41.350867 containerd[1439]: time="2024-06-25T18:44:41.350816183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:41.352362 containerd[1439]: time="2024-06-25T18:44:41.352331117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jun 25 18:44:41.353433 containerd[1439]: time="2024-06-25T18:44:41.353401927Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:41.355295 containerd[1439]: time="2024-06-25T18:44:41.355262665Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:41.356146 containerd[1439]: time="2024-06-25T18:44:41.356112793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.158683115s" Jun 25 18:44:41.356201 containerd[1439]: time="2024-06-25T18:44:41.356155713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jun 25 18:44:41.358003 containerd[1439]: time="2024-06-25T18:44:41.357735968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:44:41.363995 containerd[1439]: time="2024-06-25T18:44:41.363961467Z" level=info msg="CreateContainer within sandbox \"82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 18:44:41.375891 containerd[1439]: time="2024-06-25T18:44:41.375838379Z" level=info msg="CreateContainer within sandbox \"82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4357b97812f02e7e44708394bf6806a87226024e37d6af5c0e55a4b46983a2cf\"" Jun 25 18:44:41.376623 containerd[1439]: time="2024-06-25T18:44:41.376353904Z" level=info msg="StartContainer for \"4357b97812f02e7e44708394bf6806a87226024e37d6af5c0e55a4b46983a2cf\"" Jun 25 18:44:41.403801 systemd[1]: Started cri-containerd-4357b97812f02e7e44708394bf6806a87226024e37d6af5c0e55a4b46983a2cf.scope - 
libcontainer container 4357b97812f02e7e44708394bf6806a87226024e37d6af5c0e55a4b46983a2cf. Jun 25 18:44:41.409792 systemd-networkd[1379]: cali5c573c78a9f: Gained IPv6LL Jun 25 18:44:41.428832 systemd[1]: Started sshd@12-10.0.0.123:22-10.0.0.1:43646.service - OpenSSH per-connection server daemon (10.0.0.1:43646). Jun 25 18:44:41.447356 containerd[1439]: time="2024-06-25T18:44:41.447305772Z" level=info msg="StartContainer for \"4357b97812f02e7e44708394bf6806a87226024e37d6af5c0e55a4b46983a2cf\" returns successfully" Jun 25 18:44:41.522575 sshd[4880]: Accepted publickey for core from 10.0.0.1 port 43646 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:44:41.524519 sshd[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:41.529614 systemd-logind[1417]: New session 13 of user core. Jun 25 18:44:41.542813 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:44:41.753025 sshd[4880]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:41.762503 systemd[1]: sshd@12-10.0.0.123:22-10.0.0.1:43646.service: Deactivated successfully. Jun 25 18:44:41.766886 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:44:41.767668 systemd-logind[1417]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:44:41.777196 systemd[1]: Started sshd@13-10.0.0.123:22-10.0.0.1:43660.service - OpenSSH per-connection server daemon (10.0.0.1:43660). Jun 25 18:44:41.778211 systemd-logind[1417]: Removed session 13. Jun 25 18:44:41.793840 systemd-networkd[1379]: cali2ea737bfd34: Gained IPv6LL Jun 25 18:44:41.813205 sshd[4907]: Accepted publickey for core from 10.0.0.1 port 43660 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:44:41.814594 sshd[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:41.823810 systemd-logind[1417]: New session 14 of user core. 
Jun 25 18:44:41.831796 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:44:41.942972 kubelet[2489]: E0625 18:44:41.942369 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:41.961833 kubelet[2489]: I0625 18:44:41.959371 2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f6744788b-kzz4x" podStartSLOduration=29.799293389 podCreationTimestamp="2024-06-25 18:44:11 +0000 UTC" firstStartedPulling="2024-06-25 18:44:40.196491549 +0000 UTC m=+49.518461772" lastFinishedPulling="2024-06-25 18:44:41.356527637 +0000 UTC m=+50.678497860" observedRunningTime="2024-06-25 18:44:41.954882636 +0000 UTC m=+51.276852899" watchObservedRunningTime="2024-06-25 18:44:41.959329477 +0000 UTC m=+51.281299700" Jun 25 18:44:41.984852 systemd-networkd[1379]: cali73ad0c87038: Gained IPv6LL Jun 25 18:44:42.194524 sshd[4907]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:42.204356 systemd[1]: sshd@13-10.0.0.123:22-10.0.0.1:43660.service: Deactivated successfully. Jun 25 18:44:42.206936 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:44:42.209149 systemd-logind[1417]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:44:42.216327 systemd[1]: Started sshd@14-10.0.0.123:22-10.0.0.1:43672.service - OpenSSH per-connection server daemon (10.0.0.1:43672). Jun 25 18:44:42.218257 systemd-logind[1417]: Removed session 14. Jun 25 18:44:42.286574 sshd[4953]: Accepted publickey for core from 10.0.0.1 port 43672 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:44:42.288864 sshd[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:42.296439 systemd-logind[1417]: New session 15 of user core. Jun 25 18:44:42.301870 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 18:44:42.396970 containerd[1439]: time="2024-06-25T18:44:42.396922868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:42.398821 containerd[1439]: time="2024-06-25T18:44:42.397700675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jun 25 18:44:42.398821 containerd[1439]: time="2024-06-25T18:44:42.398481642Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:42.401103 containerd[1439]: time="2024-06-25T18:44:42.401074106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:42.401700 containerd[1439]: time="2024-06-25T18:44:42.401613391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.043844343s" Jun 25 18:44:42.401700 containerd[1439]: time="2024-06-25T18:44:42.401662832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jun 25 18:44:42.403826 containerd[1439]: time="2024-06-25T18:44:42.403706251Z" level=info msg="CreateContainer within sandbox \"2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:44:42.421061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2109966053.mount: Deactivated 
successfully. Jun 25 18:44:42.424172 containerd[1439]: time="2024-06-25T18:44:42.424135880Z" level=info msg="CreateContainer within sandbox \"2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5a476b83e47d7efefbccd851fbb45919b19c258b3e9fd4d6af18d67698f760b6\"" Jun 25 18:44:42.426249 containerd[1439]: time="2024-06-25T18:44:42.424906728Z" level=info msg="StartContainer for \"5a476b83e47d7efefbccd851fbb45919b19c258b3e9fd4d6af18d67698f760b6\"" Jun 25 18:44:42.449822 systemd[1]: Started cri-containerd-5a476b83e47d7efefbccd851fbb45919b19c258b3e9fd4d6af18d67698f760b6.scope - libcontainer container 5a476b83e47d7efefbccd851fbb45919b19c258b3e9fd4d6af18d67698f760b6. Jun 25 18:44:42.475509 containerd[1439]: time="2024-06-25T18:44:42.475468757Z" level=info msg="StartContainer for \"5a476b83e47d7efefbccd851fbb45919b19c258b3e9fd4d6af18d67698f760b6\" returns successfully" Jun 25 18:44:42.477288 containerd[1439]: time="2024-06-25T18:44:42.476971771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:44:42.947815 kubelet[2489]: E0625 18:44:42.947743 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:43.140417 sshd[4953]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:43.154129 systemd[1]: sshd@14-10.0.0.123:22-10.0.0.1:43672.service: Deactivated successfully. Jun 25 18:44:43.157025 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:44:43.159924 systemd-logind[1417]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:44:43.167210 systemd[1]: Started sshd@15-10.0.0.123:22-10.0.0.1:43676.service - OpenSSH per-connection server daemon (10.0.0.1:43676). Jun 25 18:44:43.169041 systemd-logind[1417]: Removed session 15. 
Jun 25 18:44:43.208357 sshd[5012]: Accepted publickey for core from 10.0.0.1 port 43676 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:44:43.209868 sshd[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:43.213351 systemd-logind[1417]: New session 16 of user core. Jun 25 18:44:43.219789 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:44:43.456212 containerd[1439]: time="2024-06-25T18:44:43.455248320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:43.456212 containerd[1439]: time="2024-06-25T18:44:43.456161489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jun 25 18:44:43.456887 containerd[1439]: time="2024-06-25T18:44:43.456860255Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:43.459230 containerd[1439]: time="2024-06-25T18:44:43.459137876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:43.460793 containerd[1439]: time="2024-06-25T18:44:43.460760491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 983.7566ms" Jun 25 18:44:43.460901 containerd[1439]: time="2024-06-25T18:44:43.460884492Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jun 25 18:44:43.462951 containerd[1439]: time="2024-06-25T18:44:43.462922631Z" level=info msg="CreateContainer within sandbox \"2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:44:43.486486 containerd[1439]: time="2024-06-25T18:44:43.486427206Z" level=info msg="CreateContainer within sandbox \"2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"be81385452a405501961bff173235e8ab116d83f21f22168fa6bb93e72616547\"" Jun 25 18:44:43.487140 containerd[1439]: time="2024-06-25T18:44:43.487103852Z" level=info msg="StartContainer for \"be81385452a405501961bff173235e8ab116d83f21f22168fa6bb93e72616547\"" Jun 25 18:44:43.526871 systemd[1]: Started cri-containerd-be81385452a405501961bff173235e8ab116d83f21f22168fa6bb93e72616547.scope - libcontainer container be81385452a405501961bff173235e8ab116d83f21f22168fa6bb93e72616547. Jun 25 18:44:43.581453 containerd[1439]: time="2024-06-25T18:44:43.581042993Z" level=info msg="StartContainer for \"be81385452a405501961bff173235e8ab116d83f21f22168fa6bb93e72616547\" returns successfully" Jun 25 18:44:43.587765 sshd[5012]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:43.596944 systemd[1]: sshd@15-10.0.0.123:22-10.0.0.1:43676.service: Deactivated successfully. Jun 25 18:44:43.600894 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:44:43.603036 systemd-logind[1417]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:44:43.606326 systemd-logind[1417]: Removed session 16. Jun 25 18:44:43.619057 systemd[1]: Started sshd@16-10.0.0.123:22-10.0.0.1:43690.service - OpenSSH per-connection server daemon (10.0.0.1:43690). 
Jun 25 18:44:43.659571 sshd[5064]: Accepted publickey for core from 10.0.0.1 port 43690 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:44:43.661119 sshd[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:43.665214 systemd-logind[1417]: New session 17 of user core. Jun 25 18:44:43.678083 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:44:43.808671 sshd[5064]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:43.815140 systemd[1]: sshd@16-10.0.0.123:22-10.0.0.1:43690.service: Deactivated successfully. Jun 25 18:44:43.817845 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:44:43.819703 systemd-logind[1417]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:44:43.821051 systemd-logind[1417]: Removed session 17. Jun 25 18:44:43.844989 kubelet[2489]: I0625 18:44:43.844886 2489 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:44:43.844989 kubelet[2489]: I0625 18:44:43.844924 2489 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:44:43.962465 kubelet[2489]: I0625 18:44:43.962423 2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-g4ws7" podStartSLOduration=30.703631958 podCreationTimestamp="2024-06-25 18:44:10 +0000 UTC" firstStartedPulling="2024-06-25 18:44:40.202527047 +0000 UTC m=+49.524497270" lastFinishedPulling="2024-06-25 18:44:43.461242855 +0000 UTC m=+52.783213078" observedRunningTime="2024-06-25 18:44:43.961749961 +0000 UTC m=+53.283720184" watchObservedRunningTime="2024-06-25 18:44:43.962347766 +0000 UTC m=+53.284317989" Jun 25 18:44:48.831899 systemd[1]: Started sshd@17-10.0.0.123:22-10.0.0.1:43694.service - OpenSSH per-connection server 
daemon (10.0.0.1:43694). Jun 25 18:44:48.865601 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 43694 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:44:48.868041 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:48.873412 systemd-logind[1417]: New session 18 of user core. Jun 25 18:44:48.882888 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:44:48.987527 sshd[5087]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:48.990975 systemd[1]: sshd@17-10.0.0.123:22-10.0.0.1:43694.service: Deactivated successfully. Jun 25 18:44:48.991064 systemd-logind[1417]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:44:48.992901 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:44:48.994056 systemd-logind[1417]: Removed session 18. Jun 25 18:44:50.752066 containerd[1439]: time="2024-06-25T18:44:50.752027805Z" level=info msg="StopPodSandbox for \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\"" Jun 25 18:44:50.815734 containerd[1439]: 2024-06-25 18:44:50.786 [WARNING][5117] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0", GenerateName:"calico-kube-controllers-f6744788b-", Namespace:"calico-system", SelfLink:"", UID:"9373fe30-4e66-4a3f-b97d-c1995dacde38", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f6744788b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163", Pod:"calico-kube-controllers-f6744788b-kzz4x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73ad0c87038", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:50.815734 containerd[1439]: 2024-06-25 18:44:50.786 [INFO][5117] k8s.go 608: Cleaning up netns ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:50.815734 containerd[1439]: 2024-06-25 18:44:50.786 [INFO][5117] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" iface="eth0" netns="" Jun 25 18:44:50.815734 containerd[1439]: 2024-06-25 18:44:50.786 [INFO][5117] k8s.go 615: Releasing IP address(es) ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:50.815734 containerd[1439]: 2024-06-25 18:44:50.786 [INFO][5117] utils.go 188: Calico CNI releasing IP address ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:50.815734 containerd[1439]: 2024-06-25 18:44:50.802 [INFO][5128] ipam_plugin.go 411: Releasing address using handleID ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" HandleID="k8s-pod-network.bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Workload="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:50.815734 containerd[1439]: 2024-06-25 18:44:50.802 [INFO][5128] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:50.815734 containerd[1439]: 2024-06-25 18:44:50.802 [INFO][5128] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:44:50.815734 containerd[1439]: 2024-06-25 18:44:50.810 [WARNING][5128] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" HandleID="k8s-pod-network.bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Workload="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:50.815734 containerd[1439]: 2024-06-25 18:44:50.811 [INFO][5128] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" HandleID="k8s-pod-network.bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Workload="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:50.815734 containerd[1439]: 2024-06-25 18:44:50.812 [INFO][5128] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:50.815734 containerd[1439]: 2024-06-25 18:44:50.814 [INFO][5117] k8s.go 621: Teardown processing complete. ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:50.815734 containerd[1439]: time="2024-06-25T18:44:50.815613864Z" level=info msg="TearDown network for sandbox \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\" successfully" Jun 25 18:44:50.815734 containerd[1439]: time="2024-06-25T18:44:50.815653224Z" level=info msg="StopPodSandbox for \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\" returns successfully" Jun 25 18:44:50.816361 containerd[1439]: time="2024-06-25T18:44:50.816331470Z" level=info msg="RemovePodSandbox for \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\"" Jun 25 18:44:50.816420 containerd[1439]: time="2024-06-25T18:44:50.816376870Z" level=info msg="Forcibly stopping sandbox \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\"" Jun 25 18:44:50.884183 containerd[1439]: 2024-06-25 18:44:50.852 [WARNING][5151] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0", GenerateName:"calico-kube-controllers-f6744788b-", Namespace:"calico-system", SelfLink:"", UID:"9373fe30-4e66-4a3f-b97d-c1995dacde38", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f6744788b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82ca72fd4c91f6bd47fb8a2a55503be9f349ab93928249b43bb3c645ea2eb163", Pod:"calico-kube-controllers-f6744788b-kzz4x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73ad0c87038", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:50.884183 containerd[1439]: 2024-06-25 18:44:50.853 [INFO][5151] k8s.go 608: Cleaning up netns ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:50.884183 containerd[1439]: 2024-06-25 18:44:50.853 [INFO][5151] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" iface="eth0" netns="" Jun 25 18:44:50.884183 containerd[1439]: 2024-06-25 18:44:50.853 [INFO][5151] k8s.go 615: Releasing IP address(es) ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:50.884183 containerd[1439]: 2024-06-25 18:44:50.853 [INFO][5151] utils.go 188: Calico CNI releasing IP address ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:50.884183 containerd[1439]: 2024-06-25 18:44:50.870 [INFO][5158] ipam_plugin.go 411: Releasing address using handleID ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" HandleID="k8s-pod-network.bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Workload="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:50.884183 containerd[1439]: 2024-06-25 18:44:50.870 [INFO][5158] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:50.884183 containerd[1439]: 2024-06-25 18:44:50.871 [INFO][5158] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:44:50.884183 containerd[1439]: 2024-06-25 18:44:50.879 [WARNING][5158] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" HandleID="k8s-pod-network.bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Workload="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:50.884183 containerd[1439]: 2024-06-25 18:44:50.879 [INFO][5158] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" HandleID="k8s-pod-network.bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Workload="localhost-k8s-calico--kube--controllers--f6744788b--kzz4x-eth0" Jun 25 18:44:50.884183 containerd[1439]: 2024-06-25 18:44:50.880 [INFO][5158] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:50.884183 containerd[1439]: 2024-06-25 18:44:50.882 [INFO][5151] k8s.go 621: Teardown processing complete. ContainerID="bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773" Jun 25 18:44:50.885697 containerd[1439]: time="2024-06-25T18:44:50.884144284Z" level=info msg="TearDown network for sandbox \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\" successfully" Jun 25 18:44:50.890604 containerd[1439]: time="2024-06-25T18:44:50.890561339Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:44:50.890853 containerd[1439]: time="2024-06-25T18:44:50.890831341Z" level=info msg="RemovePodSandbox \"bbd0400555ea94c09712de1d1c5be3fd55e024786b8463bc2ad49dd6e2ac9773\" returns successfully" Jun 25 18:44:50.891411 containerd[1439]: time="2024-06-25T18:44:50.891382786Z" level=info msg="StopPodSandbox for \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\"" Jun 25 18:44:50.966177 containerd[1439]: 2024-06-25 18:44:50.925 [WARNING][5181] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g4ws7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"42da1b33-d6af-464d-8bc6-37e59885f0c5", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668", Pod:"csi-node-driver-g4ws7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali5c573c78a9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:50.966177 containerd[1439]: 2024-06-25 18:44:50.925 [INFO][5181] k8s.go 608: Cleaning up netns ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:50.966177 containerd[1439]: 2024-06-25 18:44:50.925 [INFO][5181] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" iface="eth0" netns="" Jun 25 18:44:50.966177 containerd[1439]: 2024-06-25 18:44:50.925 [INFO][5181] k8s.go 615: Releasing IP address(es) ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:50.966177 containerd[1439]: 2024-06-25 18:44:50.925 [INFO][5181] utils.go 188: Calico CNI releasing IP address ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:50.966177 containerd[1439]: 2024-06-25 18:44:50.952 [INFO][5188] ipam_plugin.go 411: Releasing address using handleID ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" HandleID="k8s-pod-network.efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Workload="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:50.966177 containerd[1439]: 2024-06-25 18:44:50.953 [INFO][5188] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:50.966177 containerd[1439]: 2024-06-25 18:44:50.953 [INFO][5188] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:44:50.966177 containerd[1439]: 2024-06-25 18:44:50.961 [WARNING][5188] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" HandleID="k8s-pod-network.efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Workload="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:50.966177 containerd[1439]: 2024-06-25 18:44:50.961 [INFO][5188] ipam_plugin.go 439: Releasing address using workloadID ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" HandleID="k8s-pod-network.efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Workload="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:50.966177 containerd[1439]: 2024-06-25 18:44:50.962 [INFO][5188] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:50.966177 containerd[1439]: 2024-06-25 18:44:50.964 [INFO][5181] k8s.go 621: Teardown processing complete. ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:50.966555 containerd[1439]: time="2024-06-25T18:44:50.966221860Z" level=info msg="TearDown network for sandbox \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\" successfully" Jun 25 18:44:50.966555 containerd[1439]: time="2024-06-25T18:44:50.966245180Z" level=info msg="StopPodSandbox for \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\" returns successfully" Jun 25 18:44:50.966555 containerd[1439]: time="2024-06-25T18:44:50.966483782Z" level=info msg="RemovePodSandbox for \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\"" Jun 25 18:44:50.966555 containerd[1439]: time="2024-06-25T18:44:50.966505622Z" level=info msg="Forcibly stopping sandbox \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\"" Jun 25 18:44:51.028897 containerd[1439]: 2024-06-25 18:44:50.999 [WARNING][5210] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g4ws7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"42da1b33-d6af-464d-8bc6-37e59885f0c5", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fcdfa8ffe19b63b9fee1eeeb097589b5569e6c967dcd763cdb536f1e4c76668", Pod:"csi-node-driver-g4ws7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5c573c78a9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:51.028897 containerd[1439]: 2024-06-25 18:44:51.000 [INFO][5210] k8s.go 608: Cleaning up netns ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:51.028897 containerd[1439]: 2024-06-25 18:44:51.000 [INFO][5210] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" iface="eth0" netns="" Jun 25 18:44:51.028897 containerd[1439]: 2024-06-25 18:44:51.000 [INFO][5210] k8s.go 615: Releasing IP address(es) ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:51.028897 containerd[1439]: 2024-06-25 18:44:51.000 [INFO][5210] utils.go 188: Calico CNI releasing IP address ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:51.028897 containerd[1439]: 2024-06-25 18:44:51.016 [INFO][5218] ipam_plugin.go 411: Releasing address using handleID ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" HandleID="k8s-pod-network.efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Workload="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:51.028897 containerd[1439]: 2024-06-25 18:44:51.016 [INFO][5218] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:51.028897 containerd[1439]: 2024-06-25 18:44:51.016 [INFO][5218] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:44:51.028897 containerd[1439]: 2024-06-25 18:44:51.024 [WARNING][5218] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" HandleID="k8s-pod-network.efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Workload="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:51.028897 containerd[1439]: 2024-06-25 18:44:51.024 [INFO][5218] ipam_plugin.go 439: Releasing address using workloadID ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" HandleID="k8s-pod-network.efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Workload="localhost-k8s-csi--node--driver--g4ws7-eth0" Jun 25 18:44:51.028897 containerd[1439]: 2024-06-25 18:44:51.026 [INFO][5218] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:44:51.028897 containerd[1439]: 2024-06-25 18:44:51.027 [INFO][5210] k8s.go 621: Teardown processing complete. ContainerID="efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9" Jun 25 18:44:51.028897 containerd[1439]: time="2024-06-25T18:44:51.028875828Z" level=info msg="TearDown network for sandbox \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\" successfully" Jun 25 18:44:51.031731 containerd[1439]: time="2024-06-25T18:44:51.031694212Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:44:51.031799 containerd[1439]: time="2024-06-25T18:44:51.031752012Z" level=info msg="RemovePodSandbox \"efe063675a67f93e1fd0dbcd3074ad27dc1502698e1a8f3ebf963f032e6c0aa9\" returns successfully" Jun 25 18:44:51.032253 containerd[1439]: time="2024-06-25T18:44:51.032215456Z" level=info msg="StopPodSandbox for \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\"" Jun 25 18:44:51.094886 containerd[1439]: 2024-06-25 18:44:51.064 [WARNING][5241] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--cgvbt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"320816e0-861f-4b3b-bda1-d52532a4e96c", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874", Pod:"coredns-5dd5756b68-cgvbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56f4cce878a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:51.094886 containerd[1439]: 2024-06-25 18:44:51.064 [INFO][5241] k8s.go 608: Cleaning up netns 
ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:51.094886 containerd[1439]: 2024-06-25 18:44:51.064 [INFO][5241] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" iface="eth0" netns="" Jun 25 18:44:51.094886 containerd[1439]: 2024-06-25 18:44:51.064 [INFO][5241] k8s.go 615: Releasing IP address(es) ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:51.094886 containerd[1439]: 2024-06-25 18:44:51.064 [INFO][5241] utils.go 188: Calico CNI releasing IP address ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:51.094886 containerd[1439]: 2024-06-25 18:44:51.081 [INFO][5250] ipam_plugin.go 411: Releasing address using handleID ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" HandleID="k8s-pod-network.800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Workload="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:51.094886 containerd[1439]: 2024-06-25 18:44:51.081 [INFO][5250] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:51.094886 containerd[1439]: 2024-06-25 18:44:51.081 [INFO][5250] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:44:51.094886 containerd[1439]: 2024-06-25 18:44:51.090 [WARNING][5250] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" HandleID="k8s-pod-network.800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Workload="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:51.094886 containerd[1439]: 2024-06-25 18:44:51.090 [INFO][5250] ipam_plugin.go 439: Releasing address using workloadID ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" HandleID="k8s-pod-network.800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Workload="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:51.094886 containerd[1439]: 2024-06-25 18:44:51.091 [INFO][5250] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:51.094886 containerd[1439]: 2024-06-25 18:44:51.093 [INFO][5241] k8s.go 621: Teardown processing complete. ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:51.095278 containerd[1439]: time="2024-06-25T18:44:51.094931383Z" level=info msg="TearDown network for sandbox \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\" successfully" Jun 25 18:44:51.095278 containerd[1439]: time="2024-06-25T18:44:51.094954543Z" level=info msg="StopPodSandbox for \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\" returns successfully" Jun 25 18:44:51.095449 containerd[1439]: time="2024-06-25T18:44:51.095410307Z" level=info msg="RemovePodSandbox for \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\"" Jun 25 18:44:51.095489 containerd[1439]: time="2024-06-25T18:44:51.095446267Z" level=info msg="Forcibly stopping sandbox \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\"" Jun 25 18:44:51.158721 containerd[1439]: 2024-06-25 18:44:51.127 [WARNING][5272] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--cgvbt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"320816e0-861f-4b3b-bda1-d52532a4e96c", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac25640a813e83ee280ab9d4717015676150e513ce0d980f509d2e11931c6874", Pod:"coredns-5dd5756b68-cgvbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56f4cce878a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:51.158721 containerd[1439]: 2024-06-25 18:44:51.127 [INFO][5272] k8s.go 608: Cleaning up netns 
ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:51.158721 containerd[1439]: 2024-06-25 18:44:51.127 [INFO][5272] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" iface="eth0" netns="" Jun 25 18:44:51.158721 containerd[1439]: 2024-06-25 18:44:51.128 [INFO][5272] k8s.go 615: Releasing IP address(es) ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:51.158721 containerd[1439]: 2024-06-25 18:44:51.128 [INFO][5272] utils.go 188: Calico CNI releasing IP address ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:51.158721 containerd[1439]: 2024-06-25 18:44:51.146 [INFO][5280] ipam_plugin.go 411: Releasing address using handleID ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" HandleID="k8s-pod-network.800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Workload="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:51.158721 containerd[1439]: 2024-06-25 18:44:51.146 [INFO][5280] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:51.158721 containerd[1439]: 2024-06-25 18:44:51.146 [INFO][5280] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:44:51.158721 containerd[1439]: 2024-06-25 18:44:51.154 [WARNING][5280] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" HandleID="k8s-pod-network.800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Workload="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:51.158721 containerd[1439]: 2024-06-25 18:44:51.154 [INFO][5280] ipam_plugin.go 439: Releasing address using workloadID ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" HandleID="k8s-pod-network.800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Workload="localhost-k8s-coredns--5dd5756b68--cgvbt-eth0" Jun 25 18:44:51.158721 containerd[1439]: 2024-06-25 18:44:51.155 [INFO][5280] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:51.158721 containerd[1439]: 2024-06-25 18:44:51.157 [INFO][5272] k8s.go 621: Teardown processing complete. ContainerID="800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b" Jun 25 18:44:51.159096 containerd[1439]: time="2024-06-25T18:44:51.158761879Z" level=info msg="TearDown network for sandbox \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\" successfully" Jun 25 18:44:51.161479 containerd[1439]: time="2024-06-25T18:44:51.161441621Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:44:51.161542 containerd[1439]: time="2024-06-25T18:44:51.161506782Z" level=info msg="RemovePodSandbox \"800d5b22b6f09c0f3ef4f10ac2717f3fd6010f5b9a21b0b366a950ef3bd7ae0b\" returns successfully" Jun 25 18:44:51.162049 containerd[1439]: time="2024-06-25T18:44:51.162027306Z" level=info msg="StopPodSandbox for \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\"" Jun 25 18:44:51.225555 containerd[1439]: 2024-06-25 18:44:51.193 [WARNING][5303] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--lwd6d-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a41210ea-56a1-4e5b-869d-62550a89978d", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9", Pod:"coredns-5dd5756b68-lwd6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ea737bfd34", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:51.225555 containerd[1439]: 2024-06-25 18:44:51.193 [INFO][5303] k8s.go 608: Cleaning up netns ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:51.225555 containerd[1439]: 2024-06-25 18:44:51.193 [INFO][5303] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" iface="eth0" netns="" Jun 25 18:44:51.225555 containerd[1439]: 2024-06-25 18:44:51.193 [INFO][5303] k8s.go 615: Releasing IP address(es) ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:51.225555 containerd[1439]: 2024-06-25 18:44:51.193 [INFO][5303] utils.go 188: Calico CNI releasing IP address ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:51.225555 containerd[1439]: 2024-06-25 18:44:51.211 [INFO][5311] ipam_plugin.go 411: Releasing address using handleID ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" HandleID="k8s-pod-network.d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Workload="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:51.225555 containerd[1439]: 2024-06-25 18:44:51.211 [INFO][5311] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:51.225555 containerd[1439]: 2024-06-25 18:44:51.211 [INFO][5311] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:44:51.225555 containerd[1439]: 2024-06-25 18:44:51.219 [WARNING][5311] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" HandleID="k8s-pod-network.d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Workload="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:51.225555 containerd[1439]: 2024-06-25 18:44:51.219 [INFO][5311] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" HandleID="k8s-pod-network.d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Workload="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:51.225555 containerd[1439]: 2024-06-25 18:44:51.222 [INFO][5311] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:51.225555 containerd[1439]: 2024-06-25 18:44:51.224 [INFO][5303] k8s.go 621: Teardown processing complete. ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:51.225944 containerd[1439]: time="2024-06-25T18:44:51.225584320Z" level=info msg="TearDown network for sandbox \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\" successfully" Jun 25 18:44:51.225944 containerd[1439]: time="2024-06-25T18:44:51.225606960Z" level=info msg="StopPodSandbox for \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\" returns successfully" Jun 25 18:44:51.226110 containerd[1439]: time="2024-06-25T18:44:51.226083004Z" level=info msg="RemovePodSandbox for \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\"" Jun 25 18:44:51.226157 containerd[1439]: time="2024-06-25T18:44:51.226119084Z" level=info msg="Forcibly stopping sandbox \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\"" Jun 25 18:44:51.289405 containerd[1439]: 2024-06-25 18:44:51.257 [WARNING][5333] k8s.go 572: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--lwd6d-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a41210ea-56a1-4e5b-869d-62550a89978d", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9605de794d0d38cb6cde21490df39b62d40e642544842885b6f22f5b4374dc9", Pod:"coredns-5dd5756b68-lwd6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ea737bfd34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:44:51.289405 containerd[1439]: 2024-06-25 18:44:51.257 
[INFO][5333] k8s.go 608: Cleaning up netns ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:51.289405 containerd[1439]: 2024-06-25 18:44:51.257 [INFO][5333] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" iface="eth0" netns="" Jun 25 18:44:51.289405 containerd[1439]: 2024-06-25 18:44:51.257 [INFO][5333] k8s.go 615: Releasing IP address(es) ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:51.289405 containerd[1439]: 2024-06-25 18:44:51.257 [INFO][5333] utils.go 188: Calico CNI releasing IP address ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:51.289405 containerd[1439]: 2024-06-25 18:44:51.276 [INFO][5340] ipam_plugin.go 411: Releasing address using handleID ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" HandleID="k8s-pod-network.d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Workload="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:51.289405 containerd[1439]: 2024-06-25 18:44:51.276 [INFO][5340] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:51.289405 containerd[1439]: 2024-06-25 18:44:51.276 [INFO][5340] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:44:51.289405 containerd[1439]: 2024-06-25 18:44:51.284 [WARNING][5340] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" HandleID="k8s-pod-network.d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Workload="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:51.289405 containerd[1439]: 2024-06-25 18:44:51.284 [INFO][5340] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" HandleID="k8s-pod-network.d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Workload="localhost-k8s-coredns--5dd5756b68--lwd6d-eth0" Jun 25 18:44:51.289405 containerd[1439]: 2024-06-25 18:44:51.285 [INFO][5340] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:51.289405 containerd[1439]: 2024-06-25 18:44:51.287 [INFO][5333] k8s.go 621: Teardown processing complete. ContainerID="d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540" Jun 25 18:44:51.289405 containerd[1439]: time="2024-06-25T18:44:51.289363935Z" level=info msg="TearDown network for sandbox \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\" successfully" Jun 25 18:44:51.291944 containerd[1439]: time="2024-06-25T18:44:51.291907797Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:44:51.292021 containerd[1439]: time="2024-06-25T18:44:51.291969117Z" level=info msg="RemovePodSandbox \"d72d2278fe82317dd383f3b51096ca3e4a51f645e5902abff064a3936f452540\" returns successfully" Jun 25 18:44:51.292405 containerd[1439]: time="2024-06-25T18:44:51.292372640Z" level=info msg="StopPodSandbox for \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\"" Jun 25 18:44:51.292486 containerd[1439]: time="2024-06-25T18:44:51.292443281Z" level=info msg="TearDown network for sandbox \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\" successfully" Jun 25 18:44:51.292514 containerd[1439]: time="2024-06-25T18:44:51.292485881Z" level=info msg="StopPodSandbox for \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\" returns successfully" Jun 25 18:44:51.292856 containerd[1439]: time="2024-06-25T18:44:51.292838564Z" level=info msg="RemovePodSandbox for \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\"" Jun 25 18:44:51.292888 containerd[1439]: time="2024-06-25T18:44:51.292861165Z" level=info msg="Forcibly stopping sandbox \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\"" Jun 25 18:44:51.292935 containerd[1439]: time="2024-06-25T18:44:51.292923245Z" level=info msg="TearDown network for sandbox \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\" successfully" Jun 25 18:44:51.295396 containerd[1439]: time="2024-06-25T18:44:51.295362106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:44:51.295470 containerd[1439]: time="2024-06-25T18:44:51.295423306Z" level=info msg="RemovePodSandbox \"6c893c8a1ac496261937a5e6a0bee5805d9ff2d9c672ecb42c06ac4ec465da2e\" returns successfully" Jun 25 18:44:51.295716 containerd[1439]: time="2024-06-25T18:44:51.295691068Z" level=info msg="StopPodSandbox for \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\"" Jun 25 18:44:51.295798 containerd[1439]: time="2024-06-25T18:44:51.295765069Z" level=info msg="TearDown network for sandbox \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\" successfully" Jun 25 18:44:51.295820 containerd[1439]: time="2024-06-25T18:44:51.295798829Z" level=info msg="StopPodSandbox for \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\" returns successfully" Jun 25 18:44:51.296143 containerd[1439]: time="2024-06-25T18:44:51.296121352Z" level=info msg="RemovePodSandbox for \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\"" Jun 25 18:44:51.296179 containerd[1439]: time="2024-06-25T18:44:51.296150232Z" level=info msg="Forcibly stopping sandbox \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\"" Jun 25 18:44:51.296242 containerd[1439]: time="2024-06-25T18:44:51.296228273Z" level=info msg="TearDown network for sandbox \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\" successfully" Jun 25 18:44:51.298527 containerd[1439]: time="2024-06-25T18:44:51.298492532Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:44:51.298586 containerd[1439]: time="2024-06-25T18:44:51.298540892Z" level=info msg="RemovePodSandbox \"dddf10f3abc3ce8f8490cdec5798f68f022d53d6f1b491bf2857331205c5744c\" returns successfully" Jun 25 18:44:51.385055 kubelet[2489]: E0625 18:44:51.385023 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:54.001544 systemd[1]: Started sshd@18-10.0.0.123:22-10.0.0.1:46828.service - OpenSSH per-connection server daemon (10.0.0.1:46828). Jun 25 18:44:54.053019 sshd[5382]: Accepted publickey for core from 10.0.0.1 port 46828 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:44:54.054810 sshd[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:54.066484 systemd-logind[1417]: New session 19 of user core. Jun 25 18:44:54.076227 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:44:54.201656 systemd[1]: run-containerd-runc-k8s.io-4357b97812f02e7e44708394bf6806a87226024e37d6af5c0e55a4b46983a2cf-runc.mvQGV4.mount: Deactivated successfully. Jun 25 18:44:54.230855 sshd[5382]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:54.235834 systemd[1]: sshd@18-10.0.0.123:22-10.0.0.1:46828.service: Deactivated successfully. Jun 25 18:44:54.237770 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:44:54.239041 systemd-logind[1417]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:44:54.244852 systemd-logind[1417]: Removed session 19. 
Jun 25 18:44:55.028483 kubelet[2489]: I0625 18:44:55.028440 2489 topology_manager.go:215] "Topology Admit Handler" podUID="9b523394-875a-4fdb-a0b9-ba631f8cc41d" podNamespace="calico-apiserver" podName="calico-apiserver-769df6456-zjrz4" Jun 25 18:44:55.039508 systemd[1]: Created slice kubepods-besteffort-pod9b523394_875a_4fdb_a0b9_ba631f8cc41d.slice - libcontainer container kubepods-besteffort-pod9b523394_875a_4fdb_a0b9_ba631f8cc41d.slice. Jun 25 18:44:55.141394 kubelet[2489]: I0625 18:44:55.141352 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmwjg\" (UniqueName: \"kubernetes.io/projected/9b523394-875a-4fdb-a0b9-ba631f8cc41d-kube-api-access-zmwjg\") pod \"calico-apiserver-769df6456-zjrz4\" (UID: \"9b523394-875a-4fdb-a0b9-ba631f8cc41d\") " pod="calico-apiserver/calico-apiserver-769df6456-zjrz4" Jun 25 18:44:55.141394 kubelet[2489]: I0625 18:44:55.141402 2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9b523394-875a-4fdb-a0b9-ba631f8cc41d-calico-apiserver-certs\") pod \"calico-apiserver-769df6456-zjrz4\" (UID: \"9b523394-875a-4fdb-a0b9-ba631f8cc41d\") " pod="calico-apiserver/calico-apiserver-769df6456-zjrz4" Jun 25 18:44:55.343708 containerd[1439]: time="2024-06-25T18:44:55.343563172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769df6456-zjrz4,Uid:9b523394-875a-4fdb-a0b9-ba631f8cc41d,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:44:55.471385 systemd-networkd[1379]: caliba5b5bb7d6d: Link UP Jun 25 18:44:55.471702 systemd-networkd[1379]: caliba5b5bb7d6d: Gained carrier Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.395 [INFO][5420] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--769df6456--zjrz4-eth0 calico-apiserver-769df6456- calico-apiserver 
9b523394-875a-4fdb-a0b9-ba631f8cc41d 1130 0 2024-06-25 18:44:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:769df6456 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-769df6456-zjrz4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliba5b5bb7d6d [] []}} ContainerID="ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" Namespace="calico-apiserver" Pod="calico-apiserver-769df6456-zjrz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--769df6456--zjrz4-" Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.395 [INFO][5420] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" Namespace="calico-apiserver" Pod="calico-apiserver-769df6456-zjrz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--769df6456--zjrz4-eth0" Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.421 [INFO][5434] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" HandleID="k8s-pod-network.ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" Workload="localhost-k8s-calico--apiserver--769df6456--zjrz4-eth0" Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.433 [INFO][5434] ipam_plugin.go 264: Auto assigning IP ContainerID="ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" HandleID="k8s-pod-network.ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" Workload="localhost-k8s-calico--apiserver--769df6456--zjrz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c6d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-769df6456-zjrz4", 
"timestamp":"2024-06-25 18:44:55.421876187 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.434 [INFO][5434] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.434 [INFO][5434] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.434 [INFO][5434] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.437 [INFO][5434] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" host="localhost" Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.442 [INFO][5434] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.448 [INFO][5434] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.450 [INFO][5434] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.453 [INFO][5434] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.453 [INFO][5434] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" host="localhost" Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.454 [INFO][5434] ipam.go 1685: Creating new handle: 
k8s-pod-network.ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410 Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.458 [INFO][5434] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" host="localhost" Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.463 [INFO][5434] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" host="localhost" Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.463 [INFO][5434] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" host="localhost" Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.463 [INFO][5434] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:44:55.483592 containerd[1439]: 2024-06-25 18:44:55.463 [INFO][5434] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" HandleID="k8s-pod-network.ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" Workload="localhost-k8s-calico--apiserver--769df6456--zjrz4-eth0" Jun 25 18:44:55.484750 containerd[1439]: 2024-06-25 18:44:55.468 [INFO][5420] k8s.go 386: Populated endpoint ContainerID="ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" Namespace="calico-apiserver" Pod="calico-apiserver-769df6456-zjrz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--769df6456--zjrz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--769df6456--zjrz4-eth0", GenerateName:"calico-apiserver-769df6456-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"9b523394-875a-4fdb-a0b9-ba631f8cc41d", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769df6456", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-769df6456-zjrz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba5b5bb7d6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 25 18:44:55.484750 containerd[1439]: 2024-06-25 18:44:55.468 [INFO][5420] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" Namespace="calico-apiserver" Pod="calico-apiserver-769df6456-zjrz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--769df6456--zjrz4-eth0"
Jun 25 18:44:55.484750 containerd[1439]: 2024-06-25 18:44:55.468 [INFO][5420] dataplane_linux.go 68: Setting the host side veth name to caliba5b5bb7d6d ContainerID="ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" Namespace="calico-apiserver" Pod="calico-apiserver-769df6456-zjrz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--769df6456--zjrz4-eth0"
Jun 25 18:44:55.484750 containerd[1439]: 2024-06-25 18:44:55.472 [INFO][5420] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" Namespace="calico-apiserver" Pod="calico-apiserver-769df6456-zjrz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--769df6456--zjrz4-eth0"
Jun 25 18:44:55.484750 containerd[1439]: 2024-06-25 18:44:55.472 [INFO][5420] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" Namespace="calico-apiserver" Pod="calico-apiserver-769df6456-zjrz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--769df6456--zjrz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--769df6456--zjrz4-eth0", GenerateName:"calico-apiserver-769df6456-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b523394-875a-4fdb-a0b9-ba631f8cc41d", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769df6456", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410", Pod:"calico-apiserver-769df6456-zjrz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba5b5bb7d6d", MAC:"e6:43:1f:9c:93:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 25 18:44:55.484750 containerd[1439]: 2024-06-25 18:44:55.481 [INFO][5420] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410" Namespace="calico-apiserver" Pod="calico-apiserver-769df6456-zjrz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--769df6456--zjrz4-eth0"
Jun 25 18:44:55.508442 containerd[1439]: time="2024-06-25T18:44:55.508336751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:44:55.508442 containerd[1439]: time="2024-06-25T18:44:55.508397712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:44:55.508442 containerd[1439]: time="2024-06-25T18:44:55.508416232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:44:55.508442 containerd[1439]: time="2024-06-25T18:44:55.508429352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:44:55.523864 systemd[1]: Started cri-containerd-ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410.scope - libcontainer container ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410.
Jun 25 18:44:55.533471 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jun 25 18:44:55.552005 containerd[1439]: time="2024-06-25T18:44:55.551967338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769df6456-zjrz4,Uid:9b523394-875a-4fdb-a0b9-ba631f8cc41d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410\""
Jun 25 18:44:55.554683 containerd[1439]: time="2024-06-25T18:44:55.554329930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jun 25 18:44:57.193123 containerd[1439]: time="2024-06-25T18:44:57.193078597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:57.194238 containerd[1439]: time="2024-06-25T18:44:57.194203612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527"
Jun 25 18:44:57.195153 containerd[1439]: time="2024-06-25T18:44:57.195133904Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:57.197172 containerd[1439]: time="2024-06-25T18:44:57.197121930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:57.197930 containerd[1439]: time="2024-06-25T18:44:57.197891860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 1.64352481s"
Jun 25 18:44:57.197930 containerd[1439]: time="2024-06-25T18:44:57.197928861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\""
Jun 25 18:44:57.200633 containerd[1439]: time="2024-06-25T18:44:57.200605536Z" level=info msg="CreateContainer within sandbox \"ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jun 25 18:44:57.212489 containerd[1439]: time="2024-06-25T18:44:57.212447331Z" level=info msg="CreateContainer within sandbox \"ee0a21b88941223c6aaeef5fced7797011a3cfd0317f8b2c358e476c3f5cc410\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"71a73956efb456b555460450cf6b048884a98ea2d095784db2369a865fae64b8\""
Jun 25 18:44:57.213072 containerd[1439]: time="2024-06-25T18:44:57.212957138Z" level=info msg="StartContainer for \"71a73956efb456b555460450cf6b048884a98ea2d095784db2369a865fae64b8\""
Jun 25 18:44:57.218423 systemd-networkd[1379]: caliba5b5bb7d6d: Gained IPv6LL
Jun 25 18:44:57.240822 systemd[1]: Started cri-containerd-71a73956efb456b555460450cf6b048884a98ea2d095784db2369a865fae64b8.scope - libcontainer container 71a73956efb456b555460450cf6b048884a98ea2d095784db2369a865fae64b8.
Jun 25 18:44:57.280285 containerd[1439]: time="2024-06-25T18:44:57.280242701Z" level=info msg="StartContainer for \"71a73956efb456b555460450cf6b048884a98ea2d095784db2369a865fae64b8\" returns successfully"
Jun 25 18:44:57.993106 kubelet[2489]: I0625 18:44:57.993049 2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-769df6456-zjrz4" podStartSLOduration=1.348896519 podCreationTimestamp="2024-06-25 18:44:55 +0000 UTC" firstStartedPulling="2024-06-25 18:44:55.554030966 +0000 UTC m=+64.876001189" lastFinishedPulling="2024-06-25 18:44:57.198143904 +0000 UTC m=+66.520114127" observedRunningTime="2024-06-25 18:44:57.991574358 +0000 UTC m=+67.313544581" watchObservedRunningTime="2024-06-25 18:44:57.993009457 +0000 UTC m=+67.314979680"
Jun 25 18:44:59.253983 systemd[1]: Started sshd@19-10.0.0.123:22-10.0.0.1:40966.service - OpenSSH per-connection server daemon (10.0.0.1:40966).
Jun 25 18:44:59.300628 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 40966 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A
Jun 25 18:44:59.302143 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:44:59.306730 systemd-logind[1417]: New session 20 of user core.
Jun 25 18:44:59.316841 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 25 18:44:59.442768 sshd[5550]: pam_unix(sshd:session): session closed for user core
Jun 25 18:44:59.446953 systemd[1]: sshd@19-10.0.0.123:22-10.0.0.1:40966.service: Deactivated successfully.
Jun 25 18:44:59.448873 systemd[1]: session-20.scope: Deactivated successfully.
Jun 25 18:44:59.451707 systemd-logind[1417]: Session 20 logged out. Waiting for processes to exit.
Jun 25 18:44:59.453544 systemd-logind[1417]: Removed session 20.