Nov 12 17:56:58.900953 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 12 17:56:58.900973 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Nov 12 16:24:35 -00 2024
Nov 12 17:56:58.900982 kernel: KASLR enabled
Nov 12 17:56:58.900988 kernel: efi: EFI v2.7 by EDK II
Nov 12 17:56:58.900994 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Nov 12 17:56:58.901000 kernel: random: crng init done
Nov 12 17:56:58.901007 kernel: ACPI: Early table checksum verification disabled
Nov 12 17:56:58.901013 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Nov 12 17:56:58.901019 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 12 17:56:58.901026 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:56:58.901033 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:56:58.901039 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:56:58.901045 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:56:58.901051 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:56:58.901058 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:56:58.901066 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:56:58.901072 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:56:58.901079 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 17:56:58.901085 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 12 17:56:58.901091 kernel: NUMA: Failed to initialise from firmware
Nov 12 17:56:58.901098 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 17:56:58.901104 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Nov 12 17:56:58.901110 kernel: Zone ranges:
Nov 12 17:56:58.901116 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 17:56:58.901123 kernel: DMA32 empty
Nov 12 17:56:58.901130 kernel: Normal empty
Nov 12 17:56:58.901136 kernel: Movable zone start for each node
Nov 12 17:56:58.901142 kernel: Early memory node ranges
Nov 12 17:56:58.901149 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Nov 12 17:56:58.901155 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Nov 12 17:56:58.901179 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Nov 12 17:56:58.901186 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Nov 12 17:56:58.901192 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Nov 12 17:56:58.901199 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Nov 12 17:56:58.901205 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 12 17:56:58.901211 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 17:56:58.901218 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 12 17:56:58.901226 kernel: psci: probing for conduit method from ACPI.
Nov 12 17:56:58.901232 kernel: psci: PSCIv1.1 detected in firmware.
Nov 12 17:56:58.901239 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 12 17:56:58.901247 kernel: psci: Trusted OS migration not required
Nov 12 17:56:58.901254 kernel: psci: SMC Calling Convention v1.1
Nov 12 17:56:58.901261 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 12 17:56:58.901269 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Nov 12 17:56:58.901276 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Nov 12 17:56:58.901283 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 12 17:56:58.901290 kernel: Detected PIPT I-cache on CPU0
Nov 12 17:56:58.901297 kernel: CPU features: detected: GIC system register CPU interface
Nov 12 17:56:58.901303 kernel: CPU features: detected: Hardware dirty bit management
Nov 12 17:56:58.901310 kernel: CPU features: detected: Spectre-v4
Nov 12 17:56:58.901317 kernel: CPU features: detected: Spectre-BHB
Nov 12 17:56:58.901323 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 12 17:56:58.901330 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 12 17:56:58.901338 kernel: CPU features: detected: ARM erratum 1418040
Nov 12 17:56:58.901345 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 12 17:56:58.901351 kernel: alternatives: applying boot alternatives
Nov 12 17:56:58.901359 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e
Nov 12 17:56:58.901367 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 17:56:58.901373 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 17:56:58.901380 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 17:56:58.901387 kernel: Fallback order for Node 0: 0
Nov 12 17:56:58.901394 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Nov 12 17:56:58.901400 kernel: Policy zone: DMA
Nov 12 17:56:58.901407 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 17:56:58.901415 kernel: software IO TLB: area num 4.
Nov 12 17:56:58.901422 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Nov 12 17:56:58.901429 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Nov 12 17:56:58.901436 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 17:56:58.901442 kernel: trace event string verifier disabled
Nov 12 17:56:58.901449 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 17:56:58.901456 kernel: rcu: RCU event tracing is enabled.
Nov 12 17:56:58.901463 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 17:56:58.901470 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 17:56:58.901477 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 17:56:58.901484 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 17:56:58.901491 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 17:56:58.901499 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 12 17:56:58.901506 kernel: GICv3: 256 SPIs implemented
Nov 12 17:56:58.901513 kernel: GICv3: 0 Extended SPIs implemented
Nov 12 17:56:58.901519 kernel: Root IRQ handler: gic_handle_irq
Nov 12 17:56:58.901526 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 12 17:56:58.901533 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 12 17:56:58.901539 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 12 17:56:58.901546 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Nov 12 17:56:58.901554 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Nov 12 17:56:58.901560 kernel: GICv3: using LPI property table @0x00000000400f0000
Nov 12 17:56:58.901567 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Nov 12 17:56:58.901575 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 17:56:58.901582 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 17:56:58.901593 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 12 17:56:58.901601 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 12 17:56:58.901607 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 12 17:56:58.901614 kernel: arm-pv: using stolen time PV
Nov 12 17:56:58.901622 kernel: Console: colour dummy device 80x25
Nov 12 17:56:58.901629 kernel: ACPI: Core revision 20230628
Nov 12 17:56:58.901636 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 12 17:56:58.901643 kernel: pid_max: default: 32768 minimum: 301
Nov 12 17:56:58.901651 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 17:56:58.901658 kernel: landlock: Up and running.
Nov 12 17:56:58.901665 kernel: SELinux: Initializing.
Nov 12 17:56:58.901672 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 17:56:58.901679 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 17:56:58.901687 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 17:56:58.901694 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 17:56:58.901700 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 17:56:58.901708 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 17:56:58.901716 kernel: Platform MSI: ITS@0x8080000 domain created
Nov 12 17:56:58.901723 kernel: PCI/MSI: ITS@0x8080000 domain created
Nov 12 17:56:58.901729 kernel: Remapping and enabling EFI services.
Nov 12 17:56:58.901736 kernel: smp: Bringing up secondary CPUs ...
Nov 12 17:56:58.901743 kernel: Detected PIPT I-cache on CPU1
Nov 12 17:56:58.901750 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 12 17:56:58.901757 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Nov 12 17:56:58.901764 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 17:56:58.901771 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 12 17:56:58.901778 kernel: Detected PIPT I-cache on CPU2
Nov 12 17:56:58.901787 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 12 17:56:58.901794 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Nov 12 17:56:58.901805 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 17:56:58.901813 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 12 17:56:58.901820 kernel: Detected PIPT I-cache on CPU3
Nov 12 17:56:58.901828 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 12 17:56:58.901835 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Nov 12 17:56:58.901842 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 17:56:58.901850 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 12 17:56:58.901858 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 17:56:58.901865 kernel: SMP: Total of 4 processors activated.
Nov 12 17:56:58.901872 kernel: CPU features: detected: 32-bit EL0 Support
Nov 12 17:56:58.901880 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 12 17:56:58.901887 kernel: CPU features: detected: Common not Private translations
Nov 12 17:56:58.901894 kernel: CPU features: detected: CRC32 instructions
Nov 12 17:56:58.901902 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 12 17:56:58.901909 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 12 17:56:58.901917 kernel: CPU features: detected: LSE atomic instructions
Nov 12 17:56:58.901925 kernel: CPU features: detected: Privileged Access Never
Nov 12 17:56:58.901932 kernel: CPU features: detected: RAS Extension Support
Nov 12 17:56:58.901939 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 12 17:56:58.901947 kernel: CPU: All CPU(s) started at EL1
Nov 12 17:56:58.901954 kernel: alternatives: applying system-wide alternatives
Nov 12 17:56:58.901961 kernel: devtmpfs: initialized
Nov 12 17:56:58.901969 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 17:56:58.901976 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 17:56:58.901985 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 17:56:58.901992 kernel: SMBIOS 3.0.0 present.
Nov 12 17:56:58.901999 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Nov 12 17:56:58.902006 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 17:56:58.902014 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 12 17:56:58.902021 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 12 17:56:58.902029 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 12 17:56:58.902036 kernel: audit: initializing netlink subsys (disabled)
Nov 12 17:56:58.902044 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Nov 12 17:56:58.902052 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 17:56:58.902059 kernel: cpuidle: using governor menu
Nov 12 17:56:58.902067 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 12 17:56:58.902074 kernel: ASID allocator initialised with 32768 entries
Nov 12 17:56:58.902081 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 17:56:58.902089 kernel: Serial: AMBA PL011 UART driver
Nov 12 17:56:58.902096 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 12 17:56:58.902103 kernel: Modules: 0 pages in range for non-PLT usage
Nov 12 17:56:58.902111 kernel: Modules: 509040 pages in range for PLT usage
Nov 12 17:56:58.902119 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 17:56:58.902127 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 17:56:58.902134 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 12 17:56:58.902141 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 12 17:56:58.902149 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 17:56:58.902156 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 17:56:58.902217 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 12 17:56:58.902225 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 12 17:56:58.902232 kernel: ACPI: Added _OSI(Module Device)
Nov 12 17:56:58.902242 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 17:56:58.902249 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 17:56:58.902256 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 17:56:58.902263 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 17:56:58.902270 kernel: ACPI: Interpreter enabled
Nov 12 17:56:58.902278 kernel: ACPI: Using GIC for interrupt routing
Nov 12 17:56:58.902285 kernel: ACPI: MCFG table detected, 1 entries
Nov 12 17:56:58.902292 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 12 17:56:58.902299 kernel: printk: console [ttyAMA0] enabled
Nov 12 17:56:58.902308 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 17:56:58.902433 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 17:56:58.902504 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 12 17:56:58.902568 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 12 17:56:58.902646 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 12 17:56:58.902711 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 12 17:56:58.902720 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 12 17:56:58.902731 kernel: PCI host bridge to bus 0000:00
Nov 12 17:56:58.902815 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 12 17:56:58.902875 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 12 17:56:58.902934 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 12 17:56:58.902991 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 17:56:58.903067 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Nov 12 17:56:58.903141 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 17:56:58.903223 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Nov 12 17:56:58.903289 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Nov 12 17:56:58.903352 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 12 17:56:58.903417 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 12 17:56:58.903481 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Nov 12 17:56:58.903545 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Nov 12 17:56:58.903617 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 12 17:56:58.903679 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 12 17:56:58.903735 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 12 17:56:58.903745 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 12 17:56:58.903753 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 12 17:56:58.903760 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 12 17:56:58.903768 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 12 17:56:58.903775 kernel: iommu: Default domain type: Translated
Nov 12 17:56:58.903782 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 12 17:56:58.903791 kernel: efivars: Registered efivars operations
Nov 12 17:56:58.903798 kernel: vgaarb: loaded
Nov 12 17:56:58.903806 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 12 17:56:58.903813 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 17:56:58.903820 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 17:56:58.903828 kernel: pnp: PnP ACPI init
Nov 12 17:56:58.903900 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 12 17:56:58.903910 kernel: pnp: PnP ACPI: found 1 devices
Nov 12 17:56:58.903919 kernel: NET: Registered PF_INET protocol family
Nov 12 17:56:58.903927 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 17:56:58.903934 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 17:56:58.903941 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 17:56:58.903949 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 17:56:58.903956 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 17:56:58.903964 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 17:56:58.903971 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 17:56:58.903979 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 17:56:58.903987 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 17:56:58.903994 kernel: PCI: CLS 0 bytes, default 64
Nov 12 17:56:58.904002 kernel: kvm [1]: HYP mode not available
Nov 12 17:56:58.904020 kernel: Initialise system trusted keyrings
Nov 12 17:56:58.904027 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 17:56:58.904034 kernel: Key type asymmetric registered
Nov 12 17:56:58.904042 kernel: Asymmetric key parser 'x509' registered
Nov 12 17:56:58.904049 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 12 17:56:58.904056 kernel: io scheduler mq-deadline registered
Nov 12 17:56:58.904065 kernel: io scheduler kyber registered
Nov 12 17:56:58.904072 kernel: io scheduler bfq registered
Nov 12 17:56:58.904080 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 12 17:56:58.904087 kernel: ACPI: button: Power Button [PWRB]
Nov 12 17:56:58.904095 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 12 17:56:58.904192 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 12 17:56:58.904204 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 17:56:58.904211 kernel: thunder_xcv, ver 1.0
Nov 12 17:56:58.904219 kernel: thunder_bgx, ver 1.0
Nov 12 17:56:58.904229 kernel: nicpf, ver 1.0
Nov 12 17:56:58.904236 kernel: nicvf, ver 1.0
Nov 12 17:56:58.904316 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 12 17:56:58.904379 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-11-12T17:56:58 UTC (1731434218)
Nov 12 17:56:58.904388 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 12 17:56:58.904396 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Nov 12 17:56:58.904403 kernel: watchdog: Delayed init of the lockup detector failed: -19
Nov 12 17:56:58.904411 kernel: watchdog: Hard watchdog permanently disabled
Nov 12 17:56:58.904420 kernel: NET: Registered PF_INET6 protocol family
Nov 12 17:56:58.904428 kernel: Segment Routing with IPv6
Nov 12 17:56:58.904435 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 17:56:58.904442 kernel: NET: Registered PF_PACKET protocol family
Nov 12 17:56:58.904449 kernel: Key type dns_resolver registered
Nov 12 17:56:58.904457 kernel: registered taskstats version 1
Nov 12 17:56:58.904464 kernel: Loading compiled-in X.509 certificates
Nov 12 17:56:58.904471 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 277bea35d8d47c9841f307ab609d4271c3622dcb'
Nov 12 17:56:58.904478 kernel: Key type .fscrypt registered
Nov 12 17:56:58.904487 kernel: Key type fscrypt-provisioning registered
Nov 12 17:56:58.904494 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 17:56:58.904501 kernel: ima: Allocated hash algorithm: sha1
Nov 12 17:56:58.904509 kernel: ima: No architecture policies found
Nov 12 17:56:58.904516 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 12 17:56:58.904523 kernel: clk: Disabling unused clocks
Nov 12 17:56:58.904531 kernel: Freeing unused kernel memory: 39360K
Nov 12 17:56:58.904538 kernel: Run /init as init process
Nov 12 17:56:58.904545 kernel: with arguments:
Nov 12 17:56:58.904554 kernel: /init
Nov 12 17:56:58.904561 kernel: with environment:
Nov 12 17:56:58.904568 kernel: HOME=/
Nov 12 17:56:58.904575 kernel: TERM=linux
Nov 12 17:56:58.904583 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 17:56:58.904598 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 17:56:58.904608 systemd[1]: Detected virtualization kvm.
Nov 12 17:56:58.904616 systemd[1]: Detected architecture arm64.
Nov 12 17:56:58.904626 systemd[1]: Running in initrd.
Nov 12 17:56:58.904634 systemd[1]: No hostname configured, using default hostname.
Nov 12 17:56:58.904641 systemd[1]: Hostname set to <localhost>.
Nov 12 17:56:58.904650 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 17:56:58.904658 systemd[1]: Queued start job for default target initrd.target.
Nov 12 17:56:58.904666 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 17:56:58.904674 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 17:56:58.904682 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 17:56:58.904692 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 17:56:58.904700 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 17:56:58.904708 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 17:56:58.904722 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 17:56:58.904730 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 17:56:58.904738 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 17:56:58.904748 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:56:58.904756 systemd[1]: Reached target paths.target - Path Units.
Nov 12 17:56:58.904764 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 17:56:58.904772 systemd[1]: Reached target swap.target - Swaps.
Nov 12 17:56:58.904782 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 17:56:58.904790 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 17:56:58.904798 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 17:56:58.904807 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 17:56:58.904817 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 17:56:58.904828 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 17:56:58.904836 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 17:56:58.904844 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 17:56:58.904853 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 17:56:58.904861 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 17:56:58.904869 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 17:56:58.904877 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 17:56:58.904886 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 17:56:58.904894 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 17:56:58.904904 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 17:56:58.904913 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:56:58.904924 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 17:56:58.904932 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:56:58.904942 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 17:56:58.904954 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 17:56:58.904962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:56:58.904971 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:56:58.904979 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 17:56:58.905002 systemd-journald[238]: Collecting audit messages is disabled.
Nov 12 17:56:58.905023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 17:56:58.905032 systemd-journald[238]: Journal started
Nov 12 17:56:58.905050 systemd-journald[238]: Runtime Journal (/run/log/journal/88bb36f75067491f913f25561f26ae4f) is 5.9M, max 47.3M, 41.4M free.
Nov 12 17:56:58.893769 systemd-modules-load[239]: Inserted module 'overlay'
Nov 12 17:56:58.906488 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 17:56:58.911190 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 17:56:58.911084 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 17:56:58.913308 systemd-modules-load[239]: Inserted module 'br_netfilter'
Nov 12 17:56:58.914207 kernel: Bridge firewalling registered
Nov 12 17:56:58.915192 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 17:56:58.916226 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 17:56:58.919320 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 17:56:58.921847 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 17:56:58.923899 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:56:58.926703 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 17:56:58.932459 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 17:56:58.935106 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 17:56:58.939263 dracut-cmdline[274]: dracut-dracut-053
Nov 12 17:56:58.942181 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e
Nov 12 17:56:58.958730 systemd-resolved[282]: Positive Trust Anchors:
Nov 12 17:56:58.958747 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 17:56:58.958779 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 17:56:58.963548 systemd-resolved[282]: Defaulting to hostname 'linux'.
Nov 12 17:56:58.966325 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 17:56:58.967422 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:56:59.002187 kernel: SCSI subsystem initialized
Nov 12 17:56:59.007183 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 17:56:59.016223 kernel: iscsi: registered transport (tcp)
Nov 12 17:56:59.031184 kernel: iscsi: registered transport (qla4xxx)
Nov 12 17:56:59.031207 kernel: QLogic iSCSI HBA Driver
Nov 12 17:56:59.072281 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 17:56:59.083359 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 17:56:59.100878 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 17:56:59.100917 kernel: device-mapper: uevent: version 1.0.3
Nov 12 17:56:59.100938 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 17:56:59.145181 kernel: raid6: neonx8 gen() 15711 MB/s
Nov 12 17:56:59.162179 kernel: raid6: neonx4 gen() 15568 MB/s
Nov 12 17:56:59.179169 kernel: raid6: neonx2 gen() 13158 MB/s
Nov 12 17:56:59.196172 kernel: raid6: neonx1 gen() 10409 MB/s
Nov 12 17:56:59.213174 kernel: raid6: int64x8 gen() 6925 MB/s
Nov 12 17:56:59.230170 kernel: raid6: int64x4 gen() 7297 MB/s
Nov 12 17:56:59.247170 kernel: raid6: int64x2 gen() 6090 MB/s
Nov 12 17:56:59.264171 kernel: raid6: int64x1 gen() 5028 MB/s
Nov 12 17:56:59.264184 kernel: raid6: using algorithm neonx8 gen() 15711 MB/s
Nov 12 17:56:59.281187 kernel: raid6: .... xor() 11885 MB/s, rmw enabled
Nov 12 17:56:59.281213 kernel: raid6: using neon recovery algorithm
Nov 12 17:56:59.286178 kernel: xor: measuring software checksum speed
Nov 12 17:56:59.286192 kernel: 8regs : 19177 MB/sec
Nov 12 17:56:59.287542 kernel: 32regs : 18288 MB/sec
Nov 12 17:56:59.287569 kernel: arm64_neon : 26014 MB/sec
Nov 12 17:56:59.287596 kernel: xor: using function: arm64_neon (26014 MB/sec)
Nov 12 17:56:59.338141 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 17:56:59.349210 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 17:56:59.365305 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 17:56:59.376058 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Nov 12 17:56:59.379122 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 17:56:59.382252 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 17:56:59.397428 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Nov 12 17:56:59.421971 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 17:56:59.437281 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 17:56:59.475027 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 17:56:59.486323 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 17:56:59.498349 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 17:56:59.499483 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 17:56:59.500870 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:56:59.502454 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 17:56:59.509288 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 17:56:59.517875 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 17:56:59.520181 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 12 17:56:59.534251 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 17:56:59.534398 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 17:56:59.534411 kernel: GPT:9289727 != 19775487
Nov 12 17:56:59.534420 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 17:56:59.534430 kernel: GPT:9289727 != 19775487
Nov 12 17:56:59.534444 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 17:56:59.534454 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 17:56:59.530505 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 17:56:59.530620 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:56:59.532037 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:56:59.533656 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 17:56:59.533793 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:56:59.534789 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:56:59.546482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:56:59.549408 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (503)
Nov 12 17:56:59.549434 kernel: BTRFS: device fsid 93a9d474-e751-47b7-a65f-e39ca9abd47a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (505)
Nov 12 17:56:59.557994 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 17:56:59.559939 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:56:59.570697 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 17:56:59.575782 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 17:56:59.579389 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 17:56:59.580261 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 17:56:59.591286 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 17:56:59.592722 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:56:59.599418 disk-uuid[550]: Primary Header is updated.
Nov 12 17:56:59.599418 disk-uuid[550]: Secondary Entries is updated.
Nov 12 17:56:59.599418 disk-uuid[550]: Secondary Header is updated.
Nov 12 17:56:59.604181 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 17:56:59.615639 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:57:00.619939 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 17:57:00.620004 disk-uuid[552]: The operation has completed successfully.
Nov 12 17:57:00.644836 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 17:57:00.644951 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 17:57:00.656349 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 17:57:00.660026 sh[575]: Success
Nov 12 17:57:00.675819 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 12 17:57:00.704570 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 17:57:00.716399 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 17:57:00.720197 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 17:57:00.727728 kernel: BTRFS info (device dm-0): first mount of filesystem 93a9d474-e751-47b7-a65f-e39ca9abd47a
Nov 12 17:57:00.727758 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:57:00.727769 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 17:57:00.728495 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 17:57:00.729602 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 17:57:00.732817 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 17:57:00.733902 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 17:57:00.742325 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 17:57:00.743634 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 17:57:00.750686 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:57:00.750725 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:57:00.750736 kernel: BTRFS info (device vda6): using free space tree
Nov 12 17:57:00.753243 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 17:57:00.759281 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 17:57:00.760550 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:57:00.766566 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 17:57:00.772336 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 17:57:00.837292 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 17:57:00.846301 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 17:57:00.867211 systemd-networkd[768]: lo: Link UP
Nov 12 17:57:00.867224 systemd-networkd[768]: lo: Gained carrier
Nov 12 17:57:00.867913 systemd-networkd[768]: Enumeration completed
Nov 12 17:57:00.868432 ignition[663]: Ignition 2.19.0
Nov 12 17:57:00.868221 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 17:57:00.868439 ignition[663]: Stage: fetch-offline
Nov 12 17:57:00.868642 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:57:00.868472 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:57:00.868646 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 17:57:00.868480 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:57:00.869710 systemd-networkd[768]: eth0: Link UP
Nov 12 17:57:00.868630 ignition[663]: parsed url from cmdline: ""
Nov 12 17:57:00.869713 systemd-networkd[768]: eth0: Gained carrier
Nov 12 17:57:00.868634 ignition[663]: no config URL provided
Nov 12 17:57:00.869720 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:57:00.868638 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 17:57:00.870238 systemd[1]: Reached target network.target - Network.
Nov 12 17:57:00.868645 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Nov 12 17:57:00.886197 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 17:57:00.868664 ignition[663]: op(1): [started] loading QEMU firmware config module
Nov 12 17:57:00.868669 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 17:57:00.876532 ignition[663]: op(1): [finished] loading QEMU firmware config module
Nov 12 17:57:00.922988 ignition[663]: parsing config with SHA512: 2a34350a2a1c3b6f260a7cc06c10619a5c060426b8eb31777675d38dbda6d432faaeb004c87569274ac56e09af8dccede76f5a8a970c44411bafec98c39790f5
Nov 12 17:57:00.930205 unknown[663]: fetched base config from "system"
Nov 12 17:57:00.930217 unknown[663]: fetched user config from "qemu"
Nov 12 17:57:00.930671 ignition[663]: fetch-offline: fetch-offline passed
Nov 12 17:57:00.932634 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 17:57:00.930737 ignition[663]: Ignition finished successfully
Nov 12 17:57:00.934454 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 17:57:00.941380 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 17:57:00.951467 ignition[774]: Ignition 2.19.0
Nov 12 17:57:00.951477 ignition[774]: Stage: kargs
Nov 12 17:57:00.951640 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:57:00.951649 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:57:00.952457 ignition[774]: kargs: kargs passed
Nov 12 17:57:00.954941 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 17:57:00.952497 ignition[774]: Ignition finished successfully
Nov 12 17:57:00.956877 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 17:57:00.968909 ignition[783]: Ignition 2.19.0
Nov 12 17:57:00.968919 ignition[783]: Stage: disks
Nov 12 17:57:00.969067 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:57:00.969077 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:57:00.969891 ignition[783]: disks: disks passed
Nov 12 17:57:00.972236 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 17:57:00.969932 ignition[783]: Ignition finished successfully
Nov 12 17:57:00.973761 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 17:57:00.974844 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 17:57:00.976433 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 17:57:00.977707 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 17:57:00.979283 systemd[1]: Reached target basic.target - Basic System.
Nov 12 17:57:00.985292 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 17:57:00.994570 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 17:57:00.997942 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 17:57:01.000098 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 17:57:01.045177 kernel: EXT4-fs (vda9): mounted filesystem b3af0fd7-3c7c-4cdc-9b88-dae3d10ea922 r/w with ordered data mode. Quota mode: none.
Nov 12 17:57:01.045708 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 17:57:01.046735 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 17:57:01.059244 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 17:57:01.060760 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 17:57:01.061944 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 17:57:01.061983 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 17:57:01.071138 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Nov 12 17:57:01.071171 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:57:01.071191 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:57:01.071202 kernel: BTRFS info (device vda6): using free space tree
Nov 12 17:57:01.071211 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 17:57:01.062006 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 17:57:01.068875 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 17:57:01.072560 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 17:57:01.074312 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 17:57:01.116138 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 17:57:01.120221 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Nov 12 17:57:01.124646 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 17:57:01.128272 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 17:57:01.192192 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 17:57:01.201262 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 17:57:01.203605 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 17:57:01.208175 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:57:01.223147 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 17:57:01.225495 ignition[914]: INFO : Ignition 2.19.0
Nov 12 17:57:01.225495 ignition[914]: INFO : Stage: mount
Nov 12 17:57:01.226753 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:57:01.226753 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:57:01.226753 ignition[914]: INFO : mount: mount passed
Nov 12 17:57:01.226753 ignition[914]: INFO : Ignition finished successfully
Nov 12 17:57:01.228048 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 17:57:01.238294 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 17:57:01.727012 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 17:57:01.742324 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 17:57:01.748542 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928)
Nov 12 17:57:01.748579 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:57:01.748591 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:57:01.749210 kernel: BTRFS info (device vda6): using free space tree
Nov 12 17:57:01.752181 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 17:57:01.752633 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 17:57:01.767671 ignition[945]: INFO : Ignition 2.19.0
Nov 12 17:57:01.767671 ignition[945]: INFO : Stage: files
Nov 12 17:57:01.768903 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:57:01.768903 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:57:01.768903 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 17:57:01.771335 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 17:57:01.771335 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 17:57:01.773732 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 17:57:01.774765 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 17:57:01.774765 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 17:57:01.774250 unknown[945]: wrote ssh authorized keys file for user: core
Nov 12 17:57:01.777540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 17:57:01.777540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Nov 12 17:57:02.228035 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 17:57:02.335019 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 17:57:02.336482 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 17:57:02.336482 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 17:57:02.336482 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 17:57:02.336482 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 17:57:02.336482 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 17:57:02.336482 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 17:57:02.336482 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 17:57:02.336482 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 17:57:02.346294 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 17:57:02.346294 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 17:57:02.346294 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Nov 12 17:57:02.346294 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Nov 12 17:57:02.346294 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Nov 12 17:57:02.346294 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Nov 12 17:57:02.642841 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 12 17:57:02.694263 systemd-networkd[768]: eth0: Gained IPv6LL
Nov 12 17:57:02.895422 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Nov 12 17:57:02.895422 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 12 17:57:02.898483 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 17:57:02.899887 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 17:57:02.899887 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 12 17:57:02.899887 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 12 17:57:02.899887 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 17:57:02.899887 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 17:57:02.899887 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 12 17:57:02.899887 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 12 17:57:02.929503 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 17:57:02.933118 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 17:57:02.935357 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 12 17:57:02.935357 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 17:57:02.935357 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 17:57:02.935357 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 17:57:02.935357 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 17:57:02.935357 ignition[945]: INFO : files: files passed
Nov 12 17:57:02.935357 ignition[945]: INFO : Ignition finished successfully
Nov 12 17:57:02.935757 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 17:57:02.951360 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 17:57:02.953973 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 17:57:02.959795 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 17:57:02.959892 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 17:57:02.963682 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 12 17:57:02.966551 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:57:02.966551 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:57:02.969739 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:57:02.970079 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 17:57:02.971962 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 17:57:02.983327 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 17:57:03.002555 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 17:57:03.002697 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 17:57:03.004852 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 17:57:03.006138 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 17:57:03.007734 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 17:57:03.008592 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 17:57:03.023321 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 17:57:03.025848 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 17:57:03.037723 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:57:03.038702 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:57:03.040733 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 17:57:03.042297 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 17:57:03.042412 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 17:57:03.044827 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 17:57:03.046563 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 17:57:03.047983 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 17:57:03.049458 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 17:57:03.051068 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 17:57:03.052778 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 17:57:03.054358 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 17:57:03.055897 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 17:57:03.057308 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 17:57:03.058784 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 17:57:03.060101 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 17:57:03.060237 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 17:57:03.062342 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:57:03.064043 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 17:57:03.065718 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 17:57:03.070239 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 17:57:03.071269 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 17:57:03.071379 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 17:57:03.073895 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 17:57:03.074005 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 17:57:03.075795 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 17:57:03.077145 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 17:57:03.082233 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 17:57:03.083222 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 17:57:03.085057 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 17:57:03.086379 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 17:57:03.086467 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 17:57:03.087758 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 17:57:03.087839 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 17:57:03.089103 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 17:57:03.089220 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 17:57:03.090707 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 17:57:03.090807 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 17:57:03.102316 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 17:57:03.102976 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 17:57:03.103096 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:57:03.105827 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 17:57:03.107195 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 17:57:03.107307 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 17:57:03.109063 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 17:57:03.109634 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 17:57:03.113746 ignition[999]: INFO : Ignition 2.19.0
Nov 12 17:57:03.113746 ignition[999]: INFO : Stage: umount
Nov 12 17:57:03.113746 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:57:03.113746 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:57:03.117584 ignition[999]: INFO : umount: umount passed
Nov 12 17:57:03.117584 ignition[999]: INFO : Ignition finished successfully
Nov 12 17:57:03.116499 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 17:57:03.117084 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 17:57:03.118296 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 17:57:03.120903 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 17:57:03.120991 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 17:57:03.124438 systemd[1]: Stopped target network.target - Network.
Nov 12 17:57:03.125387 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 17:57:03.125454 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 17:57:03.126911 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 17:57:03.126948 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 17:57:03.128559 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 17:57:03.128604 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 17:57:03.129983 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 17:57:03.130019 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 17:57:03.133202 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 17:57:03.134755 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 17:57:03.141209 systemd-networkd[768]: eth0: DHCPv6 lease lost
Nov 12 17:57:03.142678 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 17:57:03.142790 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 17:57:03.144179 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 17:57:03.144213 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 17:57:03.154310 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 17:57:03.155038 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 17:57:03.155087 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 17:57:03.156884 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 17:57:03.160087 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 17:57:03.160201 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 17:57:03.163941 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 17:57:03.164019 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 17:57:03.165495 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 17:57:03.165542 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 17:57:03.167133 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 17:57:03.167189 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 17:57:03.176440 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 17:57:03.176597 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 17:57:03.178519 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 17:57:03.178618 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 17:57:03.180122 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 17:57:03.181518 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 17:57:03.183587 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 17:57:03.183633 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 17:57:03.184803 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 17:57:03.184837 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 17:57:03.186461 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 17:57:03.186499 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 17:57:03.189017 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 17:57:03.189057 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 17:57:03.191465 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 17:57:03.191504 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:57:03.193260 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 17:57:03.193301 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 17:57:03.207366 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 17:57:03.208269 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 17:57:03.208325 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 17:57:03.210197 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 17:57:03.210238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:57:03.215066 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 17:57:03.216364 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 17:57:03.217376 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 17:57:03.219660 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 17:57:03.228734 systemd[1]: Switching root.
Nov 12 17:57:03.256001 systemd-journald[238]: Journal stopped
Nov 12 17:57:03.961004 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Nov 12 17:57:03.961056 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 17:57:03.961067 kernel: SELinux: policy capability open_perms=1
Nov 12 17:57:03.961077 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 17:57:03.961089 kernel: SELinux: policy capability always_check_network=0
Nov 12 17:57:03.961099 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 17:57:03.961108 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 17:57:03.961117 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 17:57:03.961127 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 17:57:03.961140 kernel: audit: type=1403 audit(1731434223.390:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 17:57:03.961152 systemd[1]: Successfully loaded SELinux policy in 30.591ms.
Nov 12 17:57:03.961246 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.418ms.
Nov 12 17:57:03.961260 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 17:57:03.961274 systemd[1]: Detected virtualization kvm.
Nov 12 17:57:03.961284 systemd[1]: Detected architecture arm64.
Nov 12 17:57:03.961294 systemd[1]: Detected first boot.
Nov 12 17:57:03.961307 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 17:57:03.961318 zram_generator::config[1044]: No configuration found.
Nov 12 17:57:03.961329 systemd[1]: Populated /etc with preset unit settings.
Nov 12 17:57:03.961340 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 17:57:03.961351 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 17:57:03.961363 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 17:57:03.961375 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 17:57:03.961385 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 17:57:03.961396 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 17:57:03.961406 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 17:57:03.961420 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 17:57:03.961431 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 17:57:03.961442 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 17:57:03.961452 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 17:57:03.961464 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 17:57:03.961475 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 17:57:03.961486 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 17:57:03.961496 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 17:57:03.961506 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 17:57:03.961517 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 17:57:03.961527 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 12 17:57:03.961539 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 17:57:03.961550 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 17:57:03.961562 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 17:57:03.961582 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 17:57:03.961592 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 17:57:03.961603 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:57:03.961613 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 17:57:03.961624 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 17:57:03.961635 systemd[1]: Reached target swap.target - Swaps.
Nov 12 17:57:03.961647 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 17:57:03.961658 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 17:57:03.961668 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 17:57:03.961678 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 17:57:03.961688 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 17:57:03.961699 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 17:57:03.961710 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 17:57:03.961720 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 17:57:03.961733 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 17:57:03.961745 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 17:57:03.961756 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 17:57:03.961766 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 17:57:03.961778 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 17:57:03.961788 systemd[1]: Reached target machines.target - Containers.
Nov 12 17:57:03.961799 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 17:57:03.961809 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:57:03.961820 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 17:57:03.961830 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 17:57:03.961842 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 17:57:03.961853 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 17:57:03.961863 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 17:57:03.961873 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 17:57:03.961884 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 17:57:03.961894 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 17:57:03.961904 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 17:57:03.961915 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 17:57:03.961926 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 17:57:03.961937 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 17:57:03.961946 kernel: fuse: init (API version 7.39)
Nov 12 17:57:03.961956 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 17:57:03.961967 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 17:57:03.961977 kernel: loop: module loaded
Nov 12 17:57:03.961987 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 17:57:03.961998 kernel: ACPI: bus type drm_connector registered
Nov 12 17:57:03.962008 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 17:57:03.962020 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 17:57:03.962030 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 17:57:03.962040 systemd[1]: Stopped verity-setup.service.
Nov 12 17:57:03.962067 systemd-journald[1104]: Collecting audit messages is disabled.
Nov 12 17:57:03.962089 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 17:57:03.962100 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 17:57:03.962110 systemd-journald[1104]: Journal started
Nov 12 17:57:03.962132 systemd-journald[1104]: Runtime Journal (/run/log/journal/88bb36f75067491f913f25561f26ae4f) is 5.9M, max 47.3M, 41.4M free.
Nov 12 17:57:03.762451 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 17:57:03.786612 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 12 17:57:03.786962 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 17:57:03.963200 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 17:57:03.965150 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 17:57:03.966101 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 17:57:03.967088 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 17:57:03.968105 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 17:57:03.970233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:57:03.971373 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 17:57:03.971508 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 17:57:03.972896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 17:57:03.973054 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 17:57:03.974241 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 17:57:03.974377 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 17:57:03.975382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 17:57:03.975511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 17:57:03.977689 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 17:57:03.978852 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 17:57:03.978992 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 17:57:03.980046 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 17:57:03.980204 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 17:57:03.981340 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 17:57:03.982498 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 17:57:03.983674 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 17:57:03.994762 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 17:57:04.007349 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 17:57:04.009205 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 17:57:04.010024 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 17:57:04.010054 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 17:57:04.011781 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 17:57:04.013700 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 17:57:04.015473 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 17:57:04.016391 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:57:04.017744 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 17:57:04.019636 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 17:57:04.020576 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 17:57:04.024358 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 17:57:04.026005 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 17:57:04.027914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 17:57:04.028668 systemd-journald[1104]: Time spent on flushing to /var/log/journal/88bb36f75067491f913f25561f26ae4f is 17.551ms for 853 entries.
Nov 12 17:57:04.028668 systemd-journald[1104]: System Journal (/var/log/journal/88bb36f75067491f913f25561f26ae4f) is 8.0M, max 195.6M, 187.6M free.
Nov 12 17:57:04.065229 systemd-journald[1104]: Received client request to flush runtime journal.
Nov 12 17:57:04.065332 kernel: loop0: detected capacity change from 0 to 114328
Nov 12 17:57:04.065361 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 17:57:04.033238 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 17:57:04.036468 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 17:57:04.038835 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 17:57:04.040450 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 17:57:04.041537 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 17:57:04.043850 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 17:57:04.047575 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 17:57:04.051621 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 17:57:04.060590 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 17:57:04.063487 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 17:57:04.068814 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 17:57:04.080354 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 17:57:04.081709 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 17:57:04.084005 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 17:57:04.084856 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 17:57:04.087807 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 12 17:57:04.093286 kernel: loop1: detected capacity change from 0 to 114432
Nov 12 17:57:04.097118 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 17:57:04.116543 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Nov 12 17:57:04.116560 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Nov 12 17:57:04.120580 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 17:57:04.125188 kernel: loop2: detected capacity change from 0 to 194096
Nov 12 17:57:04.170203 kernel: loop3: detected capacity change from 0 to 114328
Nov 12 17:57:04.176174 kernel: loop4: detected capacity change from 0 to 114432
Nov 12 17:57:04.181181 kernel: loop5: detected capacity change from 0 to 194096
Nov 12 17:57:04.187524 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 12 17:57:04.187960 (sd-merge)[1180]: Merged extensions into '/usr'.
Nov 12 17:57:04.191652 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 17:57:04.191774 systemd[1]: Reloading...
Nov 12 17:57:04.242195 zram_generator::config[1210]: No configuration found.
Nov 12 17:57:04.304588 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 17:57:04.354608 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 17:57:04.390190 systemd[1]: Reloading finished in 198 ms.
Nov 12 17:57:04.419398 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 17:57:04.422190 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 17:57:04.433318 systemd[1]: Starting ensure-sysext.service...
Nov 12 17:57:04.436256 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 17:57:04.451969 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
Nov 12 17:57:04.451984 systemd[1]: Reloading...
Nov 12 17:57:04.454776 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 17:57:04.455011 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 17:57:04.455761 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 17:57:04.455974 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Nov 12 17:57:04.456020 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Nov 12 17:57:04.458335 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 17:57:04.458348 systemd-tmpfiles[1242]: Skipping /boot
Nov 12 17:57:04.465061 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 17:57:04.465075 systemd-tmpfiles[1242]: Skipping /boot
Nov 12 17:57:04.501180 zram_generator::config[1272]: No configuration found.
Nov 12 17:57:04.576715 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 17:57:04.611772 systemd[1]: Reloading finished in 159 ms.
Nov 12 17:57:04.628912 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 17:57:04.639640 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 17:57:04.646480 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 17:57:04.648850 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 17:57:04.650987 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 17:57:04.657243 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 17:57:04.660041 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 17:57:04.664996 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 17:57:04.668137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:57:04.670123 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 17:57:04.672521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 17:57:04.679230 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 17:57:04.680131 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:57:04.684472 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 17:57:04.686390 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 17:57:04.687817 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 17:57:04.689323 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 17:57:04.690333 systemd-udevd[1314]: Using default interface naming scheme 'v255'.
Nov 12 17:57:04.691094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 17:57:04.691347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 17:57:04.692922 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 17:57:04.693126 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 17:57:04.701059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:57:04.705311 augenrules[1334]: No rules
Nov 12 17:57:04.707444 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 17:57:04.709419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 17:57:04.712909 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 17:57:04.714298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:57:04.718796 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 17:57:04.720030 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 17:57:04.723196 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 17:57:04.724372 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 17:57:04.727108 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 17:57:04.728586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 17:57:04.728704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 17:57:04.731114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 17:57:04.731257 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 17:57:04.732500 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 17:57:04.747242 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1345)
Nov 12 17:57:04.751348 systemd[1]: Finished ensure-sysext.service.
Nov 12 17:57:04.758717 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Nov 12 17:57:04.761265 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1345)
Nov 12 17:57:04.765434 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 17:57:04.767237 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 17:57:04.771646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:57:04.776249 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1358)
Nov 12 17:57:04.776451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 17:57:04.780178 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 17:57:04.781766 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 17:57:04.782711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:57:04.784453 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 17:57:04.788337 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 12 17:57:04.789145 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 17:57:04.789715 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 17:57:04.791496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 17:57:04.791637 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 17:57:04.792727 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 17:57:04.792849 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 17:57:04.794537 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 17:57:04.794689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 17:57:04.805619 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 17:57:04.808583 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 17:57:04.809583 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 17:57:04.809646 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 17:57:04.826832 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 17:57:04.854640 systemd-resolved[1310]: Positive Trust Anchors:
Nov 12 17:57:04.854657 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 17:57:04.854688 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 17:57:04.862299 systemd-resolved[1310]: Defaulting to hostname 'linux'.
Nov 12 17:57:04.868741 systemd-networkd[1383]: lo: Link UP
Nov 12 17:57:04.868748 systemd-networkd[1383]: lo: Gained carrier
Nov 12 17:57:04.869285 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 17:57:04.869631 systemd-networkd[1383]: Enumeration completed
Nov 12 17:57:04.870545 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 17:57:04.871875 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:57:04.871883 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 17:57:04.872536 systemd-networkd[1383]: eth0: Link UP
Nov 12 17:57:04.872544 systemd-networkd[1383]: eth0: Gained carrier
Nov 12 17:57:04.872559 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:57:04.872625 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 12 17:57:04.873625 systemd[1]: Reached target network.target - Network.
Nov 12 17:57:04.874433 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:57:04.875457 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 17:57:04.886365 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 17:57:04.897231 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 17:57:04.898067 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection.
Nov 12 17:57:04.899720 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 12 17:57:04.899840 systemd-timesyncd[1385]: Initial clock synchronization to Tue 2024-11-12 17:57:04.703763 UTC.
Nov 12 17:57:04.917422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:57:04.930570 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 17:57:04.933472 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 17:57:04.956709 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 17:57:04.961203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:57:04.982137 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 17:57:04.983716 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:57:04.984604 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 17:57:04.985462 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 17:57:04.986359 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 17:57:04.987396 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 17:57:04.988388 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 17:57:04.989431 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 17:57:04.990317 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 17:57:04.990347 systemd[1]: Reached target paths.target - Path Units.
Nov 12 17:57:04.990972 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 17:57:04.992477 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 17:57:04.994550 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 17:57:05.002954 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 17:57:05.004872 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 17:57:05.006177 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 17:57:05.007016 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 17:57:05.007735 systemd[1]: Reached target basic.target - Basic System.
Nov 12 17:57:05.008432 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 17:57:05.008461 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 17:57:05.009361 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 17:57:05.011069 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 17:57:05.012445 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 17:57:05.014342 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 17:57:05.016057 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 17:57:05.016816 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 17:57:05.019434 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 17:57:05.022318 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 17:57:05.029344 jq[1412]: false
Nov 12 17:57:05.028509 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 17:57:05.031871 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 17:57:05.035342 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 17:57:05.039937 extend-filesystems[1413]: Found loop3
Nov 12 17:57:05.039937 extend-filesystems[1413]: Found loop4
Nov 12 17:57:05.039937 extend-filesystems[1413]: Found loop5
Nov 12 17:57:05.039937 extend-filesystems[1413]: Found vda
Nov 12 17:57:05.041407 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 17:57:05.049955 extend-filesystems[1413]: Found vda1
Nov 12 17:57:05.049955 extend-filesystems[1413]: Found vda2
Nov 12 17:57:05.049955 extend-filesystems[1413]: Found vda3
Nov 12 17:57:05.049955 extend-filesystems[1413]: Found usr
Nov 12 17:57:05.049955 extend-filesystems[1413]: Found vda4
Nov 12 17:57:05.049955 extend-filesystems[1413]: Found vda6
Nov 12 17:57:05.049955 extend-filesystems[1413]: Found vda7
Nov 12 17:57:05.049955 extend-filesystems[1413]: Found vda9
Nov 12 17:57:05.049955 extend-filesystems[1413]: Checking size of /dev/vda9
Nov 12 17:57:05.066290 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1358)
Nov 12 17:57:05.046289 dbus-daemon[1411]: [system] SELinux support is enabled
Nov 12 17:57:05.041807 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 17:57:05.066640 extend-filesystems[1413]: Resized partition /dev/vda9
Nov 12 17:57:05.042601 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 17:57:05.068345 extend-filesystems[1433]: resize2fs 1.47.1 (20-May-2024)
Nov 12 17:57:05.077471 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 12 17:57:05.046340 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 17:57:05.077705 jq[1429]: true
Nov 12 17:57:05.048607 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 17:57:05.056213 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 17:57:05.060539 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 17:57:05.060687 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 17:57:05.060929 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 17:57:05.061082 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 17:57:05.076647 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 17:57:05.076832 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 17:57:05.089291 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 17:57:05.099019 jq[1437]: true
Nov 12 17:57:05.113490 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 17:57:05.113543 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 17:57:05.114569 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 17:57:05.114595 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 17:57:05.139368 tar[1436]: linux-arm64/helm
Nov 12 17:57:05.142223 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 12 17:57:05.142608 update_engine[1426]: I20241112 17:57:05.141556 1426 main.cc:92] Flatcar Update Engine starting
Nov 12 17:57:05.147661 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 17:57:05.153101 update_engine[1426]: I20241112 17:57:05.149111 1426 update_check_scheduler.cc:74] Next update check in 6m16s Nov 12 17:57:05.153470 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button) Nov 12 17:57:05.156011 systemd-logind[1424]: New seat seat0. Nov 12 17:57:05.160239 extend-filesystems[1433]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 17:57:05.160239 extend-filesystems[1433]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 17:57:05.160239 extend-filesystems[1433]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 17:57:05.159404 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 17:57:05.167275 bash[1464]: Updated "/home/core/.ssh/authorized_keys" Nov 12 17:57:05.167356 extend-filesystems[1413]: Resized filesystem in /dev/vda9 Nov 12 17:57:05.160517 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 17:57:05.161504 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 17:57:05.162276 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 17:57:05.166544 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 17:57:05.171972 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 17:57:05.233852 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 17:57:05.283967 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 17:57:05.303586 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 17:57:05.314599 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 17:57:05.320322 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 17:57:05.320500 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 17:57:05.323069 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 17:57:05.339846 containerd[1440]: time="2024-11-12T17:57:05.339774622Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 17:57:05.344566 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 17:57:05.355685 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 17:57:05.357871 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 12 17:57:05.359074 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 17:57:05.364667 containerd[1440]: time="2024-11-12T17:57:05.364602767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:57:05.365928 containerd[1440]: time="2024-11-12T17:57:05.365883306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:57:05.365928 containerd[1440]: time="2024-11-12T17:57:05.365918386Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 17:57:05.366004 containerd[1440]: time="2024-11-12T17:57:05.365935399Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Nov 12 17:57:05.366120 containerd[1440]: time="2024-11-12T17:57:05.366090427Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 17:57:05.366120 containerd[1440]: time="2024-11-12T17:57:05.366115752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 17:57:05.366257 containerd[1440]: time="2024-11-12T17:57:05.366230316Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:57:05.366257 containerd[1440]: time="2024-11-12T17:57:05.366250685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:57:05.366453 containerd[1440]: time="2024-11-12T17:57:05.366424444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:57:05.366453 containerd[1440]: time="2024-11-12T17:57:05.366446568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 17:57:05.366505 containerd[1440]: time="2024-11-12T17:57:05.366459562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:57:05.366505 containerd[1440]: time="2024-11-12T17:57:05.366468888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 17:57:05.366560 containerd[1440]: time="2024-11-12T17:57:05.366544003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:57:05.366754 containerd[1440]: time="2024-11-12T17:57:05.366730014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 17:57:05.366847 containerd[1440]: time="2024-11-12T17:57:05.366830921Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 17:57:05.366868 containerd[1440]: time="2024-11-12T17:57:05.366848364Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 17:57:05.366942 containerd[1440]: time="2024-11-12T17:57:05.366926678Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 17:57:05.366982 containerd[1440]: time="2024-11-12T17:57:05.366970069Z" level=info msg="metadata content store policy set" policy=shared Nov 12 17:57:05.371840 containerd[1440]: time="2024-11-12T17:57:05.371808662Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 17:57:05.371903 containerd[1440]: time="2024-11-12T17:57:05.371853302Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 17:57:05.371903 containerd[1440]: time="2024-11-12T17:57:05.371868715Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Nov 12 17:57:05.371903 containerd[1440]: time="2024-11-12T17:57:05.371884518Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 17:57:05.371903 containerd[1440]: time="2024-11-12T17:57:05.371898761Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 17:57:05.372050 containerd[1440]: time="2024-11-12T17:57:05.372027178Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 17:57:05.372331 containerd[1440]: time="2024-11-12T17:57:05.372313004Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 17:57:05.372441 containerd[1440]: time="2024-11-12T17:57:05.372424173Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 17:57:05.372465 containerd[1440]: time="2024-11-12T17:57:05.372446064Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 17:57:05.372550 containerd[1440]: time="2024-11-12T17:57:05.372539167Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 17:57:05.372569 containerd[1440]: time="2024-11-12T17:57:05.372555751Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 17:57:05.372600 containerd[1440]: time="2024-11-12T17:57:05.372569252Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 17:57:05.372600 containerd[1440]: time="2024-11-12T17:57:05.372582909Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 17:57:05.372600 containerd[1440]: time="2024-11-12T17:57:05.372596878Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 17:57:05.372659 containerd[1440]: time="2024-11-12T17:57:05.372611550Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 17:57:05.372659 containerd[1440]: time="2024-11-12T17:57:05.372624232Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 17:57:05.372659 containerd[1440]: time="2024-11-12T17:57:05.372637226Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 17:57:05.372659 containerd[1440]: time="2024-11-12T17:57:05.372648269Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 17:57:05.372724 containerd[1440]: time="2024-11-12T17:57:05.372666881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372724 containerd[1440]: time="2024-11-12T17:57:05.372680422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372724 containerd[1440]: time="2024-11-12T17:57:05.372692323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372724 containerd[1440]: time="2024-11-12T17:57:05.372703834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Nov 12 17:57:05.372724 containerd[1440]: time="2024-11-12T17:57:05.372715267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372807 containerd[1440]: time="2024-11-12T17:57:05.372727832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372807 containerd[1440]: time="2024-11-12T17:57:05.372738835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372807 containerd[1440]: time="2024-11-12T17:57:05.372751010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372807 containerd[1440]: time="2024-11-12T17:57:05.372767047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372807 containerd[1440]: time="2024-11-12T17:57:05.372786323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372807 containerd[1440]: time="2024-11-12T17:57:05.372798069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372906 containerd[1440]: time="2024-11-12T17:57:05.372810243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372906 containerd[1440]: time="2024-11-12T17:57:05.372822613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372906 containerd[1440]: time="2024-11-12T17:57:05.372838299Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 17:57:05.372906 containerd[1440]: time="2024-11-12T17:57:05.372861750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372906 containerd[1440]: time="2024-11-12T17:57:05.372874900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.372906 containerd[1440]: time="2024-11-12T17:57:05.372885436Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 17:57:05.373548 containerd[1440]: time="2024-11-12T17:57:05.373523227Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 17:57:05.373586 containerd[1440]: time="2024-11-12T17:57:05.373557800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 17:57:05.373586 containerd[1440]: time="2024-11-12T17:57:05.373569623Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 17:57:05.373586 containerd[1440]: time="2024-11-12T17:57:05.373583046Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 17:57:05.373639 containerd[1440]: time="2024-11-12T17:57:05.373592723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.373639 containerd[1440]: time="2024-11-12T17:57:05.373605483Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Nov 12 17:57:05.373639 containerd[1440]: time="2024-11-12T17:57:05.373615823Z" level=info msg="NRI interface is disabled by configuration." Nov 12 17:57:05.373639 containerd[1440]: time="2024-11-12T17:57:05.373627061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 17:57:05.374022 containerd[1440]: time="2024-11-12T17:57:05.373961234Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 17:57:05.374022 containerd[1440]: time="2024-11-12T17:57:05.374025969Z" level=info msg="Connect containerd service" Nov 12 17:57:05.374156 containerd[1440]: time="2024-11-12T17:57:05.374054415Z" level=info msg="using legacy CRI server" Nov 12 17:57:05.374156 containerd[1440]: time="2024-11-12T17:57:05.374061517Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 17:57:05.374156 containerd[1440]: time="2024-11-12T17:57:05.374151147Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 17:57:05.374815 
containerd[1440]: time="2024-11-12T17:57:05.374786246Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 17:57:05.375246 containerd[1440]: time="2024-11-12T17:57:05.375061029Z" level=info msg="Start subscribing containerd event" Nov 12 17:57:05.375246 containerd[1440]: time="2024-11-12T17:57:05.375115931Z" level=info msg="Start recovering state" Nov 12 17:57:05.375246 containerd[1440]: time="2024-11-12T17:57:05.375220350Z" level=info msg="Start event monitor" Nov 12 17:57:05.375360 containerd[1440]: time="2024-11-12T17:57:05.375234593Z" level=info msg="Start snapshots syncer" Nov 12 17:57:05.375430 containerd[1440]: time="2024-11-12T17:57:05.375405854Z" level=info msg="Start cni network conf syncer for default" Nov 12 17:57:05.375531 containerd[1440]: time="2024-11-12T17:57:05.375348299Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 17:57:05.375572 containerd[1440]: time="2024-11-12T17:57:05.375561195Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 17:57:05.375602 containerd[1440]: time="2024-11-12T17:57:05.375469380Z" level=info msg="Start streaming server" Nov 12 17:57:05.376192 containerd[1440]: time="2024-11-12T17:57:05.375679584Z" level=info msg="containerd successfully booted in 0.036756s" Nov 12 17:57:05.375847 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 17:57:05.492530 tar[1436]: linux-arm64/LICENSE Nov 12 17:57:05.492630 tar[1436]: linux-arm64/README.md Nov 12 17:57:05.505393 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 17:57:06.406268 systemd-networkd[1383]: eth0: Gained IPv6LL Nov 12 17:57:06.409946 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 17:57:06.411873 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 17:57:06.429419 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 17:57:06.439110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:57:06.441065 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 17:57:06.455522 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 17:57:06.456246 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 17:57:06.457983 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 17:57:06.461660 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 17:57:06.930154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:57:06.931543 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 17:57:06.934209 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:57:06.936248 systemd[1]: Startup finished in 553ms (kernel) + 4.684s (initrd) + 3.579s (userspace) = 8.817s. 
Nov 12 17:57:07.400862 kubelet[1523]: E1112 17:57:07.400753 1523 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:57:07.403518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:57:07.403668 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:57:10.833686 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 17:57:10.834726 systemd[1]: Started sshd@0-10.0.0.106:22-10.0.0.1:35988.service - OpenSSH per-connection server daemon (10.0.0.1:35988). Nov 12 17:57:10.882616 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 35988 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:57:10.884270 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:57:10.891624 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 17:57:10.900391 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 17:57:10.901806 systemd-logind[1424]: New session 1 of user core. Nov 12 17:57:10.909121 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 17:57:10.912444 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 17:57:10.918802 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 17:57:11.004774 systemd[1542]: Queued start job for default target default.target. Nov 12 17:57:11.020120 systemd[1542]: Created slice app.slice - User Application Slice. Nov 12 17:57:11.020186 systemd[1542]: Reached target paths.target - Paths. Nov 12 17:57:11.020200 systemd[1542]: Reached target timers.target - Timers. Nov 12 17:57:11.021466 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 17:57:11.031779 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 17:57:11.031849 systemd[1542]: Reached target sockets.target - Sockets. Nov 12 17:57:11.031861 systemd[1542]: Reached target basic.target - Basic System. Nov 12 17:57:11.031911 systemd[1542]: Reached target default.target - Main User Target. Nov 12 17:57:11.031945 systemd[1542]: Startup finished in 107ms. Nov 12 17:57:11.032220 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 17:57:11.033561 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 17:57:11.094325 systemd[1]: Started sshd@1-10.0.0.106:22-10.0.0.1:35994.service - OpenSSH per-connection server daemon (10.0.0.1:35994). Nov 12 17:57:11.127376 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 35994 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:57:11.128582 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:57:11.132749 systemd-logind[1424]: New session 2 of user core. Nov 12 17:57:11.139313 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 17:57:11.190004 sshd[1553]: pam_unix(sshd:session): session closed for user core Nov 12 17:57:11.200606 systemd[1]: sshd@1-10.0.0.106:22-10.0.0.1:35994.service: Deactivated successfully. Nov 12 17:57:11.202549 systemd[1]: session-2.scope: Deactivated successfully. 
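[Annotation] The kubelet crash above is a plain missing-file failure: /var/lib/kubelet/config.yaml is only written during node bootstrap (kubeadm writes it at init/join time), so until then every start exits with status 1 and systemd schedules a restart; the later restart entries at 17:57:17 and 17:57:27 show the roughly 10-second cadence. A minimal sketch of the failing step, as a hypothetical loader that just reads the file and wraps the error the way the message above nests it:

```go
// Sketch: the failure mode logged above is a plain ENOENT from reading
// the config file, wrapped with context at each layer. Hypothetical
// loader, not kubelet's actual code.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func loadKubeletConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read kubelet config file %q, error: %w", path, err)
	}
	return data, nil // the real kubelet then decodes this into a KubeletConfiguration
}

func main() {
	_, err := loadKubeletConfig("/var/lib/kubelet/config.yaml")
	if errors.Is(err, fs.ErrNotExist) {
		// systemd sees exit status 1 and schedules a restart job.
		fmt.Fprintln(os.Stderr, "command failed:", err)
		os.Exit(1)
	}
}
```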
Nov 12 17:57:11.204331 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. Nov 12 17:57:11.214447 systemd[1]: Started sshd@2-10.0.0.106:22-10.0.0.1:35996.service - OpenSSH per-connection server daemon (10.0.0.1:35996). Nov 12 17:57:11.215370 systemd-logind[1424]: Removed session 2. Nov 12 17:57:11.244710 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 35996 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:57:11.245933 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:57:11.250004 systemd-logind[1424]: New session 3 of user core. Nov 12 17:57:11.256309 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 17:57:11.303809 sshd[1560]: pam_unix(sshd:session): session closed for user core Nov 12 17:57:11.312589 systemd[1]: sshd@2-10.0.0.106:22-10.0.0.1:35996.service: Deactivated successfully. Nov 12 17:57:11.315532 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 17:57:11.316762 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit. Nov 12 17:57:11.317952 systemd[1]: Started sshd@3-10.0.0.106:22-10.0.0.1:36000.service - OpenSSH per-connection server daemon (10.0.0.1:36000). Nov 12 17:57:11.318665 systemd-logind[1424]: Removed session 3. Nov 12 17:57:11.351963 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 36000 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:57:11.353193 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:57:11.356698 systemd-logind[1424]: New session 4 of user core. Nov 12 17:57:11.368356 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 17:57:11.419056 sshd[1567]: pam_unix(sshd:session): session closed for user core Nov 12 17:57:11.431456 systemd[1]: sshd@3-10.0.0.106:22-10.0.0.1:36000.service: Deactivated successfully. Nov 12 17:57:11.432812 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 17:57:11.434024 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. Nov 12 17:57:11.435057 systemd[1]: Started sshd@4-10.0.0.106:22-10.0.0.1:36002.service - OpenSSH per-connection server daemon (10.0.0.1:36002). Nov 12 17:57:11.436463 systemd-logind[1424]: Removed session 4. Nov 12 17:57:11.468548 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 36002 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:57:11.469718 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:57:11.474196 systemd-logind[1424]: New session 5 of user core. Nov 12 17:57:11.483344 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 17:57:11.544805 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 17:57:11.545083 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:57:11.567007 sudo[1577]: pam_unix(sudo:session): session closed for user root Nov 12 17:57:11.568754 sshd[1574]: pam_unix(sshd:session): session closed for user core Nov 12 17:57:11.582612 systemd[1]: sshd@4-10.0.0.106:22-10.0.0.1:36002.service: Deactivated successfully. Nov 12 17:57:11.586203 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 17:57:11.587490 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. Nov 12 17:57:11.597400 systemd[1]: Started sshd@5-10.0.0.106:22-10.0.0.1:36018.service - OpenSSH per-connection server daemon (10.0.0.1:36018). 
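[Annotation] Each connection in this stretch follows the same shape: a per-connection sshd@N-<local>:22-<peer>:<port>.service unit, an "Accepted publickey" record carrying the key type and SHA256 fingerprint, a PAM session open, then teardown as the session scope deactivates. A small, hypothetical extractor for the accept lines; the regexp mirrors the message format seen in this journal and is an assumption, not an OpenSSH API:

```go
// Sketch: pull user, source address/port, and key fingerprint out of
// sshd "Accepted publickey" journal lines like the ones above.
// Hypothetical log-analysis helper.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var accepted = regexp.MustCompile(
	`Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := accepted.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("user=%s addr=%s port=%s keytype=%s fp=%s\n",
				m[1], m[2], m[3], m[4], m[5])
		}
	}
}
```

Piped this journal, it would report the repeated core logins from 10.0.0.1, all presenting the same RSA key.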
Nov 12 17:57:11.598347 systemd-logind[1424]: Removed session 5. Nov 12 17:57:11.627940 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 36018 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:57:11.629109 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:57:11.632856 systemd-logind[1424]: New session 6 of user core. Nov 12 17:57:11.642281 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 17:57:11.692068 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 17:57:11.692363 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:57:11.695276 sudo[1586]: pam_unix(sudo:session): session closed for user root Nov 12 17:57:11.699537 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 17:57:11.699794 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:57:11.719583 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 17:57:11.720602 auditctl[1589]: No rules Nov 12 17:57:11.720882 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 17:57:11.721055 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 17:57:11.723210 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 17:57:11.746144 augenrules[1607]: No rules Nov 12 17:57:11.749224 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 17:57:11.750297 sudo[1585]: pam_unix(sudo:session): session closed for user root Nov 12 17:57:11.751726 sshd[1582]: pam_unix(sshd:session): session closed for user core Nov 12 17:57:11.768488 systemd[1]: sshd@5-10.0.0.106:22-10.0.0.1:36018.service: Deactivated successfully. Nov 12 17:57:11.770029 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 17:57:11.771279 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. Nov 12 17:57:11.772357 systemd[1]: Started sshd@6-10.0.0.106:22-10.0.0.1:36032.service - OpenSSH per-connection server daemon (10.0.0.1:36032). Nov 12 17:57:11.773025 systemd-logind[1424]: Removed session 6. Nov 12 17:57:11.806101 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 36032 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:57:11.807555 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:57:11.811471 systemd-logind[1424]: New session 7 of user core. Nov 12 17:57:11.818286 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 17:57:11.869056 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 17:57:11.869726 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:57:12.216390 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 17:57:12.216555 (dockerd)[1636]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 17:57:12.469887 dockerd[1636]: time="2024-11-12T17:57:12.469616192Z" level=info msg="Starting up" Nov 12 17:57:12.614276 dockerd[1636]: time="2024-11-12T17:57:12.614229097Z" level=info msg="Loading containers: start." 
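[Annotation] dockerd is now loading container state; once it logs "Daemon has completed initialization" and "API listen on /run/docker.sock" a few entries below, the engine API is reachable over that unix socket. A minimal liveness probe against it, using only the documented /_ping endpoint; a sketch, not a Docker client:

```go
// Sketch: probe the Docker Engine API over its unix socket once the
// daemon logs "API listen on /run/docker.sock". Uses the documented
// /_ping endpoint, which returns 200 and the body "OK".
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	// The host part is ignored here because DialContext always dials the
	// unix socket; "docker" is just a placeholder.
	resp, err := client.Get("http://docker/_ping")
	if err != nil {
		fmt.Fprintln(os.Stderr, "daemon not reachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 "OK"
}
```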
Nov 12 17:57:12.696181 kernel: Initializing XFRM netlink socket Nov 12 17:57:12.755572 systemd-networkd[1383]: docker0: Link UP Nov 12 17:57:12.777476 dockerd[1636]: time="2024-11-12T17:57:12.777439132Z" level=info msg="Loading containers: done." Nov 12 17:57:12.791826 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2873889253-merged.mount: Deactivated successfully. Nov 12 17:57:12.793500 dockerd[1636]: time="2024-11-12T17:57:12.793304899Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 17:57:12.793500 dockerd[1636]: time="2024-11-12T17:57:12.793405403Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 17:57:12.793614 dockerd[1636]: time="2024-11-12T17:57:12.793513355Z" level=info msg="Daemon has completed initialization" Nov 12 17:57:12.819958 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 17:57:12.820629 dockerd[1636]: time="2024-11-12T17:57:12.819746233Z" level=info msg="API listen on /run/docker.sock" Nov 12 17:57:13.435670 containerd[1440]: time="2024-11-12T17:57:13.435618195Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\"" Nov 12 17:57:14.100598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1281722858.mount: Deactivated successfully. Nov 12 17:57:15.600505 containerd[1440]: time="2024-11-12T17:57:15.600460742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:15.601463 containerd[1440]: time="2024-11-12T17:57:15.601302962Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.6: active requests=0, bytes read=29864217" Nov 12 17:57:15.602066 containerd[1440]: time="2024-11-12T17:57:15.602040143Z" level=info msg="ImageCreate event name:\"sha256:6c71f76b696101728cbf70924bde859d444fb8016dfddc50303d11a31e8dae2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:15.607189 containerd[1440]: time="2024-11-12T17:57:15.606072640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:15.607259 containerd[1440]: time="2024-11-12T17:57:15.607219445Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.6\" with image id \"sha256:6c71f76b696101728cbf70924bde859d444fb8016dfddc50303d11a31e8dae2a\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\", size \"29861015\" in 2.171555977s" Nov 12 17:57:15.607259 containerd[1440]: time="2024-11-12T17:57:15.607247503Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\" returns image reference \"sha256:6c71f76b696101728cbf70924bde859d444fb8016dfddc50303d11a31e8dae2a\"" Nov 12 17:57:15.624735 containerd[1440]: time="2024-11-12T17:57:15.624705400Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\"" Nov 12 17:57:17.333583 containerd[1440]: time="2024-11-12T17:57:17.333518154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:17.336192 
containerd[1440]: time="2024-11-12T17:57:17.335897561Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.6: active requests=0, bytes read=26901029" Nov 12 17:57:17.337019 containerd[1440]: time="2024-11-12T17:57:17.336982345Z" level=info msg="ImageCreate event name:\"sha256:b572f51d3f4ccb05e0b995272e61c33a99fdf709f605989ee64e93248e0ca60a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:17.339969 containerd[1440]: time="2024-11-12T17:57:17.339933081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:17.341116 containerd[1440]: time="2024-11-12T17:57:17.341073947Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.6\" with image id \"sha256:b572f51d3f4ccb05e0b995272e61c33a99fdf709f605989ee64e93248e0ca60a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\", size \"28303652\" in 1.716330415s" Nov 12 17:57:17.341184 containerd[1440]: time="2024-11-12T17:57:17.341113073Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\" returns image reference \"sha256:b572f51d3f4ccb05e0b995272e61c33a99fdf709f605989ee64e93248e0ca60a\"" Nov 12 17:57:17.360126 containerd[1440]: time="2024-11-12T17:57:17.360092369Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\"" Nov 12 17:57:17.504897 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 17:57:17.517369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:57:17.612058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:57:17.615669 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:57:17.652280 kubelet[1872]: E1112 17:57:17.652217 1872 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:57:17.655880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:57:17.656053 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 17:57:18.434901 containerd[1440]: time="2024-11-12T17:57:18.434853374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:18.435438 containerd[1440]: time="2024-11-12T17:57:18.435273713Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.6: active requests=0, bytes read=16164694" Nov 12 17:57:18.436542 containerd[1440]: time="2024-11-12T17:57:18.436505972Z" level=info msg="ImageCreate event name:\"sha256:41769a7fc0b6741c0a2cc72b204685e278287051d0e65557d066a04781c38d95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:18.440259 containerd[1440]: time="2024-11-12T17:57:18.440205618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:18.441375 containerd[1440]: time="2024-11-12T17:57:18.441342371Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.6\" with image id \"sha256:41769a7fc0b6741c0a2cc72b204685e278287051d0e65557d066a04781c38d95\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\", size \"17567335\" in 1.081211426s" Nov 12 17:57:18.441454 containerd[1440]: time="2024-11-12T17:57:18.441378853Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\" returns image reference \"sha256:41769a7fc0b6741c0a2cc72b204685e278287051d0e65557d066a04781c38d95\"" Nov 12 17:57:18.461543 containerd[1440]: time="2024-11-12T17:57:18.461141205Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\"" Nov 12 17:57:19.594528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700652126.mount: Deactivated successfully. 
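[Annotation] Each pull record pairs "bytes read" (what the pull actually transferred) with a wall-clock duration, so effective registry throughput falls out directly; the quoted size is the image's recorded content size and need not equal the bytes transferred. For the kube-scheduler pull just above, a throwaway calculation with the figures copied from the log:

```go
// Sketch: effective pull throughput from the figures logged above for
// kube-scheduler v1.30.6.
package main

import "fmt"

func main() {
	const bytesRead = 16164694  // "bytes read" from the log
	const seconds = 1.081211426 // "in 1.081211426s"
	mbps := bytesRead / seconds / 1e6
	fmt.Printf("~%.1f MB/s over the pull\n", mbps) // ~15.0 MB/s
}
```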
Nov 12 17:57:19.987591 containerd[1440]: time="2024-11-12T17:57:19.987454592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:19.988144 containerd[1440]: time="2024-11-12T17:57:19.988112977Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.6: active requests=0, bytes read=25660280" Nov 12 17:57:19.989202 containerd[1440]: time="2024-11-12T17:57:19.989153314Z" level=info msg="ImageCreate event name:\"sha256:95ea5eecb1c87350e3f1d3aa5e1e9aef277acc9b38dff12db3f7e97141ccb494\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:19.991649 containerd[1440]: time="2024-11-12T17:57:19.991619447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:19.992375 containerd[1440]: time="2024-11-12T17:57:19.992338043Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.6\" with image id \"sha256:95ea5eecb1c87350e3f1d3aa5e1e9aef277acc9b38dff12db3f7e97141ccb494\", repo tag \"registry.k8s.io/kube-proxy:v1.30.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\", size \"25659297\" in 1.531146253s" Nov 12 17:57:19.992421 containerd[1440]: time="2024-11-12T17:57:19.992374425Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\" returns image reference \"sha256:95ea5eecb1c87350e3f1d3aa5e1e9aef277acc9b38dff12db3f7e97141ccb494\"" Nov 12 17:57:20.009929 containerd[1440]: time="2024-11-12T17:57:20.009887397Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 17:57:20.595051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649120150.mount: Deactivated successfully. 
Nov 12 17:57:21.179625 containerd[1440]: time="2024-11-12T17:57:21.179579819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:21.181620 containerd[1440]: time="2024-11-12T17:57:21.181582732Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Nov 12 17:57:21.182685 containerd[1440]: time="2024-11-12T17:57:21.182647086Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:21.186180 containerd[1440]: time="2024-11-12T17:57:21.186145583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:21.187603 containerd[1440]: time="2024-11-12T17:57:21.187372147Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.177445354s" Nov 12 17:57:21.187603 containerd[1440]: time="2024-11-12T17:57:21.187409040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Nov 12 17:57:21.205263 containerd[1440]: time="2024-11-12T17:57:21.205240781Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 17:57:21.596038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934052938.mount: Deactivated successfully. 
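[Annotation] One detail worth flagging against the CRI config dumped earlier: its SandboxImage is registry.k8s.io/pause:3.8, while the pull requested here is pause:3.9, the default pause image for the Kubernetes 1.30 tooling, so both tags end up on this node. Each pulled image is also logged under two reference forms, repo:tag and repo@sha256:digest. A deliberately simplified splitter for the two forms (it assumes no registry port in the host component, which a real reference parser must also handle):

```go
// Sketch: split the image references logged above ("repo:tag" and
// "repo@sha256:...") into repo and tag/digest. Simplified on purpose.
package main

import (
	"fmt"
	"strings"
)

func split(ref string) (repo, rest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		return ref[:i], ref[i+1:] // digest form
	}
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		return ref[:i], ref[i+1:] // tag form
	}
	return ref, "latest" // bare repo defaults to :latest
}

func main() {
	for _, ref := range []string{
		"registry.k8s.io/pause:3.9",
		"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	} {
		repo, rest := split(ref)
		fmt.Printf("%-22s -> %s\n", repo, rest)
	}
}
```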
Nov 12 17:57:21.600972 containerd[1440]: time="2024-11-12T17:57:21.600931534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:21.601696 containerd[1440]: time="2024-11-12T17:57:21.601503435Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Nov 12 17:57:21.602456 containerd[1440]: time="2024-11-12T17:57:21.602423927Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:21.605293 containerd[1440]: time="2024-11-12T17:57:21.605250173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:21.606057 containerd[1440]: time="2024-11-12T17:57:21.606025844Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 400.658071ms" Nov 12 17:57:21.606110 containerd[1440]: time="2024-11-12T17:57:21.606060862Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Nov 12 17:57:21.623622 containerd[1440]: time="2024-11-12T17:57:21.623570817Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Nov 12 17:57:22.134002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount111970081.mount: Deactivated successfully. Nov 12 17:57:24.206249 containerd[1440]: time="2024-11-12T17:57:24.205119852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:24.206249 containerd[1440]: time="2024-11-12T17:57:24.206210656Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Nov 12 17:57:24.206827 containerd[1440]: time="2024-11-12T17:57:24.206792487Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:24.209850 containerd[1440]: time="2024-11-12T17:57:24.209813105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:57:24.211762 containerd[1440]: time="2024-11-12T17:57:24.211732900Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.588127456s" Nov 12 17:57:24.211875 containerd[1440]: time="2024-11-12T17:57:24.211855822Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Nov 12 17:57:27.754607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Nov 12 17:57:27.768429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:57:27.892276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:57:27.895874 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:57:27.902440 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:57:27.903113 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 17:57:27.903306 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:57:27.925691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:57:27.939730 systemd[1]: Reloading requested from client PID 2106 ('systemctl') (unit session-7.scope)... Nov 12 17:57:27.939745 systemd[1]: Reloading... Nov 12 17:57:28.007189 zram_generator::config[2146]: No configuration found. Nov 12 17:57:28.121509 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:57:28.173404 systemd[1]: Reloading finished in 233 ms. Nov 12 17:57:28.217715 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:57:28.220746 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 17:57:28.220987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:57:28.224583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:57:28.319949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:57:28.323932 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 17:57:28.364251 kubelet[2192]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:57:28.364251 kubelet[2192]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 17:57:28.364251 kubelet[2192]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
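[Annotation] The "Referenced but unset environment variable" notices are systemd reporting that the kubelet unit's ExecStart expands $KUBELET_EXTRA_ARGS and $KUBELET_KUBEADM_ARGS, but no EnvironmentFile defines them yet, so they expand to empty strings rather than failing the start. A sketch of that check, assuming the variables would live in KEY=VALUE lines at /var/lib/kubelet/kubeadm-flags.env (a common kubeadm location; the path is an assumption here, not something this log confirms):

```go
// Sketch: reproduce systemd's "referenced but unset" warning for the
// two variables named in the kubelet unit's log lines above.
// Hypothetical checker; env-file path is an assumption.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	referenced := []string{"KUBELET_EXTRA_ARGS", "KUBELET_KUBEADM_ARGS"}
	set := map[string]bool{}
	if f, err := os.Open("/var/lib/kubelet/kubeadm-flags.env"); err == nil {
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if k, _, ok := strings.Cut(sc.Text(), "="); ok {
				set[strings.TrimSpace(k)] = true
			}
		}
	}
	for _, name := range referenced {
		if !set[name] {
			fmt.Printf("%s: referenced but unset, expands to an empty string\n", name)
		}
	}
}
```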
Nov 12 17:57:28.365389 kubelet[2192]: I1112 17:57:28.365118 2192 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 17:57:28.844689 kubelet[2192]: I1112 17:57:28.844644 2192 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 12 17:57:28.844689 kubelet[2192]: I1112 17:57:28.844674 2192 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 17:57:28.844893 kubelet[2192]: I1112 17:57:28.844878 2192 server.go:927] "Client rotation is on, will bootstrap in background" Nov 12 17:57:28.888682 kubelet[2192]: E1112 17:57:28.888620 2192 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.106:6443: connect: connection refused Nov 12 17:57:28.888682 kubelet[2192]: I1112 17:57:28.888653 2192 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 17:57:28.899150 kubelet[2192]: I1112 17:57:28.899114 2192 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 17:57:28.900116 kubelet[2192]: I1112 17:57:28.900064 2192 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 17:57:28.900300 kubelet[2192]: I1112 17:57:28.900109 2192 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 17:57:28.900464 kubelet[2192]: I1112 17:57:28.900367 2192 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 17:57:28.900464 kubelet[2192]: I1112 17:57:28.900376 2192 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 17:57:28.900658 kubelet[2192]: I1112 17:57:28.900628 2192 state_mem.go:36] "Initialized new in-memory state store" Nov 12 
17:57:28.901889 kubelet[2192]: I1112 17:57:28.901866 2192 kubelet.go:400] "Attempting to sync node with API server" Nov 12 17:57:28.901926 kubelet[2192]: I1112 17:57:28.901895 2192 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 17:57:28.902052 kubelet[2192]: W1112 17:57:28.902012 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Nov 12 17:57:28.902090 kubelet[2192]: E1112 17:57:28.902072 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Nov 12 17:57:28.902113 kubelet[2192]: I1112 17:57:28.902105 2192 kubelet.go:312] "Adding apiserver pod source" Nov 12 17:57:28.902262 kubelet[2192]: I1112 17:57:28.902251 2192 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 17:57:28.902685 kubelet[2192]: W1112 17:57:28.902633 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Nov 12 17:57:28.902725 kubelet[2192]: E1112 17:57:28.902685 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Nov 12 17:57:28.903368 kubelet[2192]: I1112 17:57:28.903347 2192 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 17:57:28.903709 kubelet[2192]: I1112 17:57:28.903694 2192 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 17:57:28.903815 kubelet[2192]: W1112 17:57:28.903803 2192 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
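[Annotation] Every reflector failure in this stretch bottoms out in the same syscall result: dial tcp 10.0.0.106:6443 returns ECONNREFUSED because nothing listens on the apiserver port until the kube-apiserver static pod is running, and the client-go informers simply keep retrying. The underlying test reduces to a TCP dial; a minimal sketch:

```go
// Sketch: the reachability test underneath every "connection refused"
// line above: a plain TCP dial to the API server endpoint.
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.106:6443", 2*time.Second)
	if errors.Is(err, syscall.ECONNREFUSED) {
		// Port closed: the apiserver static pod is not up yet.
		fmt.Println("connect: connection refused")
		return
	}
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```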
Nov 12 17:57:28.908788 kubelet[2192]: I1112 17:57:28.904625 2192 server.go:1264] "Started kubelet" Nov 12 17:57:28.908788 kubelet[2192]: I1112 17:57:28.907384 2192 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 17:57:28.908788 kubelet[2192]: I1112 17:57:28.907664 2192 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 17:57:28.908788 kubelet[2192]: I1112 17:57:28.907704 2192 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 17:57:28.908788 kubelet[2192]: I1112 17:57:28.908100 2192 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 17:57:28.908953 kubelet[2192]: I1112 17:57:28.908818 2192 server.go:455] "Adding debug handlers to kubelet server" Nov 12 17:57:28.912289 kubelet[2192]: E1112 17:57:28.912105 2192 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18074a469de7624c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 17:57:28.904598092 +0000 UTC m=+0.577553437,LastTimestamp:2024-11-12 17:57:28.904598092 +0000 UTC m=+0.577553437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 17:57:28.912720 kubelet[2192]: E1112 17:57:28.912699 2192 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 17:57:28.913096 kubelet[2192]: I1112 17:57:28.913065 2192 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 17:57:28.913450 kubelet[2192]: I1112 17:57:28.913433 2192 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 12 17:57:28.913550 kubelet[2192]: E1112 17:57:28.913516 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="200ms" Nov 12 17:57:28.913615 kubelet[2192]: E1112 17:57:28.913597 2192 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 17:57:28.913967 kubelet[2192]: W1112 17:57:28.913900 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Nov 12 17:57:28.914038 kubelet[2192]: E1112 17:57:28.913975 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Nov 12 17:57:28.914124 kubelet[2192]: I1112 17:57:28.914105 2192 factory.go:221] Registration of the systemd container factory successfully Nov 12 17:57:28.914213 kubelet[2192]: I1112 17:57:28.914195 2192 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 17:57:28.915128 kubelet[2192]: I1112 17:57:28.915053 2192 factory.go:221] Registration of the containerd container factory successfully Nov 12 17:57:28.915282 kubelet[2192]: I1112 17:57:28.915268 2192 reconciler.go:26] "Reconciler: start to sync state" Nov 12 17:57:28.925592 kubelet[2192]: I1112 17:57:28.925551 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 17:57:28.926788 kubelet[2192]: I1112 17:57:28.926765 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 17:57:28.926982 kubelet[2192]: I1112 17:57:28.926932 2192 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 17:57:28.926982 kubelet[2192]: I1112 17:57:28.926950 2192 kubelet.go:2337] "Starting kubelet main sync loop" Nov 12 17:57:28.927053 kubelet[2192]: E1112 17:57:28.926994 2192 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 17:57:28.927533 kubelet[2192]: W1112 17:57:28.927421 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Nov 12 17:57:28.927533 kubelet[2192]: E1112 17:57:28.927457 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Nov 12 17:57:28.931324 kubelet[2192]: I1112 17:57:28.931299 2192 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 17:57:28.931324 kubelet[2192]: I1112 17:57:28.931314 2192 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 17:57:28.931419 kubelet[2192]: I1112 17:57:28.931332 2192 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:57:28.990226 kubelet[2192]: I1112 17:57:28.990195 2192 policy_none.go:49] "None policy: Start" Nov 12 17:57:28.991066 kubelet[2192]: I1112 17:57:28.991022 2192 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 17:57:28.991066 kubelet[2192]: I1112 17:57:28.991051 2192 state_mem.go:35] "Initializing new in-memory state store" Nov 12 17:57:28.996880 systemd[1]: Created 
slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 17:57:29.008374 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 17:57:29.010804 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 12 17:57:29.014450 kubelet[2192]: I1112 17:57:29.014426 2192 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:57:29.014785 kubelet[2192]: E1112 17:57:29.014752 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Nov 12 17:57:29.016980 kubelet[2192]: I1112 17:57:29.016954 2192 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 17:57:29.017209 kubelet[2192]: I1112 17:57:29.017141 2192 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 17:57:29.017324 kubelet[2192]: I1112 17:57:29.017302 2192 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 17:57:29.019202 kubelet[2192]: E1112 17:57:29.019175 2192 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 17:57:29.027615 kubelet[2192]: I1112 17:57:29.027554 2192 topology_manager.go:215] "Topology Admit Handler" podUID="8c596e283afcc45ee8ff2081cc599b92" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 17:57:29.028547 kubelet[2192]: I1112 17:57:29.028456 2192 topology_manager.go:215] "Topology Admit Handler" podUID="35a50a3f0f14abbdd3fae477f39e6e18" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 17:57:29.029521 kubelet[2192]: I1112 17:57:29.029478 2192 topology_manager.go:215] "Topology Admit Handler" podUID="c95384ce7f39fb5cff38cd36dacf8a69" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 17:57:29.035384 systemd[1]: Created slice kubepods-burstable-pod8c596e283afcc45ee8ff2081cc599b92.slice - libcontainer container kubepods-burstable-pod8c596e283afcc45ee8ff2081cc599b92.slice. Nov 12 17:57:29.049234 systemd[1]: Created slice kubepods-burstable-pod35a50a3f0f14abbdd3fae477f39e6e18.slice - libcontainer container kubepods-burstable-pod35a50a3f0f14abbdd3fae477f39e6e18.slice. Nov 12 17:57:29.065982 systemd[1]: Created slice kubepods-burstable-podc95384ce7f39fb5cff38cd36dacf8a69.slice - libcontainer container kubepods-burstable-podc95384ce7f39fb5cff38cd36dacf8a69.slice. 
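[Annotation] All three static pods are admitted as Burstable, and because the container manager above runs with CgroupDriver systemd, each pod cgroup is realized as a slice named kubepods-burstable-pod<uid>.slice, nested under kubepods.slice and kubepods-burstable.slice; that is exactly the sequence of "Created slice" entries here. A sketch of the naming rule (the dash-to-underscore escape is part of kubelet's cgroupfs-to-systemd name conversion; these static-pod UIDs are manifest hashes with no dashes, so it is a no-op in this log):

```go
// Sketch: kubelet's systemd-driver pod cgroup naming, as seen in the
// "Created slice kubepods-burstable-pod<uid>.slice" lines above.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	uid = strings.ReplaceAll(uid, "-", "_") // systemd unit names cannot nest dashes freely
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
}

func main() {
	fmt.Println(podSlice("burstable", "8c596e283afcc45ee8ff2081cc599b92"))
	// -> kubepods-burstable-pod8c596e283afcc45ee8ff2081cc599b92.slice
}
```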
Nov 12 17:57:29.114012 kubelet[2192]: E1112 17:57:29.113920 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="400ms" Nov 12 17:57:29.117237 kubelet[2192]: I1112 17:57:29.117113 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:57:29.117237 kubelet[2192]: I1112 17:57:29.117152 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:57:29.117237 kubelet[2192]: I1112 17:57:29.117181 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:57:29.117237 kubelet[2192]: I1112 17:57:29.117200 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:57:29.117409 kubelet[2192]: I1112 17:57:29.117256 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c95384ce7f39fb5cff38cd36dacf8a69-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c95384ce7f39fb5cff38cd36dacf8a69\") " pod="kube-system/kube-scheduler-localhost" Nov 12 17:57:29.117409 kubelet[2192]: I1112 17:57:29.117288 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c596e283afcc45ee8ff2081cc599b92-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c596e283afcc45ee8ff2081cc599b92\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:57:29.117409 kubelet[2192]: I1112 17:57:29.117310 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c596e283afcc45ee8ff2081cc599b92-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8c596e283afcc45ee8ff2081cc599b92\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:57:29.117409 kubelet[2192]: I1112 17:57:29.117329 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c596e283afcc45ee8ff2081cc599b92-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c596e283afcc45ee8ff2081cc599b92\") " pod="kube-system/kube-apiserver-localhost" Nov 12 17:57:29.117409 kubelet[2192]: I1112 17:57:29.117350 2192 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 17:57:29.216459 kubelet[2192]: I1112 17:57:29.216428 2192 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:57:29.216796 kubelet[2192]: E1112 17:57:29.216753 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Nov 12 17:57:29.346900 kubelet[2192]: E1112 17:57:29.346850 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:57:29.349490 containerd[1440]: time="2024-11-12T17:57:29.349447079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8c596e283afcc45ee8ff2081cc599b92,Namespace:kube-system,Attempt:0,}" Nov 12 17:57:29.364487 kubelet[2192]: E1112 17:57:29.364391 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:57:29.364948 containerd[1440]: time="2024-11-12T17:57:29.364909984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:35a50a3f0f14abbdd3fae477f39e6e18,Namespace:kube-system,Attempt:0,}" Nov 12 17:57:29.368507 kubelet[2192]: E1112 17:57:29.368469 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:57:29.368923 containerd[1440]: time="2024-11-12T17:57:29.368882714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c95384ce7f39fb5cff38cd36dacf8a69,Namespace:kube-system,Attempt:0,}" Nov 12 17:57:29.514356 kubelet[2192]: E1112 17:57:29.514309 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="800ms" Nov 12 17:57:29.618214 kubelet[2192]: I1112 17:57:29.618083 2192 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 17:57:29.618598 kubelet[2192]: E1112 17:57:29.618461 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Nov 12 17:57:29.733596 kubelet[2192]: W1112 17:57:29.733538 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Nov 12 17:57:29.733596 kubelet[2192]: E1112 17:57:29.733582 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused 
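[Annotation] The "Nameserver limits exceeded" warnings come from kubelet's resolv.conf handling: glibc honours at most three nameserver entries (MAXNS), so when the host file lists more, kubelet applies only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and logs what it omitted. A sketch of that trim:

```go
// Sketch: kubelet-style trimming of resolv.conf nameservers to the
// glibc limit of 3, matching the "Nameserver limits exceeded" warning.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s), applying: %s\n",
			len(servers)-maxNameservers, strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	fmt.Println("nameservers:", servers)
}
```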
Nov 12 17:57:29.747930 kubelet[2192]: W1112 17:57:29.747886 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
Nov 12 17:57:29.747930 kubelet[2192]: E1112 17:57:29.747936 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
Nov 12 17:57:29.960471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293481490.mount: Deactivated successfully.
Nov 12 17:57:29.965796 containerd[1440]: time="2024-11-12T17:57:29.965725743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:57:29.967147 containerd[1440]: time="2024-11-12T17:57:29.967089667Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 17:57:29.968057 containerd[1440]: time="2024-11-12T17:57:29.967920920Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:57:29.968921 containerd[1440]: time="2024-11-12T17:57:29.968881205Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:57:29.969844 containerd[1440]: time="2024-11-12T17:57:29.969791220Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:57:29.969941 containerd[1440]: time="2024-11-12T17:57:29.969921731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 17:57:29.970610 containerd[1440]: time="2024-11-12T17:57:29.970584672Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Nov 12 17:57:29.975180 containerd[1440]: time="2024-11-12T17:57:29.974955126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:57:29.975831 containerd[1440]: time="2024-11-12T17:57:29.975804721Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 626.277282ms"
Nov 12 17:57:29.976594 containerd[1440]: time="2024-11-12T17:57:29.976549940Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 611.563072ms"
Nov 12 17:57:29.977271 containerd[1440]: time="2024-11-12T17:57:29.977242891Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 608.295202ms"
Nov 12 17:57:30.125952 containerd[1440]: time="2024-11-12T17:57:30.125743970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:57:30.125952 containerd[1440]: time="2024-11-12T17:57:30.125795126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:57:30.125952 containerd[1440]: time="2024-11-12T17:57:30.125810872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:57:30.125952 containerd[1440]: time="2024-11-12T17:57:30.125878533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:57:30.126313 containerd[1440]: time="2024-11-12T17:57:30.126256884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:57:30.126313 containerd[1440]: time="2024-11-12T17:57:30.126293612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:57:30.126313 containerd[1440]: time="2024-11-12T17:57:30.126304163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:57:30.126411 containerd[1440]: time="2024-11-12T17:57:30.126361633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:57:30.130728 containerd[1440]: time="2024-11-12T17:57:30.130650222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:57:30.130728 containerd[1440]: time="2024-11-12T17:57:30.130703816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:57:30.130728 containerd[1440]: time="2024-11-12T17:57:30.130718923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:57:30.130959 containerd[1440]: time="2024-11-12T17:57:30.130781548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:57:30.144413 systemd[1]: Started cri-containerd-7d44a1a3954b9d5282f9863eb1d63e84b6266f53bcf4b7efd627ed1a59445ec9.scope - libcontainer container 7d44a1a3954b9d5282f9863eb1d63e84b6266f53bcf4b7efd627ed1a59445ec9.
Nov 12 17:57:30.149424 systemd[1]: Started cri-containerd-682ef5252b8223f2a2934b7c975cfd5572ef719dddee7893c4f109078da25128.scope - libcontainer container 682ef5252b8223f2a2934b7c975cfd5572ef719dddee7893c4f109078da25128.
Nov 12 17:57:30.151111 systemd[1]: Started cri-containerd-95d0d09ba26fa48e09699ba7b89b56a13360d91243e3b04af3c770be9f6a4e47.scope - libcontainer container 95d0d09ba26fa48e09699ba7b89b56a13360d91243e3b04af3c770be9f6a4e47.
Nov 12 17:57:30.172623 containerd[1440]: time="2024-11-12T17:57:30.172470042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8c596e283afcc45ee8ff2081cc599b92,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d44a1a3954b9d5282f9863eb1d63e84b6266f53bcf4b7efd627ed1a59445ec9\""
Nov 12 17:57:30.173989 kubelet[2192]: E1112 17:57:30.173959 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:30.177305 containerd[1440]: time="2024-11-12T17:57:30.177188298Z" level=info msg="CreateContainer within sandbox \"7d44a1a3954b9d5282f9863eb1d63e84b6266f53bcf4b7efd627ed1a59445ec9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 12 17:57:30.187577 containerd[1440]: time="2024-11-12T17:57:30.187316367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:35a50a3f0f14abbdd3fae477f39e6e18,Namespace:kube-system,Attempt:0,} returns sandbox id \"95d0d09ba26fa48e09699ba7b89b56a13360d91243e3b04af3c770be9f6a4e47\""
Nov 12 17:57:30.188145 kubelet[2192]: E1112 17:57:30.188116 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:30.190960 containerd[1440]: time="2024-11-12T17:57:30.190930823Z" level=info msg="CreateContainer within sandbox \"95d0d09ba26fa48e09699ba7b89b56a13360d91243e3b04af3c770be9f6a4e47\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 12 17:57:30.191785 containerd[1440]: time="2024-11-12T17:57:30.191617426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c95384ce7f39fb5cff38cd36dacf8a69,Namespace:kube-system,Attempt:0,} returns sandbox id \"682ef5252b8223f2a2934b7c975cfd5572ef719dddee7893c4f109078da25128\""
Nov 12 17:57:30.192347 kubelet[2192]: E1112 17:57:30.192320 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:30.193828 containerd[1440]: time="2024-11-12T17:57:30.193790815Z" level=info msg="CreateContainer within sandbox \"682ef5252b8223f2a2934b7c975cfd5572ef719dddee7893c4f109078da25128\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 12 17:57:30.200925 containerd[1440]: time="2024-11-12T17:57:30.200874133Z" level=info msg="CreateContainer within sandbox \"7d44a1a3954b9d5282f9863eb1d63e84b6266f53bcf4b7efd627ed1a59445ec9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e162444029fae0cedcdafe8dd3edff830d2e42ebd8416794db922d317abc2d8b\""
Nov 12 17:57:30.201700 containerd[1440]: time="2024-11-12T17:57:30.201608614Z" level=info msg="StartContainer for \"e162444029fae0cedcdafe8dd3edff830d2e42ebd8416794db922d317abc2d8b\""
Nov 12 17:57:30.205622 containerd[1440]: time="2024-11-12T17:57:30.205526766Z" level=info msg="CreateContainer within sandbox \"95d0d09ba26fa48e09699ba7b89b56a13360d91243e3b04af3c770be9f6a4e47\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"925d696c29a5bc2758ceb898cce07b1e94b20c001b10b9566f2b48b47f37c19a\""
Nov 12 17:57:30.205935 containerd[1440]: time="2024-11-12T17:57:30.205913110Z" level=info msg="StartContainer for \"925d696c29a5bc2758ceb898cce07b1e94b20c001b10b9566f2b48b47f37c19a\""
Nov 12 17:57:30.208129 containerd[1440]: time="2024-11-12T17:57:30.208097889Z" level=info msg="CreateContainer within sandbox \"682ef5252b8223f2a2934b7c975cfd5572ef719dddee7893c4f109078da25128\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e9e7eb4cf19e8be8f8a1b2cd4a1e389e6e5cb5604a2d9c52525738e3fc717e60\""
Nov 12 17:57:30.209532 containerd[1440]: time="2024-11-12T17:57:30.208516405Z" level=info msg="StartContainer for \"e9e7eb4cf19e8be8f8a1b2cd4a1e389e6e5cb5604a2d9c52525738e3fc717e60\""
Nov 12 17:57:30.209605 kubelet[2192]: W1112 17:57:30.209465 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
Nov 12 17:57:30.209605 kubelet[2192]: E1112 17:57:30.209524 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
Nov 12 17:57:30.233331 systemd[1]: Started cri-containerd-e162444029fae0cedcdafe8dd3edff830d2e42ebd8416794db922d317abc2d8b.scope - libcontainer container e162444029fae0cedcdafe8dd3edff830d2e42ebd8416794db922d317abc2d8b.
Nov 12 17:57:30.238940 systemd[1]: Started cri-containerd-925d696c29a5bc2758ceb898cce07b1e94b20c001b10b9566f2b48b47f37c19a.scope - libcontainer container 925d696c29a5bc2758ceb898cce07b1e94b20c001b10b9566f2b48b47f37c19a.
Nov 12 17:57:30.242841 systemd[1]: Started cri-containerd-e9e7eb4cf19e8be8f8a1b2cd4a1e389e6e5cb5604a2d9c52525738e3fc717e60.scope - libcontainer container e9e7eb4cf19e8be8f8a1b2cd4a1e389e6e5cb5604a2d9c52525738e3fc717e60.
Nov 12 17:57:30.273413 containerd[1440]: time="2024-11-12T17:57:30.273329743Z" level=info msg="StartContainer for \"e162444029fae0cedcdafe8dd3edff830d2e42ebd8416794db922d317abc2d8b\" returns successfully"
Nov 12 17:57:30.284185 containerd[1440]: time="2024-11-12T17:57:30.284132865Z" level=info msg="StartContainer for \"e9e7eb4cf19e8be8f8a1b2cd4a1e389e6e5cb5604a2d9c52525738e3fc717e60\" returns successfully"
Nov 12 17:57:30.314828 containerd[1440]: time="2024-11-12T17:57:30.314796150Z" level=info msg="StartContainer for \"925d696c29a5bc2758ceb898cce07b1e94b20c001b10b9566f2b48b47f37c19a\" returns successfully"
Nov 12 17:57:30.320277 kubelet[2192]: E1112 17:57:30.319998 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="1.6s"
Nov 12 17:57:30.400718 kubelet[2192]: W1112 17:57:30.400009 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
Nov 12 17:57:30.400718 kubelet[2192]: E1112 17:57:30.400076 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
Nov 12 17:57:30.420793 kubelet[2192]: I1112 17:57:30.420715 2192 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 17:57:30.421008 kubelet[2192]: E1112 17:57:30.420981 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost"
Nov 12 17:57:30.936196 kubelet[2192]: E1112 17:57:30.934053 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:30.936196 kubelet[2192]: E1112 17:57:30.935800 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:30.937651 kubelet[2192]: E1112 17:57:30.937631 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:31.925971 kubelet[2192]: E1112 17:57:31.925928 2192 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 12 17:57:31.938597 kubelet[2192]: E1112 17:57:31.938559 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:32.022654 kubelet[2192]: I1112 17:57:32.022590 2192 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 17:57:32.031096 kubelet[2192]: I1112 17:57:32.030941 2192 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Nov 12 17:57:32.046183 kubelet[2192]: E1112 17:57:32.044074 2192 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 17:57:32.146906 kubelet[2192]: E1112 17:57:32.146864 2192 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 17:57:32.247425 kubelet[2192]: E1112 17:57:32.247319 2192 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 17:57:32.347861 kubelet[2192]: E1112 17:57:32.347815 2192 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 17:57:32.448701 kubelet[2192]: E1112 17:57:32.448645 2192 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 17:57:32.885765 kubelet[2192]: E1112 17:57:32.885725 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:32.904271 kubelet[2192]: I1112 17:57:32.904236 2192 apiserver.go:52] "Watching apiserver"
Nov 12 17:57:32.914271 kubelet[2192]: I1112 17:57:32.914237 2192 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Nov 12 17:57:32.940647 kubelet[2192]: E1112 17:57:32.940494 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:33.273576 kubelet[2192]: E1112 17:57:33.273502 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:33.781370 systemd[1]: Reloading requested from client PID 2469 ('systemctl') (unit session-7.scope)...
Nov 12 17:57:33.781659 systemd[1]: Reloading...
Nov 12 17:57:33.846198 zram_generator::config[2509]: No configuration found.
Nov 12 17:57:33.935141 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 17:57:33.942355 kubelet[2192]: E1112 17:57:33.942203 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:34.001671 systemd[1]: Reloading finished in 219 ms.
Nov 12 17:57:34.036949 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:57:34.050977 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 17:57:34.051220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:57:34.059439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:57:34.145425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:57:34.149981 (kubelet)[2550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 17:57:34.193893 kubelet[2550]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 17:57:34.193893 kubelet[2550]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 17:57:34.193893 kubelet[2550]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 17:57:34.194358 kubelet[2550]: I1112 17:57:34.193954 2550 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 17:57:34.198632 kubelet[2550]: I1112 17:57:34.198593 2550 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Nov 12 17:57:34.198632 kubelet[2550]: I1112 17:57:34.198619 2550 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 17:57:34.198802 kubelet[2550]: I1112 17:57:34.198785 2550 server.go:927] "Client rotation is on, will bootstrap in background"
Nov 12 17:57:34.200092 kubelet[2550]: I1112 17:57:34.200076 2550 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 12 17:57:34.201396 kubelet[2550]: I1112 17:57:34.201272 2550 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 17:57:34.205994 kubelet[2550]: I1112 17:57:34.205958 2550 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 17:57:34.206321 kubelet[2550]: I1112 17:57:34.206297 2550 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 17:57:34.206554 kubelet[2550]: I1112 17:57:34.206393 2550 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 12 17:57:34.206919 kubelet[2550]: I1112 17:57:34.206665 2550 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 17:57:34.206919 kubelet[2550]: I1112 17:57:34.206681 2550 container_manager_linux.go:301] "Creating device plugin manager"
Nov 12 17:57:34.206919 kubelet[2550]: I1112 17:57:34.206715 2550 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 17:57:34.206919 kubelet[2550]: I1112 17:57:34.206812 2550 kubelet.go:400] "Attempting to sync node with API server"
Nov 12 17:57:34.206919 kubelet[2550]: I1112 17:57:34.206828 2550 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 17:57:34.206919 kubelet[2550]: I1112 17:57:34.206853 2550 kubelet.go:312] "Adding apiserver pod source"
Nov 12 17:57:34.206919 kubelet[2550]: I1112 17:57:34.206865 2550 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 17:57:34.207658 kubelet[2550]: I1112 17:57:34.207639 2550 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 12 17:57:34.209814 kubelet[2550]: I1112 17:57:34.208380 2550 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 17:57:34.209814 kubelet[2550]: I1112 17:57:34.208757 2550 server.go:1264] "Started kubelet"
Nov 12 17:57:34.211697 kubelet[2550]: I1112 17:57:34.211668 2550 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 17:57:34.212237 kubelet[2550]: I1112 17:57:34.212211 2550 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 17:57:34.213341 kubelet[2550]: I1112 17:57:34.213317 2550 server.go:455] "Adding debug handlers to kubelet server"
Nov 12 17:57:34.218207 kubelet[2550]: I1112 17:57:34.214185 2550 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 12 17:57:34.218207 kubelet[2550]: I1112 17:57:34.214343 2550 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 17:57:34.218207 kubelet[2550]: I1112 17:57:34.214511 2550 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 17:57:34.218207 kubelet[2550]: I1112 17:57:34.214707 2550 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Nov 12 17:57:34.218207 kubelet[2550]: I1112 17:57:34.214869 2550 reconciler.go:26] "Reconciler: start to sync state"
Nov 12 17:57:34.218207 kubelet[2550]: I1112 17:57:34.216844 2550 factory.go:221] Registration of the systemd container factory successfully
Nov 12 17:57:34.218207 kubelet[2550]: I1112 17:57:34.217308 2550 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 17:57:34.218836 kubelet[2550]: I1112 17:57:34.218802 2550 factory.go:221] Registration of the containerd container factory successfully
Nov 12 17:57:34.241563 kubelet[2550]: I1112 17:57:34.241510 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 17:57:34.242669 kubelet[2550]: I1112 17:57:34.242650 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 17:57:34.242744 kubelet[2550]: I1112 17:57:34.242689 2550 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 17:57:34.242744 kubelet[2550]: I1112 17:57:34.242705 2550 kubelet.go:2337] "Starting kubelet main sync loop"
Nov 12 17:57:34.242790 kubelet[2550]: E1112 17:57:34.242744 2550 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 17:57:34.259687 kubelet[2550]: I1112 17:57:34.259653 2550 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 17:57:34.259687 kubelet[2550]: I1112 17:57:34.259667 2550 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 17:57:34.259687 kubelet[2550]: I1112 17:57:34.259684 2550 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 17:57:34.259828 kubelet[2550]: I1112 17:57:34.259819 2550 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 12 17:57:34.259852 kubelet[2550]: I1112 17:57:34.259829 2550 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 12 17:57:34.259852 kubelet[2550]: I1112 17:57:34.259845 2550 policy_none.go:49] "None policy: Start"
Nov 12 17:57:34.260447 kubelet[2550]: I1112 17:57:34.260425 2550 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 17:57:34.260447 kubelet[2550]: I1112 17:57:34.260449 2550 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 17:57:34.260587 kubelet[2550]: I1112 17:57:34.260571 2550 state_mem.go:75] "Updated machine memory state"
Nov 12 17:57:34.264281 kubelet[2550]: I1112 17:57:34.264259 2550 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 17:57:34.264738 kubelet[2550]: I1112 17:57:34.264574 2550 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 12 17:57:34.264738 kubelet[2550]: I1112 17:57:34.264674 2550 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 17:57:34.318350 kubelet[2550]: I1112 17:57:34.318251 2550 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 17:57:34.325273 kubelet[2550]: I1112 17:57:34.325240 2550 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Nov 12 17:57:34.325408 kubelet[2550]: I1112 17:57:34.325324 2550 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Nov 12 17:57:34.343099 kubelet[2550]: I1112 17:57:34.343056 2550 topology_manager.go:215] "Topology Admit Handler" podUID="c95384ce7f39fb5cff38cd36dacf8a69" podNamespace="kube-system" podName="kube-scheduler-localhost"
Nov 12 17:57:34.343254 kubelet[2550]: I1112 17:57:34.343183 2550 topology_manager.go:215] "Topology Admit Handler" podUID="8c596e283afcc45ee8ff2081cc599b92" podNamespace="kube-system" podName="kube-apiserver-localhost"
Nov 12 17:57:34.343254 kubelet[2550]: I1112 17:57:34.343220 2550 topology_manager.go:215] "Topology Admit Handler" podUID="35a50a3f0f14abbdd3fae477f39e6e18" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Nov 12 17:57:34.349469 kubelet[2550]: E1112 17:57:34.349375 2550 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 12 17:57:34.349802 kubelet[2550]: E1112 17:57:34.349777 2550 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:57:34.516072 kubelet[2550]: I1112 17:57:34.516024 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c95384ce7f39fb5cff38cd36dacf8a69-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c95384ce7f39fb5cff38cd36dacf8a69\") " pod="kube-system/kube-scheduler-localhost"
Nov 12 17:57:34.516072 kubelet[2550]: I1112 17:57:34.516067 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c596e283afcc45ee8ff2081cc599b92-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c596e283afcc45ee8ff2081cc599b92\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 17:57:34.516346 kubelet[2550]: I1112 17:57:34.516095 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:57:34.516346 kubelet[2550]: I1112 17:57:34.516112 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:57:34.516346 kubelet[2550]: I1112 17:57:34.516130 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c596e283afcc45ee8ff2081cc599b92-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c596e283afcc45ee8ff2081cc599b92\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 17:57:34.516346 kubelet[2550]: I1112 17:57:34.516147 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c596e283afcc45ee8ff2081cc599b92-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8c596e283afcc45ee8ff2081cc599b92\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 17:57:34.516346 kubelet[2550]: I1112 17:57:34.516186 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:57:34.516460 kubelet[2550]: I1112 17:57:34.516203 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:57:34.516460 kubelet[2550]: I1112 17:57:34.516221 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35a50a3f0f14abbdd3fae477f39e6e18-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"35a50a3f0f14abbdd3fae477f39e6e18\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:57:34.650371 kubelet[2550]: E1112 17:57:34.650211 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:34.650491 kubelet[2550]: E1112 17:57:34.650370 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:34.650547 kubelet[2550]: E1112 17:57:34.650520 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:35.208272 kubelet[2550]: I1112 17:57:35.208183 2550 apiserver.go:52] "Watching apiserver"
Nov 12 17:57:35.214964 kubelet[2550]: I1112 17:57:35.214926 2550 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Nov 12 17:57:35.253208 kubelet[2550]: E1112 17:57:35.252999 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:35.253208 kubelet[2550]: E1112 17:57:35.253098 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:35.254336 kubelet[2550]: E1112 17:57:35.254307 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:35.277448 kubelet[2550]: I1112 17:57:35.277362 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.277348458 podStartE2EDuration="3.277348458s" podCreationTimestamp="2024-11-12 17:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:57:35.277080177 +0000 UTC m=+1.124186778" watchObservedRunningTime="2024-11-12 17:57:35.277348458 +0000 UTC m=+1.124455059"
Nov 12 17:57:35.294069 kubelet[2550]: I1112 17:57:35.293868 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.293849538 podStartE2EDuration="2.293849538s" podCreationTimestamp="2024-11-12 17:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:57:35.286430687 +0000 UTC m=+1.133537288" watchObservedRunningTime="2024-11-12 17:57:35.293849538 +0000 UTC m=+1.140956099"
Nov 12 17:57:35.302528 kubelet[2550]: I1112 17:57:35.302481 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.302466215 podStartE2EDuration="1.302466215s" podCreationTimestamp="2024-11-12 17:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:57:35.294028858 +0000 UTC m=+1.141135419" watchObservedRunningTime="2024-11-12 17:57:35.302466215 +0000 UTC m=+1.149572816"
Nov 12 17:57:36.257026 kubelet[2550]: E1112 17:57:36.256982 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:39.017803 sudo[1618]: pam_unix(sudo:session): session closed for user root
Nov 12 17:57:39.022230 sshd[1615]: pam_unix(sshd:session): session closed for user core
Nov 12 17:57:39.026052 systemd[1]: sshd@6-10.0.0.106:22-10.0.0.1:36032.service: Deactivated successfully.
Nov 12 17:57:39.027622 systemd[1]: session-7.scope: Deactivated successfully.
Nov 12 17:57:39.027765 systemd[1]: session-7.scope: Consumed 5.820s CPU time, 189.9M memory peak, 0B memory swap peak.
Nov 12 17:57:39.028263 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit.
Nov 12 17:57:39.029097 systemd-logind[1424]: Removed session 7.
Nov 12 17:57:40.458479 kubelet[2550]: E1112 17:57:40.458197 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:41.265433 kubelet[2550]: E1112 17:57:41.265377 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:42.063589 kubelet[2550]: E1112 17:57:42.063554 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:42.267192 kubelet[2550]: E1112 17:57:42.266846 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:42.267192 kubelet[2550]: E1112 17:57:42.267115 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:42.989448 kubelet[2550]: E1112 17:57:42.989417 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:43.268752 kubelet[2550]: E1112 17:57:43.268713 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:47.933598 kubelet[2550]: I1112 17:57:47.933489 2550 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 12 17:57:47.945502 containerd[1440]: time="2024-11-12T17:57:47.945434869Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
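[Annotation: the three "Flag ... has been deprecated" warnings logged at 17:57:34 above all point at the kubelet's --config file. As a hedged illustration only (this node's actual unit files and config are not shown in the log), a minimal KubeletConfiguration covering the two flags that have config-file equivalents might look like the sketch below; --pod-infra-container-image has no equivalent field and is simply being removed in favor of CRI-side handling, as the second warning itself says. The file path and socket endpoint are assumptions, not values taken from this system:

  # hypothetical /etc/kubernetes/kubelet.yaml, passed to the kubelet via --config
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # replaces the deprecated --container-runtime-endpoint flag (kubelet >= 1.27)
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  # replaces the deprecated --volume-plugin-dir flag; path matches the
  # FlexVolume probe directory seen later in this log
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/]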
Nov 12 17:57:47.945789 kubelet[2550]: I1112 17:57:47.945668 2550 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 12 17:57:49.001763 kubelet[2550]: I1112 17:57:49.001520 2550 topology_manager.go:215] "Topology Admit Handler" podUID="f989ba0f-3c77-4143-8f46-7ce7fc889f54" podNamespace="kube-system" podName="kube-proxy-ts794"
Nov 12 17:57:49.015973 kubelet[2550]: I1112 17:57:49.015878 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f989ba0f-3c77-4143-8f46-7ce7fc889f54-kube-proxy\") pod \"kube-proxy-ts794\" (UID: \"f989ba0f-3c77-4143-8f46-7ce7fc889f54\") " pod="kube-system/kube-proxy-ts794"
Nov 12 17:57:49.015973 kubelet[2550]: I1112 17:57:49.015916 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gpmp\" (UniqueName: \"kubernetes.io/projected/f989ba0f-3c77-4143-8f46-7ce7fc889f54-kube-api-access-8gpmp\") pod \"kube-proxy-ts794\" (UID: \"f989ba0f-3c77-4143-8f46-7ce7fc889f54\") " pod="kube-system/kube-proxy-ts794"
Nov 12 17:57:49.015973 kubelet[2550]: I1112 17:57:49.015939 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f989ba0f-3c77-4143-8f46-7ce7fc889f54-xtables-lock\") pod \"kube-proxy-ts794\" (UID: \"f989ba0f-3c77-4143-8f46-7ce7fc889f54\") " pod="kube-system/kube-proxy-ts794"
Nov 12 17:57:49.016454 kubelet[2550]: I1112 17:57:49.015986 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f989ba0f-3c77-4143-8f46-7ce7fc889f54-lib-modules\") pod \"kube-proxy-ts794\" (UID: \"f989ba0f-3c77-4143-8f46-7ce7fc889f54\") " pod="kube-system/kube-proxy-ts794"
Nov 12 17:57:49.018006 systemd[1]: Created slice kubepods-besteffort-podf989ba0f_3c77_4143_8f46_7ce7fc889f54.slice - libcontainer container kubepods-besteffort-podf989ba0f_3c77_4143_8f46_7ce7fc889f54.slice.
Nov 12 17:57:49.114875 kubelet[2550]: I1112 17:57:49.114061 2550 topology_manager.go:215] "Topology Admit Handler" podUID="a84b6dcd-b09b-4e46-95af-29f3519a8b61" podNamespace="tigera-operator" podName="tigera-operator-5645cfc98-dbrc2"
Nov 12 17:57:49.127375 systemd[1]: Created slice kubepods-besteffort-poda84b6dcd_b09b_4e46_95af_29f3519a8b61.slice - libcontainer container kubepods-besteffort-poda84b6dcd_b09b_4e46_95af_29f3519a8b61.slice.
Nov 12 17:57:49.217523 kubelet[2550]: I1112 17:57:49.217471 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a84b6dcd-b09b-4e46-95af-29f3519a8b61-var-lib-calico\") pod \"tigera-operator-5645cfc98-dbrc2\" (UID: \"a84b6dcd-b09b-4e46-95af-29f3519a8b61\") " pod="tigera-operator/tigera-operator-5645cfc98-dbrc2"
Nov 12 17:57:49.217523 kubelet[2550]: I1112 17:57:49.217525 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8pjk\" (UniqueName: \"kubernetes.io/projected/a84b6dcd-b09b-4e46-95af-29f3519a8b61-kube-api-access-j8pjk\") pod \"tigera-operator-5645cfc98-dbrc2\" (UID: \"a84b6dcd-b09b-4e46-95af-29f3519a8b61\") " pod="tigera-operator/tigera-operator-5645cfc98-dbrc2"
Nov 12 17:57:49.331436 kubelet[2550]: E1112 17:57:49.331383 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:49.338133 containerd[1440]: time="2024-11-12T17:57:49.338091060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ts794,Uid:f989ba0f-3c77-4143-8f46-7ce7fc889f54,Namespace:kube-system,Attempt:0,}"
Nov 12 17:57:49.363983 containerd[1440]: time="2024-11-12T17:57:49.363883320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:57:49.363983 containerd[1440]: time="2024-11-12T17:57:49.363940440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:57:49.363983 containerd[1440]: time="2024-11-12T17:57:49.363950560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:57:49.364764 containerd[1440]: time="2024-11-12T17:57:49.364095600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:57:49.387326 systemd[1]: Started cri-containerd-45ee893892d96476ea6eeeb1ba6edc47bf446d3cd76ff867042c0f0950d172fc.scope - libcontainer container 45ee893892d96476ea6eeeb1ba6edc47bf446d3cd76ff867042c0f0950d172fc.
Nov 12 17:57:49.403689 containerd[1440]: time="2024-11-12T17:57:49.403656314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ts794,Uid:f989ba0f-3c77-4143-8f46-7ce7fc889f54,Namespace:kube-system,Attempt:0,} returns sandbox id \"45ee893892d96476ea6eeeb1ba6edc47bf446d3cd76ff867042c0f0950d172fc\""
Nov 12 17:57:49.406675 kubelet[2550]: E1112 17:57:49.406652 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:49.413035 containerd[1440]: time="2024-11-12T17:57:49.412994790Z" level=info msg="CreateContainer within sandbox \"45ee893892d96476ea6eeeb1ba6edc47bf446d3cd76ff867042c0f0950d172fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 12 17:57:49.424441 containerd[1440]: time="2024-11-12T17:57:49.424398394Z" level=info msg="CreateContainer within sandbox \"45ee893892d96476ea6eeeb1ba6edc47bf446d3cd76ff867042c0f0950d172fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dd36b74aeed056ea1bddc77d579607079cc3046b84067638533bc2e4a40227f4\""
Nov 12 17:57:49.427946 containerd[1440]: time="2024-11-12T17:57:49.426928564Z" level=info msg="StartContainer for \"dd36b74aeed056ea1bddc77d579607079cc3046b84067638533bc2e4a40227f4\""
Nov 12 17:57:49.433987 containerd[1440]: time="2024-11-12T17:57:49.433954751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-dbrc2,Uid:a84b6dcd-b09b-4e46-95af-29f3519a8b61,Namespace:tigera-operator,Attempt:0,}"
Nov 12 17:57:49.453506 systemd[1]: Started cri-containerd-dd36b74aeed056ea1bddc77d579607079cc3046b84067638533bc2e4a40227f4.scope - libcontainer container dd36b74aeed056ea1bddc77d579607079cc3046b84067638533bc2e4a40227f4.
Nov 12 17:57:49.460703 containerd[1440]: time="2024-11-12T17:57:49.460614535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:57:49.460703 containerd[1440]: time="2024-11-12T17:57:49.460673015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:57:49.460703 containerd[1440]: time="2024-11-12T17:57:49.460688615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:57:49.460895 containerd[1440]: time="2024-11-12T17:57:49.460828696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:57:49.479342 systemd[1]: Started cri-containerd-0ae3d72e4506be129db5edd80d1f3dbeadd0cdd5d024fccc556ee5917c45fc19.scope - libcontainer container 0ae3d72e4506be129db5edd80d1f3dbeadd0cdd5d024fccc556ee5917c45fc19.
Nov 12 17:57:49.483002 containerd[1440]: time="2024-11-12T17:57:49.482962021Z" level=info msg="StartContainer for \"dd36b74aeed056ea1bddc77d579607079cc3046b84067638533bc2e4a40227f4\" returns successfully"
Nov 12 17:57:49.524948 containerd[1440]: time="2024-11-12T17:57:49.524896424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-dbrc2,Uid:a84b6dcd-b09b-4e46-95af-29f3519a8b61,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0ae3d72e4506be129db5edd80d1f3dbeadd0cdd5d024fccc556ee5917c45fc19\""
Nov 12 17:57:49.526788 containerd[1440]: time="2024-11-12T17:57:49.526572350Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\""
Nov 12 17:57:50.120257 update_engine[1426]: I20241112 17:57:50.120191 1426 update_attempter.cc:509] Updating boot flags...
Nov 12 17:57:50.137215 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2886)
Nov 12 17:57:50.283466 kubelet[2550]: E1112 17:57:50.283439 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:51.371101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2250476707.mount: Deactivated successfully.
Nov 12 17:57:51.698251 containerd[1440]: time="2024-11-12T17:57:51.698106877Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:57:51.700274 containerd[1440]: time="2024-11-12T17:57:51.700233245Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=19123657"
Nov 12 17:57:51.701982 containerd[1440]: time="2024-11-12T17:57:51.701944211Z" level=info msg="ImageCreate event name:\"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:57:51.704509 containerd[1440]: time="2024-11-12T17:57:51.704478420Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:57:51.705340 containerd[1440]: time="2024-11-12T17:57:51.705310823Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"19117824\" in 2.178685232s"
Nov 12 17:57:51.705395 containerd[1440]: time="2024-11-12T17:57:51.705345543Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\""
Nov 12 17:57:51.710004 containerd[1440]: time="2024-11-12T17:57:51.709797118Z" level=info msg="CreateContainer within sandbox \"0ae3d72e4506be129db5edd80d1f3dbeadd0cdd5d024fccc556ee5917c45fc19\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 12 17:57:51.723324 containerd[1440]: time="2024-11-12T17:57:51.722581883Z" level=info msg="CreateContainer within sandbox \"0ae3d72e4506be129db5edd80d1f3dbeadd0cdd5d024fccc556ee5917c45fc19\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"10471722b3312429a3a4ad52a8f2dbf935541c0c49414b88461686ed00076a1a\""
Nov 12 17:57:51.724571 containerd[1440]: time="2024-11-12T17:57:51.724538090Z" level=info msg="StartContainer for \"10471722b3312429a3a4ad52a8f2dbf935541c0c49414b88461686ed00076a1a\""
Nov 12 17:57:51.752465 systemd[1]: Started cri-containerd-10471722b3312429a3a4ad52a8f2dbf935541c0c49414b88461686ed00076a1a.scope - libcontainer container 10471722b3312429a3a4ad52a8f2dbf935541c0c49414b88461686ed00076a1a.
Nov 12 17:57:51.777668 containerd[1440]: time="2024-11-12T17:57:51.777620436Z" level=info msg="StartContainer for \"10471722b3312429a3a4ad52a8f2dbf935541c0c49414b88461686ed00076a1a\" returns successfully"
Nov 12 17:57:52.301842 kubelet[2550]: I1112 17:57:52.301661 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ts794" podStartSLOduration=4.301575581 podStartE2EDuration="4.301575581s" podCreationTimestamp="2024-11-12 17:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:57:50.291813942 +0000 UTC m=+16.138920543" watchObservedRunningTime="2024-11-12 17:57:52.301575581 +0000 UTC m=+18.148682142"
Nov 12 17:57:52.301842 kubelet[2550]: I1112 17:57:52.301781 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5645cfc98-dbrc2" podStartSLOduration=1.1203279 podStartE2EDuration="3.301776261s" podCreationTimestamp="2024-11-12 17:57:49 +0000 UTC" firstStartedPulling="2024-11-12 17:57:49.526081829 +0000 UTC m=+15.373188430" lastFinishedPulling="2024-11-12 17:57:51.70753023 +0000 UTC m=+17.554636791" observedRunningTime="2024-11-12 17:57:52.301536621 +0000 UTC m=+18.148643222" watchObservedRunningTime="2024-11-12 17:57:52.301776261 +0000 UTC m=+18.148882862"
Nov 12 17:57:55.380293 kubelet[2550]: I1112 17:57:55.380238 2550 topology_manager.go:215] "Topology Admit Handler" podUID="8b6bc5a6-757b-4f05-beac-7e2587e2dcfa" podNamespace="calico-system" podName="calico-typha-65c8f797cb-kdk7p"
Nov 12 17:57:55.391296 systemd[1]: Created slice kubepods-besteffort-pod8b6bc5a6_757b_4f05_beac_7e2587e2dcfa.slice - libcontainer container kubepods-besteffort-pod8b6bc5a6_757b_4f05_beac_7e2587e2dcfa.slice.
Nov 12 17:57:55.443930 kubelet[2550]: I1112 17:57:55.443872 2550 topology_manager.go:215] "Topology Admit Handler" podUID="e8116671-f90f-4611-8ec7-04bc238fe775" podNamespace="calico-system" podName="calico-node-xdnn6"
Nov 12 17:57:55.451322 systemd[1]: Created slice kubepods-besteffort-pode8116671_f90f_4611_8ec7_04bc238fe775.slice - libcontainer container kubepods-besteffort-pode8116671_f90f_4611_8ec7_04bc238fe775.slice.
Nov 12 17:57:55.466616 kubelet[2550]: I1112 17:57:55.466461 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b6bc5a6-757b-4f05-beac-7e2587e2dcfa-tigera-ca-bundle\") pod \"calico-typha-65c8f797cb-kdk7p\" (UID: \"8b6bc5a6-757b-4f05-beac-7e2587e2dcfa\") " pod="calico-system/calico-typha-65c8f797cb-kdk7p"
Nov 12 17:57:55.466616 kubelet[2550]: I1112 17:57:55.466507 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8116671-f90f-4611-8ec7-04bc238fe775-xtables-lock\") pod \"calico-node-xdnn6\" (UID: \"e8116671-f90f-4611-8ec7-04bc238fe775\") " pod="calico-system/calico-node-xdnn6"
Nov 12 17:57:55.466616 kubelet[2550]: I1112 17:57:55.466527 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e8116671-f90f-4611-8ec7-04bc238fe775-cni-net-dir\") pod \"calico-node-xdnn6\" (UID: \"e8116671-f90f-4611-8ec7-04bc238fe775\") " pod="calico-system/calico-node-xdnn6"
Nov 12 17:57:55.466616 kubelet[2550]: I1112 17:57:55.466544 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp2g4\" (UniqueName: \"kubernetes.io/projected/e8116671-f90f-4611-8ec7-04bc238fe775-kube-api-access-dp2g4\") pod \"calico-node-xdnn6\" (UID: \"e8116671-f90f-4611-8ec7-04bc238fe775\") " pod="calico-system/calico-node-xdnn6"
Nov 12 17:57:55.466616 kubelet[2550]: I1112 17:57:55.466565 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xdht\" (UniqueName: \"kubernetes.io/projected/8b6bc5a6-757b-4f05-beac-7e2587e2dcfa-kube-api-access-7xdht\") pod \"calico-typha-65c8f797cb-kdk7p\" (UID: \"8b6bc5a6-757b-4f05-beac-7e2587e2dcfa\") " pod="calico-system/calico-typha-65c8f797cb-kdk7p"
Nov 12 17:57:55.466858 kubelet[2550]: I1112 17:57:55.466581 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e8116671-f90f-4611-8ec7-04bc238fe775-policysync\") pod \"calico-node-xdnn6\" (UID: \"e8116671-f90f-4611-8ec7-04bc238fe775\") " pod="calico-system/calico-node-xdnn6"
Nov 12 17:57:55.466858 kubelet[2550]: I1112 17:57:55.466596 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e8116671-f90f-4611-8ec7-04bc238fe775-var-run-calico\") pod \"calico-node-xdnn6\" (UID: \"e8116671-f90f-4611-8ec7-04bc238fe775\") " pod="calico-system/calico-node-xdnn6"
Nov 12 17:57:55.466858 kubelet[2550]: I1112 17:57:55.466613 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e8116671-f90f-4611-8ec7-04bc238fe775-cni-bin-dir\") pod \"calico-node-xdnn6\" (UID: \"e8116671-f90f-4611-8ec7-04bc238fe775\") " pod="calico-system/calico-node-xdnn6"
Nov 12 17:57:55.466858 kubelet[2550]: I1112 17:57:55.466629 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8b6bc5a6-757b-4f05-beac-7e2587e2dcfa-typha-certs\") pod \"calico-typha-65c8f797cb-kdk7p\" (UID: \"8b6bc5a6-757b-4f05-beac-7e2587e2dcfa\") " pod="calico-system/calico-typha-65c8f797cb-kdk7p"
Nov 12 17:57:55.466858 kubelet[2550]: I1112 17:57:55.466645 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8116671-f90f-4611-8ec7-04bc238fe775-lib-modules\") pod \"calico-node-xdnn6\" (UID: \"e8116671-f90f-4611-8ec7-04bc238fe775\") " pod="calico-system/calico-node-xdnn6"
Nov 12 17:57:55.466977 kubelet[2550]: I1112 17:57:55.466661 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e8116671-f90f-4611-8ec7-04bc238fe775-flexvol-driver-host\") pod \"calico-node-xdnn6\" (UID: \"e8116671-f90f-4611-8ec7-04bc238fe775\") " pod="calico-system/calico-node-xdnn6"
Nov 12 17:57:55.466977 kubelet[2550]: I1112 17:57:55.466679 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8116671-f90f-4611-8ec7-04bc238fe775-tigera-ca-bundle\") pod \"calico-node-xdnn6\" (UID: \"e8116671-f90f-4611-8ec7-04bc238fe775\") " pod="calico-system/calico-node-xdnn6"
Nov 12 17:57:55.466977 kubelet[2550]: I1112 17:57:55.466696 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e8116671-f90f-4611-8ec7-04bc238fe775-var-lib-calico\") pod \"calico-node-xdnn6\" (UID: \"e8116671-f90f-4611-8ec7-04bc238fe775\") " pod="calico-system/calico-node-xdnn6"
Nov 12 17:57:55.466977 kubelet[2550]: I1112 17:57:55.466712 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e8116671-f90f-4611-8ec7-04bc238fe775-cni-log-dir\") pod \"calico-node-xdnn6\" (UID: \"e8116671-f90f-4611-8ec7-04bc238fe775\") " pod="calico-system/calico-node-xdnn6"
Nov 12 17:57:55.466977 kubelet[2550]: I1112 17:57:55.466731 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e8116671-f90f-4611-8ec7-04bc238fe775-node-certs\") pod \"calico-node-xdnn6\" (UID: \"e8116671-f90f-4611-8ec7-04bc238fe775\") " pod="calico-system/calico-node-xdnn6"
Nov 12 17:57:55.556086 kubelet[2550]: I1112 17:57:55.554758 2550 topology_manager.go:215] "Topology Admit Handler" podUID="ea587a6f-1412-4dff-ac23-3aab0de5e566" podNamespace="calico-system" podName="csi-node-driver-sdd25"
Nov 12 17:57:55.556086 kubelet[2550]: E1112 17:57:55.555067 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdd25" podUID="ea587a6f-1412-4dff-ac23-3aab0de5e566"
Nov 12 17:57:55.570735 kubelet[2550]: E1112 17:57:55.569757 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:57:55.570735 kubelet[2550]: W1112 17:57:55.569784 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:57:55.570735 kubelet[2550]: E1112 17:57:55.569821 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Nov 12 17:57:55.690518 kubelet[2550]: I1112 17:57:55.690486 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ea587a6f-1412-4dff-ac23-3aab0de5e566-socket-dir\") pod \"csi-node-driver-sdd25\" (UID: \"ea587a6f-1412-4dff-ac23-3aab0de5e566\") " pod="calico-system/csi-node-driver-sdd25" Nov 12 17:57:55.690789 kubelet[2550]: E1112 17:57:55.690660 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:57:55.690789 kubelet[2550]: W1112 17:57:55.690672 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:57:55.690789 kubelet[2550]: E1112 17:57:55.690688 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:57:55.690789 kubelet[2550]: I1112 17:57:55.690702 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea587a6f-1412-4dff-ac23-3aab0de5e566-kubelet-dir\") pod \"csi-node-driver-sdd25\" (UID: \"ea587a6f-1412-4dff-ac23-3aab0de5e566\") " pod="calico-system/csi-node-driver-sdd25" Nov 12 17:57:55.690942 kubelet[2550]: E1112 17:57:55.690875 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:57:55.690942 kubelet[2550]: W1112 17:57:55.690886 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:57:55.690942 kubelet[2550]: E1112 17:57:55.690900 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:57:55.690942 kubelet[2550]: I1112 17:57:55.690915 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66trs\" (UniqueName: \"kubernetes.io/projected/ea587a6f-1412-4dff-ac23-3aab0de5e566-kube-api-access-66trs\") pod \"csi-node-driver-sdd25\" (UID: \"ea587a6f-1412-4dff-ac23-3aab0de5e566\") " pod="calico-system/csi-node-driver-sdd25" Nov 12 17:57:55.691097 kubelet[2550]: E1112 17:57:55.691065 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:57:55.691097 kubelet[2550]: W1112 17:57:55.691074 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:57:55.691097 kubelet[2550]: E1112 17:57:55.691089 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:57:55.691222 kubelet[2550]: I1112 17:57:55.691102 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ea587a6f-1412-4dff-ac23-3aab0de5e566-varrun\") pod \"csi-node-driver-sdd25\" (UID: \"ea587a6f-1412-4dff-ac23-3aab0de5e566\") " pod="calico-system/csi-node-driver-sdd25" Nov 12 17:57:55.691328 kubelet[2550]: E1112 17:57:55.691310 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:57:55.691328 kubelet[2550]: W1112 17:57:55.691325 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:57:55.691381 kubelet[2550]: E1112 17:57:55.691342 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:57:55.691381 kubelet[2550]: I1112 17:57:55.691357 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ea587a6f-1412-4dff-ac23-3aab0de5e566-registration-dir\") pod \"csi-node-driver-sdd25\" (UID: \"ea587a6f-1412-4dff-ac23-3aab0de5e566\") " pod="calico-system/csi-node-driver-sdd25" Nov 12 17:57:55.691635 kubelet[2550]: E1112 17:57:55.691610 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:57:55.691635 kubelet[2550]: W1112 17:57:55.691623 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:57:55.691694 kubelet[2550]: E1112 17:57:55.691642 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:57:55.691800 kubelet[2550]: E1112 17:57:55.691790 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:57:55.691800 kubelet[2550]: W1112 17:57:55.691800 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:57:55.691912 kubelet[2550]: E1112 17:57:55.691859 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:57:55.691948 kubelet[2550]: E1112 17:57:55.691937 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:57:55.691948 kubelet[2550]: W1112 17:57:55.691944 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:57:55.692050 kubelet[2550]: E1112 17:57:55.692016 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:57:55.692104 kubelet[2550]: E1112 17:57:55.692093 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:57:55.692104 kubelet[2550]: W1112 17:57:55.692102 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:57:55.692220 kubelet[2550]: E1112 17:57:55.692157 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:57:55.692258 kubelet[2550]: E1112 17:57:55.692253 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:57:55.692316 kubelet[2550]: W1112 17:57:55.692261 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:57:55.692373 kubelet[2550]: E1112 17:57:55.692295 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:57:55.692430 kubelet[2550]: E1112 17:57:55.692420 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:57:55.692430 kubelet[2550]: W1112 17:57:55.692429 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:57:55.692527 kubelet[2550]: E1112 17:57:55.692472 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:57:55.692592 kubelet[2550]: E1112 17:57:55.692582 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:57:55.692592 kubelet[2550]: W1112 17:57:55.692592 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:57:55.692668 kubelet[2550]: E1112 17:57:55.692600 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:57:55.692798 kubelet[2550]: E1112 17:57:55.692771 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:57:55.692798 kubelet[2550]: W1112 17:57:55.692780 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:57:55.692798 kubelet[2550]: E1112 17:57:55.692787 2550 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
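The repeated driver-call failures above all have the same shape: the kubelet probes a FlexVolume driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, no executable exists there, the call yields empty output, and unmarshalling "" as JSON fails with "unexpected end of JSON input". As a minimal sketch of what the kubelet expects from that path (assuming only the documented FlexVolume contract, not any real driver shipped on this node):

    #!/usr/bin/env python3
    # Illustrative FlexVolume driver sketch: the kubelet execs "<driver> init"
    # and parses stdout as JSON; an empty stdout produces exactly the
    # "unexpected end of JSON input" errors seen in the log above.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Report success and declare that no separate attach step is needed.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        else:
            # Operations this driver does not implement must still emit valid JSON.
            print(json.dumps({"status": "Not supported"}))
        return 0

    if __name__ == "__main__":
        sys.exit(main())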
Nov 12 17:57:55.699305 kubelet[2550]: E1112 17:57:55.698940 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:55.700245 containerd[1440]: time="2024-11-12T17:57:55.700017929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65c8f797cb-kdk7p,Uid:8b6bc5a6-757b-4f05-beac-7e2587e2dcfa,Namespace:calico-system,Attempt:0,}"
Nov 12 17:57:55.746518 containerd[1440]: time="2024-11-12T17:57:55.746311382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:57:55.746518 containerd[1440]: time="2024-11-12T17:57:55.746371023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:57:55.746518 containerd[1440]: time="2024-11-12T17:57:55.746383423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:57:55.746518 containerd[1440]: time="2024-11-12T17:57:55.746471663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:57:55.754126 kubelet[2550]: E1112 17:57:55.754093 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:55.754739 containerd[1440]: time="2024-11-12T17:57:55.754588286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xdnn6,Uid:e8116671-f90f-4611-8ec7-04bc238fe775,Namespace:calico-system,Attempt:0,}"
Nov 12 17:57:55.771351 systemd[1]: Started cri-containerd-07be88d409ac96548519645f122acdf8f697f40188cb7ac675d0dfd5b82eb1f7.scope - libcontainer container 07be88d409ac96548519645f122acdf8f697f40188cb7ac675d0dfd5b82eb1f7.
Nov 12 17:57:55.787781 containerd[1440]: time="2024-11-12T17:57:55.787616422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:57:55.787903 containerd[1440]: time="2024-11-12T17:57:55.787761622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:57:55.787938 containerd[1440]: time="2024-11-12T17:57:55.787896622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:57:55.788794 containerd[1440]: time="2024-11-12T17:57:55.788722625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
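The recurring dns.go "Nameserver limits exceeded" warnings reflect the resolver's three-nameserver ceiling: the node's resolv.conf evidently lists more than three servers, and the kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8). A rough illustration of that truncation, assuming a glibc-style MAXNS of 3 (a sketch, not kubelet source):

    #!/usr/bin/env python3
    # Illustrative: why "some nameservers have been omitted". The glibc
    # resolver honours at most three nameservers, so anything past the
    # first three entries in resolv.conf is dropped.
    MAX_NAMESERVERS = 3  # assumption: mirrors the resolver's MAXNS limit

    def applied_nameservers(resolv_conf_text: str) -> list[str]:
        servers = [line.split()[1]
                   for line in resolv_conf_text.splitlines()
                   if line.startswith("nameserver") and len(line.split()) > 1]
        return servers[:MAX_NAMESERVERS]

    # A hypothetical resolv.conf with four servers: only the first three
    # survive, matching the "applied nameserver line" in the log above.
    conf = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
    assert applied_nameservers(conf) == ["1.1.1.1", "1.0.0.1", "8.8.8.8"]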
Nov 12 17:57:55.808741 systemd[1]: Started cri-containerd-0843eb8a96b2f2cf2109d160bec33b79eabc14b16bc691247f1cfa0ea26b8d1e.scope - libcontainer container 0843eb8a96b2f2cf2109d160bec33b79eabc14b16bc691247f1cfa0ea26b8d1e.
Nov 12 17:57:55.816179 containerd[1440]: time="2024-11-12T17:57:55.815744743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65c8f797cb-kdk7p,Uid:8b6bc5a6-757b-4f05-beac-7e2587e2dcfa,Namespace:calico-system,Attempt:0,} returns sandbox id \"07be88d409ac96548519645f122acdf8f697f40188cb7ac675d0dfd5b82eb1f7\""
Nov 12 17:57:55.816968 kubelet[2550]: E1112 17:57:55.816933 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:55.818375 containerd[1440]: time="2024-11-12T17:57:55.818340830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\""
Nov 12 17:57:55.834191 containerd[1440]: time="2024-11-12T17:57:55.834124236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xdnn6,Uid:e8116671-f90f-4611-8ec7-04bc238fe775,Namespace:calico-system,Attempt:0,} returns sandbox id \"0843eb8a96b2f2cf2109d160bec33b79eabc14b16bc691247f1cfa0ea26b8d1e\""
Nov 12 17:57:55.834815 kubelet[2550]: E1112 17:57:55.834791 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:57.209720 containerd[1440]: time="2024-11-12T17:57:57.209668060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:57:57.210496 containerd[1440]: time="2024-11-12T17:57:57.210460062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=27849584"
Nov 12 17:57:57.211322 containerd[1440]: time="2024-11-12T17:57:57.211288104Z" level=info msg="ImageCreate event name:\"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:57:57.213287 containerd[1440]: time="2024-11-12T17:57:57.213256630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:57:57.214183 containerd[1440]: time="2024-11-12T17:57:57.214041872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"29219212\" in 1.395664642s"
Nov 12 17:57:57.214183 containerd[1440]: time="2024-11-12T17:57:57.214077312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\""
Nov 12 17:57:57.215853 containerd[1440]: time="2024-11-12T17:57:57.215119555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\""
Nov 12 17:57:57.230780 containerd[1440]: time="2024-11-12T17:57:57.230740916Z" level=info msg="CreateContainer within sandbox \"07be88d409ac96548519645f122acdf8f697f40188cb7ac675d0dfd5b82eb1f7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 12 17:57:57.243756 kubelet[2550]: E1112 17:57:57.243646 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdd25" podUID="ea587a6f-1412-4dff-ac23-3aab0de5e566"
Nov 12 17:57:57.286426 containerd[1440]: time="2024-11-12T17:57:57.286375182Z" level=info msg="CreateContainer within sandbox \"07be88d409ac96548519645f122acdf8f697f40188cb7ac675d0dfd5b82eb1f7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"be0c49d8f360e30574e9df190aec5557b7ec0a0bf3095ba9fc022f63c0d5357d\""
Nov 12 17:57:57.287007 containerd[1440]: time="2024-11-12T17:57:57.286974824Z" level=info msg="StartContainer for \"be0c49d8f360e30574e9df190aec5557b7ec0a0bf3095ba9fc022f63c0d5357d\""
Nov 12 17:57:57.322328 systemd[1]: Started cri-containerd-be0c49d8f360e30574e9df190aec5557b7ec0a0bf3095ba9fc022f63c0d5357d.scope - libcontainer container be0c49d8f360e30574e9df190aec5557b7ec0a0bf3095ba9fc022f63c0d5357d.
Nov 12 17:57:57.352231 containerd[1440]: time="2024-11-12T17:57:57.352180435Z" level=info msg="StartContainer for \"be0c49d8f360e30574e9df190aec5557b7ec0a0bf3095ba9fc022f63c0d5357d\" returns successfully"
Nov 12 17:57:58.212115 containerd[1440]: time="2024-11-12T17:57:58.212052713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:57:58.212775 containerd[1440]: time="2024-11-12T17:57:58.212728275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5117816"
Nov 12 17:57:58.213483 containerd[1440]: time="2024-11-12T17:57:58.213448597Z" level=info msg="ImageCreate event name:\"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:57:58.219328 containerd[1440]: time="2024-11-12T17:57:58.219284771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:57:58.220072 containerd[1440]: time="2024-11-12T17:57:58.220019293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6487412\" in 1.004861218s"
Nov 12 17:57:58.220072 containerd[1440]: time="2024-11-12T17:57:58.220059973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\""
Nov 12 17:57:58.223825 containerd[1440]: time="2024-11-12T17:57:58.223792063Z" level=info msg="CreateContainer within sandbox \"0843eb8a96b2f2cf2109d160bec33b79eabc14b16bc691247f1cfa0ea26b8d1e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 12 17:57:58.234697 containerd[1440]: time="2024-11-12T17:57:58.234662850Z" level=info msg="CreateContainer within sandbox \"0843eb8a96b2f2cf2109d160bec33b79eabc14b16bc691247f1cfa0ea26b8d1e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9db78fef392636252ad0a52d6ac0287bfb5d709f6088c4c2e4621cadceabaaa6\""
Nov 12 17:57:58.236001 containerd[1440]: time="2024-11-12T17:57:58.235065411Z" level=info msg="StartContainer for \"9db78fef392636252ad0a52d6ac0287bfb5d709f6088c4c2e4621cadceabaaa6\""
Nov 12 17:57:58.271551 systemd[1]: Started cri-containerd-9db78fef392636252ad0a52d6ac0287bfb5d709f6088c4c2e4621cadceabaaa6.scope - libcontainer container 9db78fef392636252ad0a52d6ac0287bfb5d709f6088c4c2e4621cadceabaaa6.
Nov 12 17:57:58.300629 containerd[1440]: time="2024-11-12T17:57:58.300589496Z" level=info msg="StartContainer for \"9db78fef392636252ad0a52d6ac0287bfb5d709f6088c4c2e4621cadceabaaa6\" returns successfully"
Nov 12 17:57:58.311455 kubelet[2550]: E1112 17:57:58.309951 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:58.311455 kubelet[2550]: E1112 17:57:58.310775 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:58.321205 kubelet[2550]: I1112 17:57:58.321113 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65c8f797cb-kdk7p" podStartSLOduration=1.924113383 podStartE2EDuration="3.321098068s" podCreationTimestamp="2024-11-12 17:57:55 +0000 UTC" firstStartedPulling="2024-11-12 17:57:55.817930669 +0000 UTC m=+21.665037270" lastFinishedPulling="2024-11-12 17:57:57.214915394 +0000 UTC m=+23.062021955" observedRunningTime="2024-11-12 17:57:58.320512466 +0000 UTC m=+24.167619067" watchObservedRunningTime="2024-11-12 17:57:58.321098068 +0000 UTC m=+24.168204629"
Nov 12 17:57:58.331363 systemd[1]: cri-containerd-9db78fef392636252ad0a52d6ac0287bfb5d709f6088c4c2e4621cadceabaaa6.scope: Deactivated successfully.
Nov 12 17:57:58.373934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9db78fef392636252ad0a52d6ac0287bfb5d709f6088c4c2e4621cadceabaaa6-rootfs.mount: Deactivated successfully.
Nov 12 17:57:58.419350 containerd[1440]: time="2024-11-12T17:57:58.419290275Z" level=info msg="shim disconnected" id=9db78fef392636252ad0a52d6ac0287bfb5d709f6088c4c2e4621cadceabaaa6 namespace=k8s.io
Nov 12 17:57:58.419350 containerd[1440]: time="2024-11-12T17:57:58.419346715Z" level=warning msg="cleaning up after shim disconnected" id=9db78fef392636252ad0a52d6ac0287bfb5d709f6088c4c2e4621cadceabaaa6 namespace=k8s.io
Nov 12 17:57:58.419350 containerd[1440]: time="2024-11-12T17:57:58.419355395Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:57:59.243771 kubelet[2550]: E1112 17:57:59.243664 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdd25" podUID="ea587a6f-1412-4dff-ac23-3aab0de5e566"
Nov 12 17:57:59.315181 kubelet[2550]: E1112 17:57:59.314703 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:57:59.316577 containerd[1440]: time="2024-11-12T17:57:59.316199217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\""
Nov 12 17:57:59.316997 kubelet[2550]: I1112 17:57:59.316615 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 17:57:59.317391 kubelet[2550]: E1112 17:57:59.317361 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:58:01.243582 kubelet[2550]: E1112 17:58:01.243526 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdd25" podUID="ea587a6f-1412-4dff-ac23-3aab0de5e566"
Nov 12 17:58:03.243738 kubelet[2550]: E1112 17:58:03.243679 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdd25" podUID="ea587a6f-1412-4dff-ac23-3aab0de5e566"
Nov 12 17:58:03.505237 containerd[1440]: time="2024-11-12T17:58:03.505153573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:58:03.506186 containerd[1440]: time="2024-11-12T17:58:03.505947775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=89700517"
Nov 12 17:58:03.506880 containerd[1440]: time="2024-11-12T17:58:03.506848297Z" level=info msg="ImageCreate event name:\"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:58:03.509409 containerd[1440]: time="2024-11-12T17:58:03.509362702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:58:03.510356 containerd[1440]: time="2024-11-12T17:58:03.510228903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"91070153\" in 4.193990046s"
Nov 12 17:58:03.510356 containerd[1440]: time="2024-11-12T17:58:03.510263103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\""
Nov 12 17:58:03.517226 containerd[1440]: time="2024-11-12T17:58:03.517184038Z" level=info msg="CreateContainer within sandbox \"0843eb8a96b2f2cf2109d160bec33b79eabc14b16bc691247f1cfa0ea26b8d1e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 12 17:58:03.529478 containerd[1440]: time="2024-11-12T17:58:03.529371502Z" level=info msg="CreateContainer within sandbox \"0843eb8a96b2f2cf2109d160bec33b79eabc14b16bc691247f1cfa0ea26b8d1e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d442bee9786d12457899d7b078e70ae53aebba1d1dc8f1040d99a8ee000d117a\""
Nov 12 17:58:03.529883 containerd[1440]: time="2024-11-12T17:58:03.529856983Z" level=info msg="StartContainer for \"d442bee9786d12457899d7b078e70ae53aebba1d1dc8f1040d99a8ee000d117a\""
Nov 12 17:58:03.570577 systemd[1]: Started cri-containerd-d442bee9786d12457899d7b078e70ae53aebba1d1dc8f1040d99a8ee000d117a.scope - libcontainer container d442bee9786d12457899d7b078e70ae53aebba1d1dc8f1040d99a8ee000d117a.
Nov 12 17:58:03.594602 containerd[1440]: time="2024-11-12T17:58:03.594555355Z" level=info msg="StartContainer for \"d442bee9786d12457899d7b078e70ae53aebba1d1dc8f1040d99a8ee000d117a\" returns successfully"
Nov 12 17:58:04.192817 systemd[1]: cri-containerd-d442bee9786d12457899d7b078e70ae53aebba1d1dc8f1040d99a8ee000d117a.scope: Deactivated successfully.
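The pod_startup_latency_tracker entry above encodes straightforward arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A sketch of that reading using the logged timestamps, truncated to microseconds (an interpretation of the fields, not kubelet source):

    from datetime import datetime, timezone

    # Timestamps taken from the "Observed pod startup duration" entry above,
    # truncated from nanoseconds to microseconds for strptime's %f.
    fmt = "%Y-%m-%d %H:%M:%S.%f %z"
    created = datetime(2024, 11, 12, 17, 57, 55, tzinfo=timezone.utc)
    first_pull = datetime.strptime("2024-11-12 17:57:55.817930 +0000", fmt)
    last_pull = datetime.strptime("2024-11-12 17:57:57.214915 +0000", fmt)
    running = datetime.strptime("2024-11-12 17:57:58.321098 +0000", fmt)

    e2e = (running - created).total_seconds()           # ~3.321s, podStartE2EDuration
    pulling = (last_pull - first_pull).total_seconds()  # ~1.397s spent pulling images
    slo = e2e - pulling                                 # ~1.924s, podStartSLOduration
    print(f"e2e={e2e:.3f}s pulling={pulling:.3f}s slo={slo:.3f}s")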
Nov 12 17:58:04.292355 kubelet[2550]: I1112 17:58:04.291580 2550 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Nov 12 17:58:04.306550 containerd[1440]: time="2024-11-12T17:58:04.306491263Z" level=info msg="shim disconnected" id=d442bee9786d12457899d7b078e70ae53aebba1d1dc8f1040d99a8ee000d117a namespace=k8s.io
Nov 12 17:58:04.307066 containerd[1440]: time="2024-11-12T17:58:04.307009664Z" level=warning msg="cleaning up after shim disconnected" id=d442bee9786d12457899d7b078e70ae53aebba1d1dc8f1040d99a8ee000d117a namespace=k8s.io
Nov 12 17:58:04.307066 containerd[1440]: time="2024-11-12T17:58:04.307046744Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:58:04.330647 kubelet[2550]: I1112 17:58:04.325955 2550 topology_manager.go:215] "Topology Admit Handler" podUID="79693433-db53-4b04-81f4-8647ae5d69bf" podNamespace="calico-apiserver" podName="calico-apiserver-6f8577f645-7qhtp"
Nov 12 17:58:04.338445 kubelet[2550]: I1112 17:58:04.338388 2550 topology_manager.go:215] "Topology Admit Handler" podUID="e5df49d4-ef7b-414a-a5da-3e33e7b77381" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lxvzw"
Nov 12 17:58:04.339169 kubelet[2550]: I1112 17:58:04.339033 2550 topology_manager.go:215] "Topology Admit Handler" podUID="4ed5cf09-ed06-4e8b-8c68-5b53322839e8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lv4gc"
Nov 12 17:58:04.340903 kubelet[2550]: I1112 17:58:04.340802 2550 topology_manager.go:215] "Topology Admit Handler" podUID="0bdbc14b-f620-427a-ae0b-74f889d89287" podNamespace="calico-system" podName="calico-kube-controllers-6d6674b4b8-wt8ww"
Nov 12 17:58:04.343920 kubelet[2550]: I1112 17:58:04.343876 2550 topology_manager.go:215] "Topology Admit Handler" podUID="796be7c7-8733-4ba8-8a44-8adf215d4e9b" podNamespace="calico-apiserver" podName="calico-apiserver-6f8577f645-42z4v"
Nov 12 17:58:04.346606 kubelet[2550]: E1112 17:58:04.346245 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:58:04.347068 systemd[1]: Created slice kubepods-besteffort-pod79693433_db53_4b04_81f4_8647ae5d69bf.slice - libcontainer container kubepods-besteffort-pod79693433_db53_4b04_81f4_8647ae5d69bf.slice.
Nov 12 17:58:04.362405 systemd[1]: Created slice kubepods-besteffort-pod0bdbc14b_f620_427a_ae0b_74f889d89287.slice - libcontainer container kubepods-besteffort-pod0bdbc14b_f620_427a_ae0b_74f889d89287.slice.
Nov 12 17:58:04.367874 systemd[1]: Created slice kubepods-burstable-pode5df49d4_ef7b_414a_a5da_3e33e7b77381.slice - libcontainer container kubepods-burstable-pode5df49d4_ef7b_414a_a5da_3e33e7b77381.slice.
Nov 12 17:58:04.381971 systemd[1]: Created slice kubepods-burstable-pod4ed5cf09_ed06_4e8b_8c68_5b53322839e8.slice - libcontainer container kubepods-burstable-pod4ed5cf09_ed06_4e8b_8c68_5b53322839e8.slice.
Nov 12 17:58:04.385117 systemd[1]: Created slice kubepods-besteffort-pod796be7c7_8733_4ba8_8a44_8adf215d4e9b.slice - libcontainer container kubepods-besteffort-pod796be7c7_8733_4ba8_8a44_8adf215d4e9b.slice.
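The Created slice lines show the cgroup naming convention visible throughout this log: each admitted pod gets a systemd slice under its QoS class (besteffort or burstable here), with the dashes in the pod UID rewritten as underscores. A small sketch of that mapping, derived from the names above (illustrative; assumes the systemd cgroup driver):

    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        # Dashes in the UID become underscores in the systemd unit name.
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    # These reproduce the systemd "Created slice" entries above.
    assert pod_slice_name("besteffort", "79693433-db53-4b04-81f4-8647ae5d69bf") == \
        "kubepods-besteffort-pod79693433_db53_4b04_81f4_8647ae5d69bf.slice"
    assert pod_slice_name("burstable", "e5df49d4-ef7b-414a-a5da-3e33e7b77381") == \
        "kubepods-burstable-pode5df49d4_ef7b_414a_a5da_3e33e7b77381.slice"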
Nov 12 17:58:04.454665 kubelet[2550]: I1112 17:58:04.454075 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ed5cf09-ed06-4e8b-8c68-5b53322839e8-config-volume\") pod \"coredns-7db6d8ff4d-lv4gc\" (UID: \"4ed5cf09-ed06-4e8b-8c68-5b53322839e8\") " pod="kube-system/coredns-7db6d8ff4d-lv4gc"
Nov 12 17:58:04.454665 kubelet[2550]: I1112 17:58:04.454122 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhs5n\" (UniqueName: \"kubernetes.io/projected/4ed5cf09-ed06-4e8b-8c68-5b53322839e8-kube-api-access-jhs5n\") pod \"coredns-7db6d8ff4d-lv4gc\" (UID: \"4ed5cf09-ed06-4e8b-8c68-5b53322839e8\") " pod="kube-system/coredns-7db6d8ff4d-lv4gc"
Nov 12 17:58:04.454665 kubelet[2550]: I1112 17:58:04.454146 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/796be7c7-8733-4ba8-8a44-8adf215d4e9b-calico-apiserver-certs\") pod \"calico-apiserver-6f8577f645-42z4v\" (UID: \"796be7c7-8733-4ba8-8a44-8adf215d4e9b\") " pod="calico-apiserver/calico-apiserver-6f8577f645-42z4v"
Nov 12 17:58:04.454665 kubelet[2550]: I1112 17:58:04.454286 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b79dh\" (UniqueName: \"kubernetes.io/projected/79693433-db53-4b04-81f4-8647ae5d69bf-kube-api-access-b79dh\") pod \"calico-apiserver-6f8577f645-7qhtp\" (UID: \"79693433-db53-4b04-81f4-8647ae5d69bf\") " pod="calico-apiserver/calico-apiserver-6f8577f645-7qhtp"
Nov 12 17:58:04.454665 kubelet[2550]: I1112 17:58:04.454415 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwdvz\" (UniqueName: \"kubernetes.io/projected/0bdbc14b-f620-427a-ae0b-74f889d89287-kube-api-access-wwdvz\") pod \"calico-kube-controllers-6d6674b4b8-wt8ww\" (UID: \"0bdbc14b-f620-427a-ae0b-74f889d89287\") " pod="calico-system/calico-kube-controllers-6d6674b4b8-wt8ww"
Nov 12 17:58:04.454859 kubelet[2550]: I1112 17:58:04.454450 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/79693433-db53-4b04-81f4-8647ae5d69bf-calico-apiserver-certs\") pod \"calico-apiserver-6f8577f645-7qhtp\" (UID: \"79693433-db53-4b04-81f4-8647ae5d69bf\") " pod="calico-apiserver/calico-apiserver-6f8577f645-7qhtp"
Nov 12 17:58:04.454859 kubelet[2550]: I1112 17:58:04.454488 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bdbc14b-f620-427a-ae0b-74f889d89287-tigera-ca-bundle\") pod \"calico-kube-controllers-6d6674b4b8-wt8ww\" (UID: \"0bdbc14b-f620-427a-ae0b-74f889d89287\") " pod="calico-system/calico-kube-controllers-6d6674b4b8-wt8ww"
Nov 12 17:58:04.454859 kubelet[2550]: I1112 17:58:04.454506 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqfwq\" (UniqueName: \"kubernetes.io/projected/796be7c7-8733-4ba8-8a44-8adf215d4e9b-kube-api-access-cqfwq\") pod \"calico-apiserver-6f8577f645-42z4v\" (UID: \"796be7c7-8733-4ba8-8a44-8adf215d4e9b\") " pod="calico-apiserver/calico-apiserver-6f8577f645-42z4v"
Nov 12 17:58:04.454859 kubelet[2550]: I1112 17:58:04.454527 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5df49d4-ef7b-414a-a5da-3e33e7b77381-config-volume\") pod \"coredns-7db6d8ff4d-lxvzw\" (UID: \"e5df49d4-ef7b-414a-a5da-3e33e7b77381\") " pod="kube-system/coredns-7db6d8ff4d-lxvzw"
Nov 12 17:58:04.454859 kubelet[2550]: I1112 17:58:04.454545 2550 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdq7v\" (UniqueName: \"kubernetes.io/projected/e5df49d4-ef7b-414a-a5da-3e33e7b77381-kube-api-access-hdq7v\") pod \"coredns-7db6d8ff4d-lxvzw\" (UID: \"e5df49d4-ef7b-414a-a5da-3e33e7b77381\") " pod="kube-system/coredns-7db6d8ff4d-lxvzw"
Nov 12 17:58:04.525646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d442bee9786d12457899d7b078e70ae53aebba1d1dc8f1040d99a8ee000d117a-rootfs.mount: Deactivated successfully.
Nov 12 17:58:04.654954 containerd[1440]: time="2024-11-12T17:58:04.654905786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8577f645-7qhtp,Uid:79693433-db53-4b04-81f4-8647ae5d69bf,Namespace:calico-apiserver,Attempt:0,}"
Nov 12 17:58:04.665927 containerd[1440]: time="2024-11-12T17:58:04.665796527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6674b4b8-wt8ww,Uid:0bdbc14b-f620-427a-ae0b-74f889d89287,Namespace:calico-system,Attempt:0,}"
Nov 12 17:58:04.673086 kubelet[2550]: E1112 17:58:04.673041 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:58:04.674445 containerd[1440]: time="2024-11-12T17:58:04.674220384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lxvzw,Uid:e5df49d4-ef7b-414a-a5da-3e33e7b77381,Namespace:kube-system,Attempt:0,}"
Nov 12 17:58:04.700083 containerd[1440]: time="2024-11-12T17:58:04.696584148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8577f645-42z4v,Uid:796be7c7-8733-4ba8-8a44-8adf215d4e9b,Namespace:calico-apiserver,Attempt:0,}"
Nov 12 17:58:04.700083 containerd[1440]: time="2024-11-12T17:58:04.697178429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lv4gc,Uid:4ed5cf09-ed06-4e8b-8c68-5b53322839e8,Namespace:kube-system,Attempt:0,}"
Nov 12 17:58:04.700261 kubelet[2550]: E1112 17:58:04.696608 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:58:05.016458 containerd[1440]: time="2024-11-12T17:58:05.016290733Z" level=error msg="Failed to destroy network for sandbox \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:58:05.016916 containerd[1440]: time="2024-11-12T17:58:05.016865814Z" level=error msg="Failed to destroy network for sandbox \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:58:05.017073 containerd[1440]: time="2024-11-12T17:58:05.017040375Z" level=error msg="encountered an error cleaning up failed sandbox \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:58:05.017246 containerd[1440]: time="2024-11-12T17:58:05.017119295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8577f645-42z4v,Uid:796be7c7-8733-4ba8-8a44-8adf215d4e9b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:58:05.017246 containerd[1440]: time="2024-11-12T17:58:05.017231975Z" level=error msg="Failed to destroy network for sandbox \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:58:05.017371 containerd[1440]: time="2024-11-12T17:58:05.017316935Z" level=error msg="encountered an error cleaning up failed sandbox \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:58:05.017396 containerd[1440]: time="2024-11-12T17:58:05.017373415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8577f645-7qhtp,Uid:79693433-db53-4b04-81f4-8647ae5d69bf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:58:05.018036 containerd[1440]: time="2024-11-12T17:58:05.017531855Z" level=error msg="encountered an error cleaning up failed sandbox \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:58:05.018036 containerd[1440]: time="2024-11-12T17:58:05.017574896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6674b4b8-wt8ww,Uid:0bdbc14b-f620-427a-ae0b-74f889d89287,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:58:05.019595 kubelet[2550]: E1112 17:58:05.019539 2550 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:58:05.019676 kubelet[2550]: E1112 17:58:05.019634 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d6674b4b8-wt8ww"
Nov 12 17:58:05.019676 kubelet[2550]: E1112 17:58:05.019656 2550 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d6674b4b8-wt8ww"
Nov 12 17:58:05.019737 kubelet[2550]: E1112 17:58:05.019697 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d6674b4b8-wt8ww_calico-system(0bdbc14b-f620-427a-ae0b-74f889d89287)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d6674b4b8-wt8ww_calico-system(0bdbc14b-f620-427a-ae0b-74f889d89287)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6674b4b8-wt8ww" podUID="0bdbc14b-f620-427a-ae0b-74f889d89287"
Nov 12 17:58:05.020059 kubelet[2550]: E1112 17:58:05.020029 2550 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:58:05.020227 kubelet[2550]: E1112 17:58:05.020205 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8577f645-7qhtp"
Nov 12 17:58:05.020312 kubelet[2550]: E1112 17:58:05.020297 2550 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8577f645-7qhtp"
Nov 12 17:58:05.020407 kubelet[2550]: E1112 17:58:05.020386 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8577f645-7qhtp_calico-apiserver(79693433-db53-4b04-81f4-8647ae5d69bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8577f645-7qhtp_calico-apiserver(79693433-db53-4b04-81f4-8647ae5d69bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8577f645-7qhtp" podUID="79693433-db53-4b04-81f4-8647ae5d69bf"
Nov 12 17:58:05.021100 kubelet[2550]: E1112 17:58:05.020565 2550 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:58:05.021100 kubelet[2550]: E1112 17:58:05.020619 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8577f645-42z4v"
Nov 12 17:58:05.021100 kubelet[2550]: E1112 17:58:05.020637 2550 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8577f645-42z4v"
Nov 12 17:58:05.021242 kubelet[2550]: E1112 17:58:05.020665 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8577f645-42z4v_calico-apiserver(796be7c7-8733-4ba8-8a44-8adf215d4e9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8577f645-42z4v_calico-apiserver(796be7c7-8733-4ba8-8a44-8adf215d4e9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8577f645-42z4v" podUID="796be7c7-8733-4ba8-8a44-8adf215d4e9b"
Nov 12 17:58:05.026833 containerd[1440]: time="2024-11-12T17:58:05.025833431Z" level=error msg="Failed to destroy network for sandbox \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:58:05.027262 containerd[1440]: time="2024-11-12T17:58:05.027233714Z"
level=error msg="encountered an error cleaning up failed sandbox \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.027375 containerd[1440]: time="2024-11-12T17:58:05.027352794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lxvzw,Uid:e5df49d4-ef7b-414a-a5da-3e33e7b77381,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.027661 kubelet[2550]: E1112 17:58:05.027604 2550 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.027717 kubelet[2550]: E1112 17:58:05.027667 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lxvzw" Nov 12 17:58:05.027717 kubelet[2550]: E1112 17:58:05.027683 2550 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lxvzw" Nov 12 17:58:05.027772 kubelet[2550]: E1112 17:58:05.027721 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lxvzw_kube-system(e5df49d4-ef7b-414a-a5da-3e33e7b77381)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lxvzw_kube-system(e5df49d4-ef7b-414a-a5da-3e33e7b77381)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lxvzw" podUID="e5df49d4-ef7b-414a-a5da-3e33e7b77381" Nov 12 17:58:05.029208 containerd[1440]: time="2024-11-12T17:58:05.029157277Z" level=error msg="Failed to destroy network for sandbox \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.029652 
containerd[1440]: time="2024-11-12T17:58:05.029621438Z" level=error msg="encountered an error cleaning up failed sandbox \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.029785 containerd[1440]: time="2024-11-12T17:58:05.029760919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lv4gc,Uid:4ed5cf09-ed06-4e8b-8c68-5b53322839e8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.030007 kubelet[2550]: E1112 17:58:05.029975 2550 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.030103 kubelet[2550]: E1112 17:58:05.030015 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lv4gc" Nov 12 17:58:05.030103 kubelet[2550]: E1112 17:58:05.030031 2550 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lv4gc" Nov 12 17:58:05.030103 kubelet[2550]: E1112 17:58:05.030058 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lv4gc_kube-system(4ed5cf09-ed06-4e8b-8c68-5b53322839e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lv4gc_kube-system(4ed5cf09-ed06-4e8b-8c68-5b53322839e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lv4gc" podUID="4ed5cf09-ed06-4e8b-8c68-5b53322839e8" Nov 12 17:58:05.248040 systemd[1]: Created slice kubepods-besteffort-podea587a6f_1412_4dff_ac23_3aab0de5e566.slice - libcontainer container kubepods-besteffort-podea587a6f_1412_4dff_ac23_3aab0de5e566.slice. 
Nov 12 17:58:05.250094 containerd[1440]: time="2024-11-12T17:58:05.250056094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdd25,Uid:ea587a6f-1412-4dff-ac23-3aab0de5e566,Namespace:calico-system,Attempt:0,}" Nov 12 17:58:05.298474 containerd[1440]: time="2024-11-12T17:58:05.298296425Z" level=error msg="Failed to destroy network for sandbox \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.299891 containerd[1440]: time="2024-11-12T17:58:05.299719828Z" level=error msg="encountered an error cleaning up failed sandbox \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.299891 containerd[1440]: time="2024-11-12T17:58:05.299792708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdd25,Uid:ea587a6f-1412-4dff-ac23-3aab0de5e566,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.300061 kubelet[2550]: E1112 17:58:05.300023 2550 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.300633 kubelet[2550]: E1112 17:58:05.300081 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sdd25" Nov 12 17:58:05.300633 kubelet[2550]: E1112 17:58:05.300105 2550 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sdd25" Nov 12 17:58:05.300633 kubelet[2550]: E1112 17:58:05.300150 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sdd25_calico-system(ea587a6f-1412-4dff-ac23-3aab0de5e566)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sdd25_calico-system(ea587a6f-1412-4dff-ac23-3aab0de5e566)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sdd25" podUID="ea587a6f-1412-4dff-ac23-3aab0de5e566" Nov 12 17:58:05.349183 kubelet[2550]: I1112 17:58:05.348539 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Nov 12 17:58:05.349391 containerd[1440]: time="2024-11-12T17:58:05.349147801Z" level=info msg="StopPodSandbox for \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\"" Nov 12 17:58:05.349574 containerd[1440]: time="2024-11-12T17:58:05.349553922Z" level=info msg="Ensure that sandbox 1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264 in task-service has been cleanup successfully" Nov 12 17:58:05.351834 kubelet[2550]: E1112 17:58:05.351810 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:05.353299 kubelet[2550]: I1112 17:58:05.353265 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:05.353635 containerd[1440]: time="2024-11-12T17:58:05.353600290Z" level=info msg="StopPodSandbox for \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\"" Nov 12 17:58:05.353929 containerd[1440]: time="2024-11-12T17:58:05.353889890Z" level=info msg="Ensure that sandbox 3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07 in task-service has been cleanup successfully" Nov 12 17:58:05.354155 containerd[1440]: time="2024-11-12T17:58:05.354126331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 17:58:05.355482 kubelet[2550]: I1112 17:58:05.355443 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:05.356194 containerd[1440]: time="2024-11-12T17:58:05.355956454Z" level=info msg="StopPodSandbox for \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\"" Nov 12 17:58:05.356194 containerd[1440]: time="2024-11-12T17:58:05.356098974Z" level=info msg="Ensure that sandbox 92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f in task-service has been cleanup successfully" Nov 12 17:58:05.357093 kubelet[2550]: I1112 17:58:05.357027 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Nov 12 17:58:05.358096 containerd[1440]: time="2024-11-12T17:58:05.357864218Z" level=info msg="StopPodSandbox for \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\"" Nov 12 17:58:05.358460 containerd[1440]: time="2024-11-12T17:58:05.358422939Z" level=info msg="Ensure that sandbox e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646 in task-service has been cleanup successfully" Nov 12 17:58:05.363366 kubelet[2550]: I1112 17:58:05.363345 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Nov 12 17:58:05.364546 containerd[1440]: time="2024-11-12T17:58:05.364507310Z" level=info 
msg="StopPodSandbox for \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\"" Nov 12 17:58:05.364675 containerd[1440]: time="2024-11-12T17:58:05.364652150Z" level=info msg="Ensure that sandbox 19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac in task-service has been cleanup successfully" Nov 12 17:58:05.366800 kubelet[2550]: I1112 17:58:05.366781 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:05.368621 containerd[1440]: time="2024-11-12T17:58:05.368581078Z" level=info msg="StopPodSandbox for \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\"" Nov 12 17:58:05.368836 containerd[1440]: time="2024-11-12T17:58:05.368810518Z" level=info msg="Ensure that sandbox b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32 in task-service has been cleanup successfully" Nov 12 17:58:05.404133 containerd[1440]: time="2024-11-12T17:58:05.404084625Z" level=error msg="StopPodSandbox for \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\" failed" error="failed to destroy network for sandbox \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.407848 kubelet[2550]: E1112 17:58:05.407790 2550 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Nov 12 17:58:05.407916 kubelet[2550]: E1112 17:58:05.407868 2550 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264"} Nov 12 17:58:05.407958 kubelet[2550]: E1112 17:58:05.407930 2550 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0bdbc14b-f620-427a-ae0b-74f889d89287\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:58:05.408030 kubelet[2550]: E1112 17:58:05.407953 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0bdbc14b-f620-427a-ae0b-74f889d89287\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6674b4b8-wt8ww" podUID="0bdbc14b-f620-427a-ae0b-74f889d89287" Nov 12 17:58:05.408753 containerd[1440]: time="2024-11-12T17:58:05.408721354Z" level=error msg="StopPodSandbox for 
\"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\" failed" error="failed to destroy network for sandbox \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.409010 kubelet[2550]: E1112 17:58:05.408972 2550 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:05.409103 kubelet[2550]: E1112 17:58:05.409016 2550 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f"} Nov 12 17:58:05.409103 kubelet[2550]: E1112 17:58:05.409040 2550 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79693433-db53-4b04-81f4-8647ae5d69bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:58:05.409103 kubelet[2550]: E1112 17:58:05.409061 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79693433-db53-4b04-81f4-8647ae5d69bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8577f645-7qhtp" podUID="79693433-db53-4b04-81f4-8647ae5d69bf" Nov 12 17:58:05.409251 containerd[1440]: time="2024-11-12T17:58:05.409054354Z" level=error msg="StopPodSandbox for \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\" failed" error="failed to destroy network for sandbox \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.409287 kubelet[2550]: E1112 17:58:05.409197 2550 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:05.409287 kubelet[2550]: E1112 17:58:05.409232 2550 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07"} Nov 12 17:58:05.413717 kubelet[2550]: E1112 17:58:05.413681 2550 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4ed5cf09-ed06-4e8b-8c68-5b53322839e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:58:05.413790 kubelet[2550]: E1112 17:58:05.413736 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4ed5cf09-ed06-4e8b-8c68-5b53322839e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lv4gc" podUID="4ed5cf09-ed06-4e8b-8c68-5b53322839e8" Nov 12 17:58:05.419440 containerd[1440]: time="2024-11-12T17:58:05.419388094Z" level=error msg="StopPodSandbox for \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\" failed" error="failed to destroy network for sandbox \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.419582 kubelet[2550]: E1112 17:58:05.419551 2550 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Nov 12 17:58:05.419633 kubelet[2550]: E1112 17:58:05.419588 2550 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646"} Nov 12 17:58:05.419633 kubelet[2550]: E1112 17:58:05.419612 2550 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea587a6f-1412-4dff-ac23-3aab0de5e566\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:58:05.419698 kubelet[2550]: E1112 17:58:05.419629 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea587a6f-1412-4dff-ac23-3aab0de5e566\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sdd25" podUID="ea587a6f-1412-4dff-ac23-3aab0de5e566" Nov 12 17:58:05.422605 containerd[1440]: time="2024-11-12T17:58:05.422572380Z" level=error msg="StopPodSandbox for \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\" failed" error="failed to destroy network for sandbox \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.422906 kubelet[2550]: E1112 17:58:05.422855 2550 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Nov 12 17:58:05.422972 kubelet[2550]: E1112 17:58:05.422914 2550 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac"} Nov 12 17:58:05.422972 kubelet[2550]: E1112 17:58:05.422945 2550 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"796be7c7-8733-4ba8-8a44-8adf215d4e9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:58:05.422972 kubelet[2550]: E1112 17:58:05.422964 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"796be7c7-8733-4ba8-8a44-8adf215d4e9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8577f645-42z4v" podUID="796be7c7-8733-4ba8-8a44-8adf215d4e9b" Nov 12 17:58:05.430413 containerd[1440]: time="2024-11-12T17:58:05.430371754Z" level=error msg="StopPodSandbox for \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\" failed" error="failed to destroy network for sandbox \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:58:05.430574 kubelet[2550]: E1112 17:58:05.430540 2550 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:05.430612 kubelet[2550]: E1112 17:58:05.430583 2550 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32"} Nov 12 17:58:05.430612 kubelet[2550]: E1112 17:58:05.430607 2550 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5df49d4-ef7b-414a-a5da-3e33e7b77381\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:58:05.430675 kubelet[2550]: E1112 17:58:05.430626 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5df49d4-ef7b-414a-a5da-3e33e7b77381\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lxvzw" podUID="e5df49d4-ef7b-414a-a5da-3e33e7b77381" Nov 12 17:58:05.527483 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f-shm.mount: Deactivated successfully. Nov 12 17:58:05.917930 systemd[1]: Started sshd@7-10.0.0.106:22-10.0.0.1:54894.service - OpenSSH per-connection server daemon (10.0.0.1:54894). Nov 12 17:58:05.961044 sshd[3702]: Accepted publickey for core from 10.0.0.1 port 54894 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:58:05.963624 sshd[3702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:58:05.967865 systemd-logind[1424]: New session 8 of user core. Nov 12 17:58:05.978288 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 17:58:06.091933 sshd[3702]: pam_unix(sshd:session): session closed for user core Nov 12 17:58:06.094862 systemd[1]: sshd@7-10.0.0.106:22-10.0.0.1:54894.service: Deactivated successfully. Nov 12 17:58:06.096530 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 17:58:06.098405 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit. Nov 12 17:58:06.099239 systemd-logind[1424]: Removed session 8. Nov 12 17:58:09.023303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088754438.mount: Deactivated successfully. 
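[Editor's note] The repeated "Error syncing pod, skipping" entries above are not a crash loop: each failed CreatePodSandbox or KillPodSandbox simply requeues the pod worker, which retries with backoff until the CNI plugin becomes ready. A schematic sketch of that retry shape (assumed and heavily simplified; not kubelet's actual pod worker):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // syncPod stands in for one kubelet sync attempt; it fails while the
    // CNI plugin is not ready, as in the log above.
    func syncPod(cniReady bool) error {
        if !cniReady {
            return errors.New(`failed to "CreatePodSandbox": plugin type="calico" not ready`)
        }
        return nil
    }

    func main() {
        backoff := 10 * time.Second
        for attempt := 1; attempt <= 3; attempt++ {
            if err := syncPod(attempt == 3); err != nil {
                fmt.Printf("Error syncing pod, skipping: %v (retry in %s)\n", err, backoff)
                backoff *= 2 // the real kubelet uses capped exponential backoff
                continue
            }
            fmt.Println("CNI ready; pod sandbox created on retry")
            return
        }
    }
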
Nov 12 17:58:09.100218 containerd[1440]: time="2024-11-12T17:58:09.100138298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:09.100714 containerd[1440]: time="2024-11-12T17:58:09.100658619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=135495328" Nov 12 17:58:09.101648 containerd[1440]: time="2024-11-12T17:58:09.101613021Z" level=info msg="ImageCreate event name:\"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:09.103602 containerd[1440]: time="2024-11-12T17:58:09.103564544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:09.104122 containerd[1440]: time="2024-11-12T17:58:09.104082945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"135495190\" in 3.749916854s" Nov 12 17:58:09.104122 containerd[1440]: time="2024-11-12T17:58:09.104118985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\"" Nov 12 17:58:09.141563 containerd[1440]: time="2024-11-12T17:58:09.141515006Z" level=info msg="CreateContainer within sandbox \"0843eb8a96b2f2cf2109d160bec33b79eabc14b16bc691247f1cfa0ea26b8d1e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 17:58:09.154544 containerd[1440]: time="2024-11-12T17:58:09.154498467Z" level=info msg="CreateContainer within sandbox \"0843eb8a96b2f2cf2109d160bec33b79eabc14b16bc691247f1cfa0ea26b8d1e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"234cc9f6a0d9e09c864fec61fcb82ac1a249413523601137e25c309debd43f66\"" Nov 12 17:58:09.154963 containerd[1440]: time="2024-11-12T17:58:09.154938908Z" level=info msg="StartContainer for \"234cc9f6a0d9e09c864fec61fcb82ac1a249413523601137e25c309debd43f66\"" Nov 12 17:58:09.218360 systemd[1]: Started cri-containerd-234cc9f6a0d9e09c864fec61fcb82ac1a249413523601137e25c309debd43f66.scope - libcontainer container 234cc9f6a0d9e09c864fec61fcb82ac1a249413523601137e25c309debd43f66. 
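[Editor's note] The pull timing above is internally consistent: the PullImage request for ghcr.io/flatcar/calico/node:v3.29.0 was logged at 17:58:05.354126331 and the Pulled event at 17:58:09.104082945, a gap that agrees with the reported "in 3.749916854s" to within a fraction of a millisecond (journal emission times differ slightly from containerd's internal measurement). A quick check in Go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "15:04:05.000000000"
        requested, _ := time.Parse(layout, "17:58:05.354126331") // PullImage request logged above
        pulled, _ := time.Parse(layout, "17:58:09.104082945")    // "Pulled image" event
        fmt.Println("pull duration:", pulled.Sub(requested))     // ~3.74996s vs. logged 3.749916854s
    }
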
Nov 12 17:58:09.343660 containerd[1440]: time="2024-11-12T17:58:09.343550577Z" level=info msg="StartContainer for \"234cc9f6a0d9e09c864fec61fcb82ac1a249413523601137e25c309debd43f66\" returns successfully" Nov 12 17:58:09.379008 kubelet[2550]: E1112 17:58:09.377440 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:09.396410 kubelet[2550]: I1112 17:58:09.395939 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xdnn6" podStartSLOduration=1.098719071 podStartE2EDuration="14.395924142s" podCreationTimestamp="2024-11-12 17:57:55 +0000 UTC" firstStartedPulling="2024-11-12 17:57:55.836261602 +0000 UTC m=+21.683368203" lastFinishedPulling="2024-11-12 17:58:09.133466713 +0000 UTC m=+34.980573274" observedRunningTime="2024-11-12 17:58:09.3947119 +0000 UTC m=+35.241818501" watchObservedRunningTime="2024-11-12 17:58:09.395924142 +0000 UTC m=+35.243030743" Nov 12 17:58:09.470769 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 17:58:09.470885 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 12 17:58:10.378568 kubelet[2550]: I1112 17:58:10.378525 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:58:10.379318 kubelet[2550]: E1112 17:58:10.379300 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:10.665025 kubelet[2550]: I1112 17:58:10.664609 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:58:10.665684 kubelet[2550]: E1112 17:58:10.665256 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:10.915200 kernel: bpftool[3919]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 17:58:11.059259 systemd-networkd[1383]: vxlan.calico: Link UP Nov 12 17:58:11.059268 systemd-networkd[1383]: vxlan.calico: Gained carrier Nov 12 17:58:11.113016 systemd[1]: Started sshd@8-10.0.0.106:22-10.0.0.1:54904.service - OpenSSH per-connection server daemon (10.0.0.1:54904). Nov 12 17:58:11.155876 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 54904 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:58:11.157281 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:58:11.160826 systemd-logind[1424]: New session 9 of user core. Nov 12 17:58:11.170342 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 17:58:11.310838 sshd[3960]: pam_unix(sshd:session): session closed for user core Nov 12 17:58:11.314088 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit. Nov 12 17:58:11.314350 systemd[1]: sshd@8-10.0.0.106:22-10.0.0.1:54904.service: Deactivated successfully. Nov 12 17:58:11.316315 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 17:58:11.319634 systemd-logind[1424]: Removed session 9. 
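[Editor's note] The recurring dns.go "Nameserver limits exceeded" warning is kubelet enforcing the classic glibc resolv.conf limit: at most three nameservers are written into a pod's resolv.conf, and any extras on the host are dropped, which is why the applied line above is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch of that rule (the fourth resolver below is hypothetical; the log only shows that three were kept):

    package main

    import "fmt"

    // maxNameservers mirrors the glibc resolver limit that kubelet enforces
    // when it generates a pod's resolv.conf.
    const maxNameservers = 3

    func applyNameservers(ns []string) []string {
        if len(ns) <= maxNameservers {
            return ns
        }
        fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %v\n", ns[:maxNameservers])
        return ns[:maxNameservers]
    }

    func main() {
        // 203.0.113.53 is a placeholder (TEST-NET-3) standing in for
        // whatever fourth resolver the host actually had.
        applyNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "203.0.113.53"})
    }
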
Nov 12 17:58:11.380522 kubelet[2550]: E1112 17:58:11.380497 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:13.094332 systemd-networkd[1383]: vxlan.calico: Gained IPv6LL Nov 12 17:58:16.325868 systemd[1]: Started sshd@9-10.0.0.106:22-10.0.0.1:58152.service - OpenSSH per-connection server daemon (10.0.0.1:58152). Nov 12 17:58:16.362269 sshd[4017]: Accepted publickey for core from 10.0.0.1 port 58152 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:58:16.363571 sshd[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:58:16.368083 systemd-logind[1424]: New session 10 of user core. Nov 12 17:58:16.379382 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 17:58:16.504863 sshd[4017]: pam_unix(sshd:session): session closed for user core Nov 12 17:58:16.521802 systemd[1]: sshd@9-10.0.0.106:22-10.0.0.1:58152.service: Deactivated successfully. Nov 12 17:58:16.523479 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 17:58:16.524693 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit. Nov 12 17:58:16.526060 systemd[1]: Started sshd@10-10.0.0.106:22-10.0.0.1:58156.service - OpenSSH per-connection server daemon (10.0.0.1:58156). Nov 12 17:58:16.527189 systemd-logind[1424]: Removed session 10. Nov 12 17:58:16.562844 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 58156 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:58:16.564063 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:58:16.567638 systemd-logind[1424]: New session 11 of user core. Nov 12 17:58:16.580322 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 17:58:16.740703 sshd[4032]: pam_unix(sshd:session): session closed for user core Nov 12 17:58:16.751186 systemd[1]: sshd@10-10.0.0.106:22-10.0.0.1:58156.service: Deactivated successfully. Nov 12 17:58:16.754065 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 17:58:16.756219 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit. Nov 12 17:58:16.757716 systemd[1]: Started sshd@11-10.0.0.106:22-10.0.0.1:58166.service - OpenSSH per-connection server daemon (10.0.0.1:58166). Nov 12 17:58:16.760455 systemd-logind[1424]: Removed session 11. Nov 12 17:58:16.798043 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 58166 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:58:16.799494 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:58:16.803175 systemd-logind[1424]: New session 12 of user core. Nov 12 17:58:16.813304 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 17:58:16.929532 sshd[4044]: pam_unix(sshd:session): session closed for user core Nov 12 17:58:16.933136 systemd[1]: sshd@11-10.0.0.106:22-10.0.0.1:58166.service: Deactivated successfully. Nov 12 17:58:16.935197 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 17:58:16.936300 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit. Nov 12 17:58:16.937296 systemd-logind[1424]: Removed session 12. 
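[Editor's note] The vxlan.calico link that systemd-networkd reports above (Link UP, Gained carrier, then Gained IPv6LL) is Calico's VXLAN overlay device, created by calico-node once it is running. A hedged sketch of the equivalent link creation via the vishvananda/netlink package; the VNI and UDP port are Calico's documented defaults (4096 and 4789), assumed here rather than read from this log:

    //go:build linux

    package main

    import (
        "fmt"

        "github.com/vishvananda/netlink"
    )

    func main() {
        vxlan := &netlink.Vxlan{
            LinkAttrs: netlink.LinkAttrs{Name: "vxlan.calico"},
            VxlanId:   4096, // assumed Calico default VNI
            Port:      4789, // assumed default VXLAN UDP port
        }
        if err := netlink.LinkAdd(vxlan); err != nil { // needs CAP_NET_ADMIN
            fmt.Println("LinkAdd:", err)
            return
        }
        if err := netlink.LinkSetUp(vxlan); err != nil {
            fmt.Println("LinkSetUp:", err)
        }
    }
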
Nov 12 17:58:17.243597 containerd[1440]: time="2024-11-12T17:58:17.243488946Z" level=info msg="StopPodSandbox for \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\"" Nov 12 17:58:17.431620 containerd[1440]: 2024-11-12 17:58:17.333 [INFO][4073] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:17.431620 containerd[1440]: 2024-11-12 17:58:17.334 [INFO][4073] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" iface="eth0" netns="/var/run/netns/cni-f10614de-5f76-b771-01ce-0f86d93da79d" Nov 12 17:58:17.431620 containerd[1440]: 2024-11-12 17:58:17.335 [INFO][4073] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" iface="eth0" netns="/var/run/netns/cni-f10614de-5f76-b771-01ce-0f86d93da79d" Nov 12 17:58:17.431620 containerd[1440]: 2024-11-12 17:58:17.336 [INFO][4073] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" iface="eth0" netns="/var/run/netns/cni-f10614de-5f76-b771-01ce-0f86d93da79d" Nov 12 17:58:17.431620 containerd[1440]: 2024-11-12 17:58:17.336 [INFO][4073] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:17.431620 containerd[1440]: 2024-11-12 17:58:17.336 [INFO][4073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:17.431620 containerd[1440]: 2024-11-12 17:58:17.418 [INFO][4081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" HandleID="k8s-pod-network.b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Workload="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" Nov 12 17:58:17.431620 containerd[1440]: 2024-11-12 17:58:17.418 [INFO][4081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:17.431620 containerd[1440]: 2024-11-12 17:58:17.418 [INFO][4081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:58:17.431620 containerd[1440]: 2024-11-12 17:58:17.427 [WARNING][4081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" HandleID="k8s-pod-network.b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Workload="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" Nov 12 17:58:17.431620 containerd[1440]: 2024-11-12 17:58:17.427 [INFO][4081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" HandleID="k8s-pod-network.b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Workload="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" Nov 12 17:58:17.431620 containerd[1440]: 2024-11-12 17:58:17.428 [INFO][4081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:17.431620 containerd[1440]: 2024-11-12 17:58:17.430 [INFO][4073] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:17.432099 containerd[1440]: time="2024-11-12T17:58:17.431766789Z" level=info msg="TearDown network for sandbox \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\" successfully" Nov 12 17:58:17.432099 containerd[1440]: time="2024-11-12T17:58:17.431798109Z" level=info msg="StopPodSandbox for \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\" returns successfully" Nov 12 17:58:17.432147 kubelet[2550]: E1112 17:58:17.432118 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:17.433150 containerd[1440]: time="2024-11-12T17:58:17.432661630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lxvzw,Uid:e5df49d4-ef7b-414a-a5da-3e33e7b77381,Namespace:kube-system,Attempt:1,}" Nov 12 17:58:17.434869 systemd[1]: run-netns-cni\x2df10614de\x2d5f76\x2db771\x2d01ce\x2d0f86d93da79d.mount: Deactivated successfully. Nov 12 17:58:17.556542 systemd-networkd[1383]: calieaed83a4045: Link UP Nov 12 17:58:17.557199 systemd-networkd[1383]: calieaed83a4045: Gained carrier Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.479 [INFO][4090] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0 coredns-7db6d8ff4d- kube-system e5df49d4-ef7b-414a-a5da-3e33e7b77381 908 0 2024-11-12 17:57:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-lxvzw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieaed83a4045 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lxvzw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lxvzw-" Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.479 [INFO][4090] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lxvzw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.508 [INFO][4102] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" HandleID="k8s-pod-network.897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" Workload="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.520 [INFO][4102] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" HandleID="k8s-pod-network.897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" Workload="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dba40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-lxvzw", "timestamp":"2024-11-12 17:58:17.508010848 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.520 [INFO][4102] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.520 [INFO][4102] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.520 [INFO][4102] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.521 [INFO][4102] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" host="localhost" Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.528 [INFO][4102] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.532 [INFO][4102] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.534 [INFO][4102] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.540 [INFO][4102] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.540 [INFO][4102] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" host="localhost" Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.542 [INFO][4102] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92 Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.545 [INFO][4102] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" host="localhost" Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.550 [INFO][4102] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" host="localhost" Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.550 [INFO][4102] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" host="localhost" Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.550 [INFO][4102] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
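[Editor's note] The IPAM walkthrough above is Calico's block-affinity scheme in miniature: the host-wide lock is taken, the node's affinity for the /26 block 192.168.88.128/26 (addresses .128-.191) is confirmed, the first free address is claimed, and the lock is released, which is how the coredns pod ends up with 192.168.88.129/32 below. A simplified sketch of that assignment (assumed, not Calico's actual ipam.go; .128 is shown as already taken, typically by the node's own VXLAN tunnel address):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26") // node-affine IPAM block
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.88.128"): true, // assumed taken by the node itself
        }
        // Walk the block and claim the first free address, as the log narrates.
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                used[a] = true
                fmt.Printf("assigned %s/32 from block %s\n", a, block) // 192.168.88.129/32
                return
            }
        }
        fmt.Println("block exhausted; a new block would be claimed")
    }
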
Nov 12 17:58:17.571038 containerd[1440]: 2024-11-12 17:58:17.550 [INFO][4102] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" HandleID="k8s-pod-network.897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" Workload="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0"
Nov 12 17:58:17.571664 containerd[1440]: 2024-11-12 17:58:17.552 [INFO][4090] cni-plugin/k8s.go 386: Populated endpoint ContainerID="897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lxvzw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e5df49d4-ef7b-414a-a5da-3e33e7b77381", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-lxvzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieaed83a4045", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:58:17.571664 containerd[1440]: 2024-11-12 17:58:17.552 [INFO][4090] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lxvzw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0"
Nov 12 17:58:17.571664 containerd[1440]: 2024-11-12 17:58:17.552 [INFO][4090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieaed83a4045 ContainerID="897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lxvzw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0"
Nov 12 17:58:17.571664 containerd[1440]: 2024-11-12 17:58:17.557 [INFO][4090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lxvzw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0"
Nov 12 17:58:17.571664 containerd[1440]: 2024-11-12 17:58:17.557 [INFO][4090] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lxvzw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e5df49d4-ef7b-414a-a5da-3e33e7b77381", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92", Pod:"coredns-7db6d8ff4d-lxvzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieaed83a4045", MAC:"3e:95:4b:ca:8c:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:58:17.571664 containerd[1440]: 2024-11-12 17:58:17.568 [INFO][4090] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lxvzw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0"
Nov 12 17:58:17.590308 containerd[1440]: time="2024-11-12T17:58:17.590210034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:58:17.590561 containerd[1440]: time="2024-11-12T17:58:17.590291914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:58:17.590561 containerd[1440]: time="2024-11-12T17:58:17.590313994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:58:17.590561 containerd[1440]: time="2024-11-12T17:58:17.590401754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:58:17.611331 systemd[1]: Started cri-containerd-897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92.scope - libcontainer container 897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92.
Nov 12 17:58:17.621826 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 12 17:58:17.641744 containerd[1440]: time="2024-11-12T17:58:17.641704941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lxvzw,Uid:e5df49d4-ef7b-414a-a5da-3e33e7b77381,Namespace:kube-system,Attempt:1,} returns sandbox id \"897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92\""
Nov 12 17:58:17.642801 kubelet[2550]: E1112 17:58:17.642776 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:58:17.656065 containerd[1440]: time="2024-11-12T17:58:17.655882879Z" level=info msg="CreateContainer within sandbox \"897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 12 17:58:17.667976 containerd[1440]: time="2024-11-12T17:58:17.667928975Z" level=info msg="CreateContainer within sandbox \"897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1178f6f11cc9bee9d39dd87ffcceeffd03a35380bf53b2754401b892914eac42\""
Nov 12 17:58:17.668415 containerd[1440]: time="2024-11-12T17:58:17.668386615Z" level=info msg="StartContainer for \"1178f6f11cc9bee9d39dd87ffcceeffd03a35380bf53b2754401b892914eac42\""
Nov 12 17:58:17.694335 systemd[1]: Started cri-containerd-1178f6f11cc9bee9d39dd87ffcceeffd03a35380bf53b2754401b892914eac42.scope - libcontainer container 1178f6f11cc9bee9d39dd87ffcceeffd03a35380bf53b2754401b892914eac42.
Nov 12 17:58:17.722649 containerd[1440]: time="2024-11-12T17:58:17.722607365Z" level=info msg="StartContainer for \"1178f6f11cc9bee9d39dd87ffcceeffd03a35380bf53b2754401b892914eac42\" returns successfully"
Nov 12 17:58:18.244449 containerd[1440]: time="2024-11-12T17:58:18.244394313Z" level=info msg="StopPodSandbox for \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\""
Nov 12 17:58:18.319319 containerd[1440]: 2024-11-12 17:58:18.287 [INFO][4224] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264"
Nov 12 17:58:18.319319 containerd[1440]: 2024-11-12 17:58:18.288 [INFO][4224] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" iface="eth0" netns="/var/run/netns/cni-d98cd0ca-a8e7-1a15-8842-5b64f2fb3006"
Nov 12 17:58:18.319319 containerd[1440]: 2024-11-12 17:58:18.288 [INFO][4224] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" iface="eth0" netns="/var/run/netns/cni-d98cd0ca-a8e7-1a15-8842-5b64f2fb3006"
Nov 12 17:58:18.319319 containerd[1440]: 2024-11-12 17:58:18.289 [INFO][4224] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" iface="eth0" netns="/var/run/netns/cni-d98cd0ca-a8e7-1a15-8842-5b64f2fb3006"
Nov 12 17:58:18.319319 containerd[1440]: 2024-11-12 17:58:18.289 [INFO][4224] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264"
Nov 12 17:58:18.319319 containerd[1440]: 2024-11-12 17:58:18.289 [INFO][4224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264"
Nov 12 17:58:18.319319 containerd[1440]: 2024-11-12 17:58:18.306 [INFO][4232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" HandleID="k8s-pod-network.1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Workload="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0"
Nov 12 17:58:18.319319 containerd[1440]: 2024-11-12 17:58:18.307 [INFO][4232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:58:18.319319 containerd[1440]: 2024-11-12 17:58:18.307 [INFO][4232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:58:18.319319 containerd[1440]: 2024-11-12 17:58:18.315 [WARNING][4232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" HandleID="k8s-pod-network.1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Workload="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0"
Nov 12 17:58:18.319319 containerd[1440]: 2024-11-12 17:58:18.315 [INFO][4232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" HandleID="k8s-pod-network.1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Workload="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0"
Nov 12 17:58:18.319319 containerd[1440]: 2024-11-12 17:58:18.316 [INFO][4232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:58:18.319319 containerd[1440]: 2024-11-12 17:58:18.317 [INFO][4224] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264"
Nov 12 17:58:18.319714 containerd[1440]: time="2024-11-12T17:58:18.319448327Z" level=info msg="TearDown network for sandbox \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\" successfully"
Nov 12 17:58:18.319714 containerd[1440]: time="2024-11-12T17:58:18.319475647Z" level=info msg="StopPodSandbox for \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\" returns successfully"
Nov 12 17:58:18.320124 containerd[1440]: time="2024-11-12T17:58:18.320094088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6674b4b8-wt8ww,Uid:0bdbc14b-f620-427a-ae0b-74f889d89287,Namespace:calico-system,Attempt:1,}"
Nov 12 17:58:18.395589 kubelet[2550]: E1112 17:58:18.395543 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:58:18.405796 kubelet[2550]: I1112 17:58:18.405742 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lxvzw" podStartSLOduration=29.405726556 podStartE2EDuration="29.405726556s" podCreationTimestamp="2024-11-12 17:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:58:18.404793075 +0000 UTC m=+44.251899676" watchObservedRunningTime="2024-11-12 17:58:18.405726556 +0000 UTC m=+44.252833157"
Nov 12 17:58:18.437278 systemd[1]: run-netns-cni\x2dd98cd0ca\x2da8e7\x2d1a15\x2d8842\x2d5b64f2fb3006.mount: Deactivated successfully.
Nov 12 17:58:18.468509 systemd-networkd[1383]: cali10b9aaf06fa: Link UP
Nov 12 17:58:18.469338 systemd-networkd[1383]: cali10b9aaf06fa: Gained carrier
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.384 [INFO][4240] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0 calico-kube-controllers-6d6674b4b8- calico-system 0bdbc14b-f620-427a-ae0b-74f889d89287 940 0 2024-11-12 17:57:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d6674b4b8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d6674b4b8-wt8ww eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali10b9aaf06fa [] []}} ContainerID="c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" Namespace="calico-system" Pod="calico-kube-controllers-6d6674b4b8-wt8ww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-"
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.384 [INFO][4240] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" Namespace="calico-system" Pod="calico-kube-controllers-6d6674b4b8-wt8ww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0"
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.415 [INFO][4253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" HandleID="k8s-pod-network.c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" Workload="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0"
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.434 [INFO][4253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" HandleID="k8s-pod-network.c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" Workload="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000403c90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d6674b4b8-wt8ww", "timestamp":"2024-11-12 17:58:18.415748289 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.435 [INFO][4253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.435 [INFO][4253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.435 [INFO][4253] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.440 [INFO][4253] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" host="localhost"
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.444 [INFO][4253] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.448 [INFO][4253] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.449 [INFO][4253] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.451 [INFO][4253] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.451 [INFO][4253] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" host="localhost"
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.453 [INFO][4253] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.456 [INFO][4253] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" host="localhost"
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.461 [INFO][4253] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" host="localhost"
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.461 [INFO][4253] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" host="localhost"
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.461 [INFO][4253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:58:18.482013 containerd[1440]: 2024-11-12 17:58:18.461 [INFO][4253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" HandleID="k8s-pod-network.c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" Workload="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0"
Nov 12 17:58:18.482580 containerd[1440]: 2024-11-12 17:58:18.465 [INFO][4240] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" Namespace="calico-system" Pod="calico-kube-controllers-6d6674b4b8-wt8ww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0", GenerateName:"calico-kube-controllers-6d6674b4b8-", Namespace:"calico-system", SelfLink:"", UID:"0bdbc14b-f620-427a-ae0b-74f889d89287", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6674b4b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d6674b4b8-wt8ww", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali10b9aaf06fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:58:18.482580 containerd[1440]: 2024-11-12 17:58:18.466 [INFO][4240] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" Namespace="calico-system" Pod="calico-kube-controllers-6d6674b4b8-wt8ww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0"
Nov 12 17:58:18.482580 containerd[1440]: 2024-11-12 17:58:18.466 [INFO][4240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10b9aaf06fa ContainerID="c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" Namespace="calico-system" Pod="calico-kube-controllers-6d6674b4b8-wt8ww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0"
Nov 12 17:58:18.482580 containerd[1440]: 2024-11-12 17:58:18.469 [INFO][4240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" Namespace="calico-system" Pod="calico-kube-controllers-6d6674b4b8-wt8ww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0"
Nov 12 17:58:18.482580 containerd[1440]: 2024-11-12 17:58:18.470 [INFO][4240] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" Namespace="calico-system" Pod="calico-kube-controllers-6d6674b4b8-wt8ww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0", GenerateName:"calico-kube-controllers-6d6674b4b8-", Namespace:"calico-system", SelfLink:"", UID:"0bdbc14b-f620-427a-ae0b-74f889d89287", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6674b4b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9", Pod:"calico-kube-controllers-6d6674b4b8-wt8ww", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali10b9aaf06fa", MAC:"3a:a4:16:6b:b2:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:58:18.482580 containerd[1440]: 2024-11-12 17:58:18.480 [INFO][4240] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9" Namespace="calico-system" Pod="calico-kube-controllers-6d6674b4b8-wt8ww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0"
Nov 12 17:58:18.497771 containerd[1440]: time="2024-11-12T17:58:18.497252712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:58:18.497771 containerd[1440]: time="2024-11-12T17:58:18.497688512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:58:18.497771 containerd[1440]: time="2024-11-12T17:58:18.497703312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:58:18.497962 containerd[1440]: time="2024-11-12T17:58:18.497788232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:58:18.528320 systemd[1]: Started cri-containerd-c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9.scope - libcontainer container c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9.
Nov 12 17:58:18.537404 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 12 17:58:18.553263 containerd[1440]: time="2024-11-12T17:58:18.553224062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6674b4b8-wt8ww,Uid:0bdbc14b-f620-427a-ae0b-74f889d89287,Namespace:calico-system,Attempt:1,} returns sandbox id \"c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9\""
Nov 12 17:58:18.555351 containerd[1440]: time="2024-11-12T17:58:18.555323185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\""
Nov 12 17:58:19.243909 containerd[1440]: time="2024-11-12T17:58:19.243778326Z" level=info msg="StopPodSandbox for \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\""
Nov 12 17:58:19.244244 containerd[1440]: time="2024-11-12T17:58:19.244217447Z" level=info msg="StopPodSandbox for \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\""
Nov 12 17:58:19.339293 containerd[1440]: 2024-11-12 17:58:19.302 [INFO][4354] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646"
Nov 12 17:58:19.339293 containerd[1440]: 2024-11-12 17:58:19.302 [INFO][4354] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" iface="eth0" netns="/var/run/netns/cni-3045d07d-e97b-5c3c-afbc-0ded854e13aa"
Nov 12 17:58:19.339293 containerd[1440]: 2024-11-12 17:58:19.302 [INFO][4354] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" iface="eth0" netns="/var/run/netns/cni-3045d07d-e97b-5c3c-afbc-0ded854e13aa"
Nov 12 17:58:19.339293 containerd[1440]: 2024-11-12 17:58:19.302 [INFO][4354] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" iface="eth0" netns="/var/run/netns/cni-3045d07d-e97b-5c3c-afbc-0ded854e13aa"
Nov 12 17:58:19.339293 containerd[1440]: 2024-11-12 17:58:19.302 [INFO][4354] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646"
Nov 12 17:58:19.339293 containerd[1440]: 2024-11-12 17:58:19.302 [INFO][4354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646"
Nov 12 17:58:19.339293 containerd[1440]: 2024-11-12 17:58:19.323 [INFO][4374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" HandleID="k8s-pod-network.e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Workload="localhost-k8s-csi--node--driver--sdd25-eth0"
Nov 12 17:58:19.339293 containerd[1440]: 2024-11-12 17:58:19.323 [INFO][4374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:58:19.339293 containerd[1440]: 2024-11-12 17:58:19.323 [INFO][4374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:58:19.339293 containerd[1440]: 2024-11-12 17:58:19.331 [WARNING][4374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" HandleID="k8s-pod-network.e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Workload="localhost-k8s-csi--node--driver--sdd25-eth0"
Nov 12 17:58:19.339293 containerd[1440]: 2024-11-12 17:58:19.331 [INFO][4374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" HandleID="k8s-pod-network.e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Workload="localhost-k8s-csi--node--driver--sdd25-eth0"
Nov 12 17:58:19.339293 containerd[1440]: 2024-11-12 17:58:19.333 [INFO][4374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:58:19.339293 containerd[1440]: 2024-11-12 17:58:19.335 [INFO][4354] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646"
Nov 12 17:58:19.342179 containerd[1440]: time="2024-11-12T17:58:19.340548126Z" level=info msg="TearDown network for sandbox \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\" successfully"
Nov 12 17:58:19.342179 containerd[1440]: time="2024-11-12T17:58:19.340585806Z" level=info msg="StopPodSandbox for \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\" returns successfully"
Nov 12 17:58:19.342359 containerd[1440]: time="2024-11-12T17:58:19.342328008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdd25,Uid:ea587a6f-1412-4dff-ac23-3aab0de5e566,Namespace:calico-system,Attempt:1,}"
Nov 12 17:58:19.343204 systemd[1]: run-netns-cni\x2d3045d07d\x2de97b\x2d5c3c\x2dafbc\x2d0ded854e13aa.mount: Deactivated successfully.
Nov 12 17:58:19.347625 containerd[1440]: 2024-11-12 17:58:19.290 [INFO][4353] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac"
Nov 12 17:58:19.347625 containerd[1440]: 2024-11-12 17:58:19.291 [INFO][4353] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" iface="eth0" netns="/var/run/netns/cni-85fdd223-1df0-1c36-1945-1562ce0b0d45"
Nov 12 17:58:19.347625 containerd[1440]: 2024-11-12 17:58:19.291 [INFO][4353] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" iface="eth0" netns="/var/run/netns/cni-85fdd223-1df0-1c36-1945-1562ce0b0d45"
Nov 12 17:58:19.347625 containerd[1440]: 2024-11-12 17:58:19.292 [INFO][4353] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" iface="eth0" netns="/var/run/netns/cni-85fdd223-1df0-1c36-1945-1562ce0b0d45"
Nov 12 17:58:19.347625 containerd[1440]: 2024-11-12 17:58:19.292 [INFO][4353] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac"
Nov 12 17:58:19.347625 containerd[1440]: 2024-11-12 17:58:19.292 [INFO][4353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac"
Nov 12 17:58:19.347625 containerd[1440]: 2024-11-12 17:58:19.325 [INFO][4368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" HandleID="k8s-pod-network.19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Workload="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0"
Nov 12 17:58:19.347625 containerd[1440]: 2024-11-12 17:58:19.325 [INFO][4368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:58:19.347625 containerd[1440]: 2024-11-12 17:58:19.333 [INFO][4368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:58:19.347625 containerd[1440]: 2024-11-12 17:58:19.342 [WARNING][4368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" HandleID="k8s-pod-network.19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Workload="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0"
Nov 12 17:58:19.347625 containerd[1440]: 2024-11-12 17:58:19.342 [INFO][4368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" HandleID="k8s-pod-network.19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Workload="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0"
Nov 12 17:58:19.347625 containerd[1440]: 2024-11-12 17:58:19.343 [INFO][4368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:58:19.347625 containerd[1440]: 2024-11-12 17:58:19.345 [INFO][4353] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac"
Nov 12 17:58:19.348092 containerd[1440]: time="2024-11-12T17:58:19.347986895Z" level=info msg="TearDown network for sandbox \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\" successfully"
Nov 12 17:58:19.348092 containerd[1440]: time="2024-11-12T17:58:19.348013095Z" level=info msg="StopPodSandbox for \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\" returns successfully"
Nov 12 17:58:19.348920 containerd[1440]: time="2024-11-12T17:58:19.348595856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8577f645-42z4v,Uid:796be7c7-8733-4ba8-8a44-8adf215d4e9b,Namespace:calico-apiserver,Attempt:1,}"
Nov 12 17:58:19.399235 kubelet[2550]: E1112 17:58:19.399203 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:58:19.436651 systemd[1]: run-netns-cni\x2d85fdd223\x2d1df0\x2d1c36\x2d1945\x2d1562ce0b0d45.mount: Deactivated successfully.
Nov 12 17:58:19.487966 systemd-networkd[1383]: calife582e83cb9: Link UP
Nov 12 17:58:19.489033 systemd-networkd[1383]: calife582e83cb9: Gained carrier
Nov 12 17:58:19.496687 systemd-networkd[1383]: calieaed83a4045: Gained IPv6LL
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.404 [INFO][4384] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sdd25-eth0 csi-node-driver- calico-system ea587a6f-1412-4dff-ac23-3aab0de5e566 959 0 2024-11-12 17:57:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85bdc57578 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-sdd25 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calife582e83cb9 [] []}} ContainerID="370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" Namespace="calico-system" Pod="csi-node-driver-sdd25" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdd25-"
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.405 [INFO][4384] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" Namespace="calico-system" Pod="csi-node-driver-sdd25" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdd25-eth0"
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.436 [INFO][4409] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" HandleID="k8s-pod-network.370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" Workload="localhost-k8s-csi--node--driver--sdd25-eth0"
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.448 [INFO][4409] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" HandleID="k8s-pod-network.370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" Workload="localhost-k8s-csi--node--driver--sdd25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c22a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sdd25", "timestamp":"2024-11-12 17:58:19.436910564 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.448 [INFO][4409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.449 [INFO][4409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.449 [INFO][4409] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.450 [INFO][4409] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" host="localhost"
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.454 [INFO][4409] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.462 [INFO][4409] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.465 [INFO][4409] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.468 [INFO][4409] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.468 [INFO][4409] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" host="localhost"
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.471 [INFO][4409] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.475 [INFO][4409] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" host="localhost"
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.482 [INFO][4409] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" host="localhost"
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.482 [INFO][4409] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" host="localhost"
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.482 [INFO][4409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:58:19.505188 containerd[1440]: 2024-11-12 17:58:19.482 [INFO][4409] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" HandleID="k8s-pod-network.370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" Workload="localhost-k8s-csi--node--driver--sdd25-eth0"
Nov 12 17:58:19.505733 containerd[1440]: 2024-11-12 17:58:19.484 [INFO][4384] cni-plugin/k8s.go 386: Populated endpoint ContainerID="370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" Namespace="calico-system" Pod="csi-node-driver-sdd25" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdd25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sdd25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ea587a6f-1412-4dff-ac23-3aab0de5e566", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sdd25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife582e83cb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:58:19.505733 containerd[1440]: 2024-11-12 17:58:19.485 [INFO][4384] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" Namespace="calico-system" Pod="csi-node-driver-sdd25" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdd25-eth0"
Nov 12 17:58:19.505733 containerd[1440]: 2024-11-12 17:58:19.485 [INFO][4384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife582e83cb9 ContainerID="370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" Namespace="calico-system" Pod="csi-node-driver-sdd25" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdd25-eth0"
Nov 12 17:58:19.505733 containerd[1440]: 2024-11-12 17:58:19.489 [INFO][4384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" Namespace="calico-system" Pod="csi-node-driver-sdd25" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdd25-eth0"
Nov 12 17:58:19.505733 containerd[1440]: 2024-11-12 17:58:19.490 [INFO][4384] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" Namespace="calico-system" Pod="csi-node-driver-sdd25" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdd25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sdd25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ea587a6f-1412-4dff-ac23-3aab0de5e566", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75", Pod:"csi-node-driver-sdd25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife582e83cb9", MAC:"b6:4d:b4:1f:50:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:58:19.505733 containerd[1440]: 2024-11-12 17:58:19.502 [INFO][4384] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75" Namespace="calico-system" Pod="csi-node-driver-sdd25" WorkloadEndpoint="localhost-k8s-csi--node--driver--sdd25-eth0"
Nov 12 17:58:19.528669 systemd-networkd[1383]: calif5a2dfd1eaa: Link UP
Nov 12 17:58:19.529151 systemd-networkd[1383]: calif5a2dfd1eaa: Gained carrier
Nov 12 17:58:19.549604 containerd[1440]: time="2024-11-12T17:58:19.548542142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:58:19.549604 containerd[1440]: time="2024-11-12T17:58:19.548591862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:58:19.549604 containerd[1440]: time="2024-11-12T17:58:19.548617982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:58:19.549604 containerd[1440]: time="2024-11-12T17:58:19.548711142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.424 [INFO][4396] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0 calico-apiserver-6f8577f645- calico-apiserver 796be7c7-8733-4ba8-8a44-8adf215d4e9b 958 0 2024-11-12 17:57:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f8577f645 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f8577f645-42z4v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif5a2dfd1eaa [] []}} ContainerID="35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-42z4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--42z4v-"
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.425 [INFO][4396] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-42z4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0"
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.456 [INFO][4417] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" HandleID="k8s-pod-network.35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" Workload="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0"
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.476 [INFO][4417] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" HandleID="k8s-pod-network.35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" Workload="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000303ba0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f8577f645-42z4v", "timestamp":"2024-11-12 17:58:19.456492589 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.476 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.482 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.482 [INFO][4417] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.484 [INFO][4417] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" host="localhost"
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.492 [INFO][4417] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.497 [INFO][4417] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.499 [INFO][4417] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.506 [INFO][4417] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.506 [INFO][4417] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" host="localhost"
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.508 [INFO][4417] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.513 [INFO][4417] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" host="localhost"
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.521 [INFO][4417] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" host="localhost"
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.521 [INFO][4417] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" host="localhost"
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.521 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:58:19.550361 containerd[1440]: 2024-11-12 17:58:19.521 [INFO][4417] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" HandleID="k8s-pod-network.35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" Workload="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0"
Nov 12 17:58:19.550798 containerd[1440]: 2024-11-12 17:58:19.524 [INFO][4396] cni-plugin/k8s.go 386: Populated endpoint ContainerID="35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-42z4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0", GenerateName:"calico-apiserver-6f8577f645-", Namespace:"calico-apiserver", SelfLink:"", UID:"796be7c7-8733-4ba8-8a44-8adf215d4e9b", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8577f645", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f8577f645-42z4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5a2dfd1eaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:58:19.550798 containerd[1440]: 2024-11-12 17:58:19.524 [INFO][4396] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-42z4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0"
Nov 12 17:58:19.550798 containerd[1440]: 2024-11-12 17:58:19.524 [INFO][4396] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5a2dfd1eaa ContainerID="35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-42z4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0"
Nov 12 17:58:19.550798 containerd[1440]: 2024-11-12 17:58:19.528 [INFO][4396] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-42z4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0"
Nov 12 17:58:19.550798 containerd[1440]: 2024-11-12 17:58:19.529 [INFO][4396] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-42z4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0", GenerateName:"calico-apiserver-6f8577f645-", Namespace:"calico-apiserver", SelfLink:"", UID:"796be7c7-8733-4ba8-8a44-8adf215d4e9b", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8577f645", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5", Pod:"calico-apiserver-6f8577f645-42z4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5a2dfd1eaa", MAC:"0e:6b:fa:66:75:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:58:19.550798 containerd[1440]: 2024-11-12 17:58:19.547 [INFO][4396] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-42z4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0"
Nov 12 17:58:19.575707 containerd[1440]: time="2024-11-12T17:58:19.575507695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:58:19.575707 containerd[1440]: time="2024-11-12T17:58:19.575600015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:58:19.575707 containerd[1440]: time="2024-11-12T17:58:19.575614655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:58:19.575980 containerd[1440]: time="2024-11-12T17:58:19.575893656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:58:19.581493 systemd[1]: Started cri-containerd-370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75.scope - libcontainer container 370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75.
Nov 12 17:58:19.605327 systemd[1]: Started cri-containerd-35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5.scope - libcontainer container 35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5.
Nov 12 17:58:19.609192 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:58:19.626120 containerd[1440]: time="2024-11-12T17:58:19.626066797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdd25,Uid:ea587a6f-1412-4dff-ac23-3aab0de5e566,Namespace:calico-system,Attempt:1,} returns sandbox id \"370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75\"" Nov 12 17:58:19.633642 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:58:19.655386 containerd[1440]: time="2024-11-12T17:58:19.655282553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8577f645-42z4v,Uid:796be7c7-8733-4ba8-8a44-8adf215d4e9b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5\"" Nov 12 17:58:19.686374 systemd-networkd[1383]: cali10b9aaf06fa: Gained IPv6LL Nov 12 17:58:20.031787 containerd[1440]: time="2024-11-12T17:58:20.031738336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:20.033760 containerd[1440]: time="2024-11-12T17:58:20.033720019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=31961371" Nov 12 17:58:20.034571 containerd[1440]: time="2024-11-12T17:58:20.034533900Z" level=info msg="ImageCreate event name:\"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:20.036566 containerd[1440]: time="2024-11-12T17:58:20.036518342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:20.037270 containerd[1440]: time="2024-11-12T17:58:20.037241463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"33330975\" in 1.481881558s" Nov 12 17:58:20.037457 containerd[1440]: time="2024-11-12T17:58:20.037354783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\"" Nov 12 17:58:20.038578 containerd[1440]: time="2024-11-12T17:58:20.038214784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 17:58:20.045198 containerd[1440]: time="2024-11-12T17:58:20.044932432Z" level=info msg="CreateContainer within sandbox \"c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 17:58:20.058261 containerd[1440]: time="2024-11-12T17:58:20.058212648Z" level=info msg="CreateContainer within sandbox \"c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b9b41292e3fb66240716a90a8db5ce7967532b4e632ce797775dcf78d299aea2\"" Nov 12 17:58:20.059402 containerd[1440]: 
time="2024-11-12T17:58:20.058646849Z" level=info msg="StartContainer for \"b9b41292e3fb66240716a90a8db5ce7967532b4e632ce797775dcf78d299aea2\"" Nov 12 17:58:20.087357 systemd[1]: Started cri-containerd-b9b41292e3fb66240716a90a8db5ce7967532b4e632ce797775dcf78d299aea2.scope - libcontainer container b9b41292e3fb66240716a90a8db5ce7967532b4e632ce797775dcf78d299aea2. Nov 12 17:58:20.120488 containerd[1440]: time="2024-11-12T17:58:20.120430483Z" level=info msg="StartContainer for \"b9b41292e3fb66240716a90a8db5ce7967532b4e632ce797775dcf78d299aea2\" returns successfully" Nov 12 17:58:20.244021 containerd[1440]: time="2024-11-12T17:58:20.243975432Z" level=info msg="StopPodSandbox for \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\"" Nov 12 17:58:20.244856 containerd[1440]: time="2024-11-12T17:58:20.244342592Z" level=info msg="StopPodSandbox for \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\"" Nov 12 17:58:20.335912 containerd[1440]: 2024-11-12 17:58:20.291 [INFO][4615] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:20.335912 containerd[1440]: 2024-11-12 17:58:20.291 [INFO][4615] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" iface="eth0" netns="/var/run/netns/cni-6b3faebd-d219-ae12-2ddf-8ffd30563aa4" Nov 12 17:58:20.335912 containerd[1440]: 2024-11-12 17:58:20.291 [INFO][4615] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" iface="eth0" netns="/var/run/netns/cni-6b3faebd-d219-ae12-2ddf-8ffd30563aa4" Nov 12 17:58:20.335912 containerd[1440]: 2024-11-12 17:58:20.292 [INFO][4615] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" iface="eth0" netns="/var/run/netns/cni-6b3faebd-d219-ae12-2ddf-8ffd30563aa4" Nov 12 17:58:20.335912 containerd[1440]: 2024-11-12 17:58:20.292 [INFO][4615] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:20.335912 containerd[1440]: 2024-11-12 17:58:20.292 [INFO][4615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:20.335912 containerd[1440]: 2024-11-12 17:58:20.318 [INFO][4630] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" HandleID="k8s-pod-network.3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Workload="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:20.335912 containerd[1440]: 2024-11-12 17:58:20.319 [INFO][4630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:20.335912 containerd[1440]: 2024-11-12 17:58:20.319 [INFO][4630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:58:20.335912 containerd[1440]: 2024-11-12 17:58:20.328 [WARNING][4630] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" HandleID="k8s-pod-network.3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Workload="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:20.335912 containerd[1440]: 2024-11-12 17:58:20.328 [INFO][4630] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" HandleID="k8s-pod-network.3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Workload="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:20.335912 containerd[1440]: 2024-11-12 17:58:20.330 [INFO][4630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:20.335912 containerd[1440]: 2024-11-12 17:58:20.334 [INFO][4615] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:20.336350 containerd[1440]: time="2024-11-12T17:58:20.335984903Z" level=info msg="TearDown network for sandbox \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\" successfully" Nov 12 17:58:20.336350 containerd[1440]: time="2024-11-12T17:58:20.336009343Z" level=info msg="StopPodSandbox for \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\" returns successfully" Nov 12 17:58:20.336396 kubelet[2550]: E1112 17:58:20.336265 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:20.338628 containerd[1440]: time="2024-11-12T17:58:20.338530426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lv4gc,Uid:4ed5cf09-ed06-4e8b-8c68-5b53322839e8,Namespace:kube-system,Attempt:1,}" Nov 12 17:58:20.347233 containerd[1440]: 2024-11-12 17:58:20.293 [INFO][4616] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:20.347233 containerd[1440]: 2024-11-12 17:58:20.293 [INFO][4616] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" iface="eth0" netns="/var/run/netns/cni-d716f0f1-8d66-965b-231c-6a6ad299641e" Nov 12 17:58:20.347233 containerd[1440]: 2024-11-12 17:58:20.294 [INFO][4616] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" iface="eth0" netns="/var/run/netns/cni-d716f0f1-8d66-965b-231c-6a6ad299641e" Nov 12 17:58:20.347233 containerd[1440]: 2024-11-12 17:58:20.294 [INFO][4616] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" iface="eth0" netns="/var/run/netns/cni-d716f0f1-8d66-965b-231c-6a6ad299641e" Nov 12 17:58:20.347233 containerd[1440]: 2024-11-12 17:58:20.294 [INFO][4616] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:20.347233 containerd[1440]: 2024-11-12 17:58:20.294 [INFO][4616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:20.347233 containerd[1440]: 2024-11-12 17:58:20.319 [INFO][4631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" HandleID="k8s-pod-network.92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Workload="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:20.347233 containerd[1440]: 2024-11-12 17:58:20.319 [INFO][4631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:20.347233 containerd[1440]: 2024-11-12 17:58:20.330 [INFO][4631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:58:20.347233 containerd[1440]: 2024-11-12 17:58:20.341 [WARNING][4631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" HandleID="k8s-pod-network.92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Workload="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:20.347233 containerd[1440]: 2024-11-12 17:58:20.341 [INFO][4631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" HandleID="k8s-pod-network.92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Workload="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:20.347233 containerd[1440]: 2024-11-12 17:58:20.343 [INFO][4631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:20.347233 containerd[1440]: 2024-11-12 17:58:20.345 [INFO][4616] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:20.347766 containerd[1440]: time="2024-11-12T17:58:20.347314556Z" level=info msg="TearDown network for sandbox \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\" successfully" Nov 12 17:58:20.347766 containerd[1440]: time="2024-11-12T17:58:20.347335596Z" level=info msg="StopPodSandbox for \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\" returns successfully" Nov 12 17:58:20.348642 containerd[1440]: time="2024-11-12T17:58:20.348602958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8577f645-7qhtp,Uid:79693433-db53-4b04-81f4-8647ae5d69bf,Namespace:calico-apiserver,Attempt:1,}" Nov 12 17:58:20.410248 kubelet[2550]: E1112 17:58:20.409004 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:20.426691 kubelet[2550]: I1112 17:58:20.426031 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d6674b4b8-wt8ww" podStartSLOduration=23.942346811 podStartE2EDuration="25.426014491s" podCreationTimestamp="2024-11-12 17:57:55 +0000 UTC" firstStartedPulling="2024-11-12 17:58:18.554387984 +0000 UTC m=+44.401494585" lastFinishedPulling="2024-11-12 17:58:20.038055664 +0000 UTC m=+45.885162265" observedRunningTime="2024-11-12 17:58:20.425724731 +0000 UTC m=+46.272831292" watchObservedRunningTime="2024-11-12 17:58:20.426014491 +0000 UTC m=+46.273121092" Nov 12 17:58:20.440654 systemd[1]: run-netns-cni\x2d6b3faebd\x2dd219\x2dae12\x2d2ddf\x2d8ffd30563aa4.mount: Deactivated successfully. Nov 12 17:58:20.440744 systemd[1]: run-netns-cni\x2dd716f0f1\x2d8d66\x2d965b\x2d231c\x2d6a6ad299641e.mount: Deactivated successfully. 
Nov 12 17:58:20.525851 systemd-networkd[1383]: cali57828232e3c: Link UP Nov 12 17:58:20.527754 systemd-networkd[1383]: cali57828232e3c: Gained carrier Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.430 [INFO][4657] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0 calico-apiserver-6f8577f645- calico-apiserver 79693433-db53-4b04-81f4-8647ae5d69bf 979 0 2024-11-12 17:57:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f8577f645 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f8577f645-7qhtp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali57828232e3c [] []}} ContainerID="f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-7qhtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-" Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.430 [INFO][4657] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-7qhtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.473 [INFO][4676] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" HandleID="k8s-pod-network.f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" Workload="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.488 [INFO][4676] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" HandleID="k8s-pod-network.f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" Workload="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000360950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f8577f645-7qhtp", "timestamp":"2024-11-12 17:58:20.473460748 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.488 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.488 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.488 [INFO][4676] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.490 [INFO][4676] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" host="localhost" Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.494 [INFO][4676] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.500 [INFO][4676] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.503 [INFO][4676] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.507 [INFO][4676] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.507 [INFO][4676] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" host="localhost" Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.508 [INFO][4676] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33 Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.512 [INFO][4676] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" host="localhost" Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.518 [INFO][4676] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" host="localhost" Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.518 [INFO][4676] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" host="localhost" Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.518 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 17:58:20.553219 containerd[1440]: 2024-11-12 17:58:20.518 [INFO][4676] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" HandleID="k8s-pod-network.f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" Workload="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:20.554065 containerd[1440]: 2024-11-12 17:58:20.521 [INFO][4657] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-7qhtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0", GenerateName:"calico-apiserver-6f8577f645-", Namespace:"calico-apiserver", SelfLink:"", UID:"79693433-db53-4b04-81f4-8647ae5d69bf", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8577f645", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f8577f645-7qhtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57828232e3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:20.554065 containerd[1440]: 2024-11-12 17:58:20.521 [INFO][4657] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-7qhtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:20.554065 containerd[1440]: 2024-11-12 17:58:20.521 [INFO][4657] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57828232e3c ContainerID="f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-7qhtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:20.554065 containerd[1440]: 2024-11-12 17:58:20.528 [INFO][4657] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-7qhtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:20.554065 containerd[1440]: 2024-11-12 17:58:20.528 [INFO][4657] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-7qhtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0", GenerateName:"calico-apiserver-6f8577f645-", Namespace:"calico-apiserver", SelfLink:"", UID:"79693433-db53-4b04-81f4-8647ae5d69bf", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8577f645", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33", Pod:"calico-apiserver-6f8577f645-7qhtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57828232e3c", MAC:"f2:a6:bb:c8:da:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:20.554065 containerd[1440]: 2024-11-12 17:58:20.539 [INFO][4657] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33" Namespace="calico-apiserver" Pod="calico-apiserver-6f8577f645-7qhtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:20.570508 systemd-networkd[1383]: cali144ef64d91d: Link UP Nov 12 17:58:20.571376 systemd-networkd[1383]: cali144ef64d91d: Gained carrier Nov 12 17:58:20.583774 systemd-networkd[1383]: calife582e83cb9: Gained IPv6LL Nov 12 17:58:20.584759 containerd[1440]: time="2024-11-12T17:58:20.584590762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:58:20.584759 containerd[1440]: time="2024-11-12T17:58:20.584646242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:58:20.584759 containerd[1440]: time="2024-11-12T17:58:20.584661482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:58:20.585029 containerd[1440]: time="2024-11-12T17:58:20.584744282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.430 [INFO][4645] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0 coredns-7db6d8ff4d- kube-system 4ed5cf09-ed06-4e8b-8c68-5b53322839e8 978 0 2024-11-12 17:57:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-lv4gc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali144ef64d91d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lv4gc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lv4gc-" Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.430 [INFO][4645] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lv4gc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.476 [INFO][4675] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" HandleID="k8s-pod-network.e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" Workload="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.491 [INFO][4675] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" HandleID="k8s-pod-network.e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" Workload="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000293780), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-lv4gc", "timestamp":"2024-11-12 17:58:20.476469432 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.492 [INFO][4675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.519 [INFO][4675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.519 [INFO][4675] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.522 [INFO][4675] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" host="localhost" Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.528 [INFO][4675] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.534 [INFO][4675] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.540 [INFO][4675] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.544 [INFO][4675] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.544 [INFO][4675] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" host="localhost" Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.548 [INFO][4675] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705 Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.556 [INFO][4675] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" host="localhost" Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.563 [INFO][4675] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" host="localhost" Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.563 [INFO][4675] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" host="localhost" Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.563 [INFO][4675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 17:58:20.589585 containerd[1440]: 2024-11-12 17:58:20.563 [INFO][4675] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" HandleID="k8s-pod-network.e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" Workload="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:20.590073 containerd[1440]: 2024-11-12 17:58:20.566 [INFO][4645] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lv4gc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4ed5cf09-ed06-4e8b-8c68-5b53322839e8", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-lv4gc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali144ef64d91d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:20.590073 containerd[1440]: 2024-11-12 17:58:20.567 [INFO][4645] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lv4gc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:20.590073 containerd[1440]: 2024-11-12 17:58:20.567 [INFO][4645] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali144ef64d91d ContainerID="e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lv4gc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:20.590073 containerd[1440]: 2024-11-12 17:58:20.571 [INFO][4645] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lv4gc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:20.590073 containerd[1440]: 2024-11-12 17:58:20.572 
[INFO][4645] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lv4gc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4ed5cf09-ed06-4e8b-8c68-5b53322839e8", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705", Pod:"coredns-7db6d8ff4d-lv4gc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali144ef64d91d", MAC:"46:40:44:6d:ea:a7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:20.590073 containerd[1440]: 2024-11-12 17:58:20.584 [INFO][4645] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lv4gc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:20.608206 containerd[1440]: time="2024-11-12T17:58:20.607945190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:58:20.608206 containerd[1440]: time="2024-11-12T17:58:20.608018790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:58:20.608206 containerd[1440]: time="2024-11-12T17:58:20.608034350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:58:20.608377 containerd[1440]: time="2024-11-12T17:58:20.608320351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:58:20.615643 systemd[1]: Started cri-containerd-f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33.scope - libcontainer container f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33. 
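One decoding note for the coredns endpoint dumps above: WorkloadEndpointPort values print in hex, so Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (coredns's Prometheus metrics port):

```go
package main

import "fmt"

func main() {
	// Ports from the coredns WorkloadEndpoint dump, printed there in hex.
	fmt.Println(0x35)   // 53   - dns (UDP) and dns-tcp (TCP)
	fmt.Println(0x23c1) // 9153 - coredns metrics
}
```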
Nov 12 17:58:20.634332 systemd[1]: Started cri-containerd-e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705.scope - libcontainer container e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705. Nov 12 17:58:20.641559 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:58:20.646348 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 17:58:20.669747 containerd[1440]: time="2024-11-12T17:58:20.669692105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8577f645-7qhtp,Uid:79693433-db53-4b04-81f4-8647ae5d69bf,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33\"" Nov 12 17:58:20.672266 containerd[1440]: time="2024-11-12T17:58:20.672238228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lv4gc,Uid:4ed5cf09-ed06-4e8b-8c68-5b53322839e8,Namespace:kube-system,Attempt:1,} returns sandbox id \"e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705\"" Nov 12 17:58:20.673503 kubelet[2550]: E1112 17:58:20.673476 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:20.675417 containerd[1440]: time="2024-11-12T17:58:20.675387351Z" level=info msg="CreateContainer within sandbox \"e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 17:58:20.693371 containerd[1440]: time="2024-11-12T17:58:20.693319573Z" level=info msg="CreateContainer within sandbox \"e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"449ec916543af172d1fe20a061681b92822e5a08a189408a564b2a6f38a2fc6c\"" Nov 12 17:58:20.694035 containerd[1440]: time="2024-11-12T17:58:20.693992094Z" level=info msg="StartContainer for \"449ec916543af172d1fe20a061681b92822e5a08a189408a564b2a6f38a2fc6c\"" Nov 12 17:58:20.720322 systemd[1]: Started cri-containerd-449ec916543af172d1fe20a061681b92822e5a08a189408a564b2a6f38a2fc6c.scope - libcontainer container 449ec916543af172d1fe20a061681b92822e5a08a189408a564b2a6f38a2fc6c. 
Nov 12 17:58:20.741490 containerd[1440]: time="2024-11-12T17:58:20.741451671Z" level=info msg="StartContainer for \"449ec916543af172d1fe20a061681b92822e5a08a189408a564b2a6f38a2fc6c\" returns successfully" Nov 12 17:58:21.031338 systemd-networkd[1383]: calif5a2dfd1eaa: Gained IPv6LL Nov 12 17:58:21.047091 containerd[1440]: time="2024-11-12T17:58:21.047030798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:21.048068 containerd[1440]: time="2024-11-12T17:58:21.048038399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7464731" Nov 12 17:58:21.048928 containerd[1440]: time="2024-11-12T17:58:21.048882160Z" level=info msg="ImageCreate event name:\"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:21.051091 containerd[1440]: time="2024-11-12T17:58:21.051044563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:21.051997 containerd[1440]: time="2024-11-12T17:58:21.051957524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"8834367\" in 1.01369522s" Nov 12 17:58:21.052051 containerd[1440]: time="2024-11-12T17:58:21.051998004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\"" Nov 12 17:58:21.053741 containerd[1440]: time="2024-11-12T17:58:21.053716646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 17:58:21.054914 containerd[1440]: time="2024-11-12T17:58:21.054884047Z" level=info msg="CreateContainer within sandbox \"370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 17:58:21.065345 containerd[1440]: time="2024-11-12T17:58:21.065310979Z" level=info msg="CreateContainer within sandbox \"370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c3c7c904755312cbc6a622a6e1e8b96e53a044eb469a7b2a7d91a6cbc7912a53\"" Nov 12 17:58:21.066452 containerd[1440]: time="2024-11-12T17:58:21.065712540Z" level=info msg="StartContainer for \"c3c7c904755312cbc6a622a6e1e8b96e53a044eb469a7b2a7d91a6cbc7912a53\"" Nov 12 17:58:21.096344 systemd[1]: Started cri-containerd-c3c7c904755312cbc6a622a6e1e8b96e53a044eb469a7b2a7d91a6cbc7912a53.scope - libcontainer container c3c7c904755312cbc6a622a6e1e8b96e53a044eb469a7b2a7d91a6cbc7912a53. 
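Each "Gained IPv6LL" event here is systemd-networkd observing the kernel's link-local address appear on a freshly created cali* veth. Under classic EUI-64 addressing that address is derived from the interface MAC recorded in the endpoint dumps; kernels configured for stable-privacy addresses derive it differently, so treat the derivation below as an assumption rather than what this host necessarily did:

```go
package main

import (
	"fmt"
	"net"
)

// eui64LinkLocal derives the classic EUI-64 IPv6 link-local address for a
// MAC: flip the universal/local bit of the first octet and splice ff:fe
// into the middle of the 48-bit address.
func eui64LinkLocal(mac net.HardwareAddr) net.IP {
	return net.IP{0xfe, 0x80, 0, 0, 0, 0, 0, 0,
		mac[0] ^ 0x02, mac[1], mac[2], 0xff, 0xfe, mac[3], mac[4], mac[5]}
}

func main() {
	// MAC of calif5a2dfd1eaa, from the endpoint dump earlier in the log.
	mac, _ := net.ParseMAC("0e:6b:fa:66:75:ed")
	fmt.Println(eui64LinkLocal(mac)) // fe80::c6b:faff:fe66:75ed
}
```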
Nov 12 17:58:21.121564 containerd[1440]: time="2024-11-12T17:58:21.121526486Z" level=info msg="StartContainer for \"c3c7c904755312cbc6a622a6e1e8b96e53a044eb469a7b2a7d91a6cbc7912a53\" returns successfully" Nov 12 17:58:21.417456 kubelet[2550]: E1112 17:58:21.417358 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:21.427251 kubelet[2550]: I1112 17:58:21.427196 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lv4gc" podStartSLOduration=32.427180006 podStartE2EDuration="32.427180006s" podCreationTimestamp="2024-11-12 17:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:58:21.426489325 +0000 UTC m=+47.273595926" watchObservedRunningTime="2024-11-12 17:58:21.427180006 +0000 UTC m=+47.274286607" Nov 12 17:58:21.734300 systemd-networkd[1383]: cali144ef64d91d: Gained IPv6LL Nov 12 17:58:21.940756 systemd[1]: Started sshd@12-10.0.0.106:22-10.0.0.1:58182.service - OpenSSH per-connection server daemon (10.0.0.1:58182). Nov 12 17:58:21.992593 sshd[4914]: Accepted publickey for core from 10.0.0.1 port 58182 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:58:21.994873 sshd[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:58:21.999819 systemd-logind[1424]: New session 13 of user core. Nov 12 17:58:22.007317 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 17:58:22.187644 sshd[4914]: pam_unix(sshd:session): session closed for user core Nov 12 17:58:22.201985 systemd[1]: sshd@12-10.0.0.106:22-10.0.0.1:58182.service: Deactivated successfully. Nov 12 17:58:22.203636 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 17:58:22.206097 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit. Nov 12 17:58:22.207257 systemd[1]: Started sshd@13-10.0.0.106:22-10.0.0.1:58192.service - OpenSSH per-connection server daemon (10.0.0.1:58192). Nov 12 17:58:22.208720 systemd-logind[1424]: Removed session 13. Nov 12 17:58:22.245466 sshd[4928]: Accepted publickey for core from 10.0.0.1 port 58192 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:58:22.246992 sshd[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:58:22.251153 systemd-logind[1424]: New session 14 of user core. Nov 12 17:58:22.268301 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 17:58:22.310357 systemd-networkd[1383]: cali57828232e3c: Gained IPv6LL Nov 12 17:58:22.426169 kubelet[2550]: E1112 17:58:22.426119 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:22.551033 sshd[4928]: pam_unix(sshd:session): session closed for user core Nov 12 17:58:22.560025 systemd[1]: sshd@13-10.0.0.106:22-10.0.0.1:58192.service: Deactivated successfully. Nov 12 17:58:22.562489 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 17:58:22.564421 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit. Nov 12 17:58:22.575224 systemd[1]: Started sshd@14-10.0.0.106:22-10.0.0.1:42114.service - OpenSSH per-connection server daemon (10.0.0.1:42114). Nov 12 17:58:22.576892 systemd-logind[1424]: Removed session 14. 
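The pod_startup_latency_tracker lines are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). For the coredns pod above the pull timestamps are zero values, so the two durations coincide at 32.427180006s; for the earlier calico-kube-controllers line the arithmetic also checks out exactly:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps from the calico-kube-controllers tracker line in this log.
	created := time.Date(2024, 11, 12, 17, 57, 55, 0, time.UTC)
	observedRunning := time.Date(2024, 11, 12, 17, 58, 20, 426014491, time.UTC)
	firstPull := time.Date(2024, 11, 12, 17, 58, 18, 554387984, time.UTC)
	lastPull := time.Date(2024, 11, 12, 17, 58, 20, 38055664, time.UTC)

	e2e := observedRunning.Sub(created)  // end-to-end startup
	slo := e2e - lastPull.Sub(firstPull) // minus 1.48366768s of image pulling
	fmt.Println(e2e, slo)                // 25.426014491s 23.942346811s
}
```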
Nov 12 17:58:22.622673 sshd[4944]: Accepted publickey for core from 10.0.0.1 port 42114 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:58:22.624347 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:58:22.629789 systemd-logind[1424]: New session 15 of user core. Nov 12 17:58:22.638311 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 17:58:22.856503 containerd[1440]: time="2024-11-12T17:58:22.856390908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:22.857758 containerd[1440]: time="2024-11-12T17:58:22.856904509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=39277239" Nov 12 17:58:22.858396 containerd[1440]: time="2024-11-12T17:58:22.858129030Z" level=info msg="ImageCreate event name:\"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:22.860617 containerd[1440]: time="2024-11-12T17:58:22.860336953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:22.861713 containerd[1440]: time="2024-11-12T17:58:22.861686514Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"40646891\" in 1.807940588s" Nov 12 17:58:22.861760 containerd[1440]: time="2024-11-12T17:58:22.861720074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\"" Nov 12 17:58:22.862778 containerd[1440]: time="2024-11-12T17:58:22.862552195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 17:58:22.865493 containerd[1440]: time="2024-11-12T17:58:22.865257599Z" level=info msg="CreateContainer within sandbox \"35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 17:58:22.878067 containerd[1440]: time="2024-11-12T17:58:22.877972013Z" level=info msg="CreateContainer within sandbox \"35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4573ffb1e141775c7b3017505949c4a41709b29dcf785ee9ca99b3ee8bc66bfa\"" Nov 12 17:58:22.878713 containerd[1440]: time="2024-11-12T17:58:22.878670494Z" level=info msg="StartContainer for \"4573ffb1e141775c7b3017505949c4a41709b29dcf785ee9ca99b3ee8bc66bfa\"" Nov 12 17:58:22.920335 systemd[1]: Started cri-containerd-4573ffb1e141775c7b3017505949c4a41709b29dcf785ee9ca99b3ee8bc66bfa.scope - libcontainer container 4573ffb1e141775c7b3017505949c4a41709b29dcf785ee9ca99b3ee8bc66bfa. 
Nov 12 17:58:22.950617 containerd[1440]: time="2024-11-12T17:58:22.950567257Z" level=info msg="StartContainer for \"4573ffb1e141775c7b3017505949c4a41709b29dcf785ee9ca99b3ee8bc66bfa\" returns successfully" Nov 12 17:58:23.073364 containerd[1440]: time="2024-11-12T17:58:23.073307237Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:23.074099 containerd[1440]: time="2024-11-12T17:58:23.074056958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 17:58:23.077618 containerd[1440]: time="2024-11-12T17:58:23.077573562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"40646891\" in 214.988687ms" Nov 12 17:58:23.077618 containerd[1440]: time="2024-11-12T17:58:23.077615282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\"" Nov 12 17:58:23.078858 containerd[1440]: time="2024-11-12T17:58:23.078835603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 17:58:23.079573 containerd[1440]: time="2024-11-12T17:58:23.079528844Z" level=info msg="CreateContainer within sandbox \"f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 17:58:23.094555 containerd[1440]: time="2024-11-12T17:58:23.094504901Z" level=info msg="CreateContainer within sandbox \"f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ca05418fe686b6a61783a9e9937763e80531d85108e80fe0ddb96346b6545e0a\"" Nov 12 17:58:23.095961 containerd[1440]: time="2024-11-12T17:58:23.095928822Z" level=info msg="StartContainer for \"ca05418fe686b6a61783a9e9937763e80531d85108e80fe0ddb96346b6545e0a\"" Nov 12 17:58:23.195312 systemd[1]: Started cri-containerd-ca05418fe686b6a61783a9e9937763e80531d85108e80fe0ddb96346b6545e0a.scope - libcontainer container ca05418fe686b6a61783a9e9937763e80531d85108e80fe0ddb96346b6545e0a. 
Nov 12 17:58:23.245902 containerd[1440]: time="2024-11-12T17:58:23.245845952Z" level=info msg="StartContainer for \"ca05418fe686b6a61783a9e9937763e80531d85108e80fe0ddb96346b6545e0a\" returns successfully" Nov 12 17:58:23.434822 kubelet[2550]: E1112 17:58:23.434781 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:23.460518 kubelet[2550]: I1112 17:58:23.460248 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f8577f645-42z4v" podStartSLOduration=25.254473154 podStartE2EDuration="28.460230874s" podCreationTimestamp="2024-11-12 17:57:55 +0000 UTC" firstStartedPulling="2024-11-12 17:58:19.656623875 +0000 UTC m=+45.503730476" lastFinishedPulling="2024-11-12 17:58:22.862381595 +0000 UTC m=+48.709488196" observedRunningTime="2024-11-12 17:58:23.459320473 +0000 UTC m=+49.306427074" watchObservedRunningTime="2024-11-12 17:58:23.460230874 +0000 UTC m=+49.307337475" Nov 12 17:58:23.478730 kubelet[2550]: I1112 17:58:23.478207 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f8577f645-7qhtp" podStartSLOduration=26.070968038 podStartE2EDuration="28.478188295s" podCreationTimestamp="2024-11-12 17:57:55 +0000 UTC" firstStartedPulling="2024-11-12 17:58:20.671079186 +0000 UTC m=+46.518185787" lastFinishedPulling="2024-11-12 17:58:23.078299443 +0000 UTC m=+48.925406044" observedRunningTime="2024-11-12 17:58:23.476846653 +0000 UTC m=+49.323953254" watchObservedRunningTime="2024-11-12 17:58:23.478188295 +0000 UTC m=+49.325294976" Nov 12 17:58:24.179130 kubelet[2550]: I1112 17:58:24.179091 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:58:24.181321 kubelet[2550]: E1112 17:58:24.181298 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:24.211999 sshd[4944]: pam_unix(sshd:session): session closed for user core Nov 12 17:58:24.226992 systemd[1]: sshd@14-10.0.0.106:22-10.0.0.1:42114.service: Deactivated successfully. Nov 12 17:58:24.234205 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 17:58:24.241135 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit. Nov 12 17:58:24.248700 systemd[1]: Started sshd@15-10.0.0.106:22-10.0.0.1:42122.service - OpenSSH per-connection server daemon (10.0.0.1:42122). Nov 12 17:58:24.256026 systemd-logind[1424]: Removed session 15. Nov 12 17:58:24.352312 sshd[5058]: Accepted publickey for core from 10.0.0.1 port 42122 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:58:24.354525 sshd[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:58:24.362391 systemd-logind[1424]: New session 16 of user core. Nov 12 17:58:24.369429 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 12 17:58:24.440416 kubelet[2550]: E1112 17:58:24.440302 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 17:58:24.440707 kubelet[2550]: I1112 17:58:24.440683 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:58:24.580754 containerd[1440]: time="2024-11-12T17:58:24.580246288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:24.581927 containerd[1440]: time="2024-11-12T17:58:24.581834570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=9883360" Nov 12 17:58:24.582770 containerd[1440]: time="2024-11-12T17:58:24.582717171Z" level=info msg="ImageCreate event name:\"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:24.589809 containerd[1440]: time="2024-11-12T17:58:24.589055378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:58:24.589809 containerd[1440]: time="2024-11-12T17:58:24.589685379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11252948\" in 1.510814456s" Nov 12 17:58:24.589809 containerd[1440]: time="2024-11-12T17:58:24.589718299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\"" Nov 12 17:58:24.593931 containerd[1440]: time="2024-11-12T17:58:24.593888663Z" level=info msg="CreateContainer within sandbox \"370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 17:58:24.621573 containerd[1440]: time="2024-11-12T17:58:24.621521934Z" level=info msg="CreateContainer within sandbox \"370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ffbf53eccb4a0a03db94dd0442444da937f2612e9af5198b75d1fc6a40512b3d\"" Nov 12 17:58:24.623758 containerd[1440]: time="2024-11-12T17:58:24.622213655Z" level=info msg="StartContainer for \"ffbf53eccb4a0a03db94dd0442444da937f2612e9af5198b75d1fc6a40512b3d\"" Nov 12 17:58:24.671366 systemd[1]: Started cri-containerd-ffbf53eccb4a0a03db94dd0442444da937f2612e9af5198b75d1fc6a40512b3d.scope - libcontainer container ffbf53eccb4a0a03db94dd0442444da937f2612e9af5198b75d1fc6a40512b3d. Nov 12 17:58:24.722025 containerd[1440]: time="2024-11-12T17:58:24.721900125Z" level=info msg="StartContainer for \"ffbf53eccb4a0a03db94dd0442444da937f2612e9af5198b75d1fc6a40512b3d\" returns successfully" Nov 12 17:58:24.828141 sshd[5058]: pam_unix(sshd:session): session closed for user core Nov 12 17:58:24.837641 systemd[1]: sshd@15-10.0.0.106:22-10.0.0.1:42122.service: Deactivated successfully. 
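The recurring dns.go:153 events throughout this log are kubelet hitting its nameserver cap while building pod resolv.conf: the applied line keeps exactly three servers (1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. A sketch of that trim — the limit of three is inferred from the applied line here rather than quoted from kubelet's source, and the fourth entry below is hypothetical:

```go
package main

import "fmt"

// applyNameserverLimit keeps at most limit nameservers and reports
// whether any were dropped — the condition behind the "Nameserver
// limits exceeded" events above.
func applyNameserverLimit(ns []string, limit int) ([]string, bool) {
	if len(ns) <= limit {
		return ns, false
	}
	return ns[:limit], true
}

func main() {
	// First three entries match the applied line in the log; the fourth
	// is a hypothetical extra server that would trigger the event.
	node := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	applied, exceeded := applyNameserverLimit(node, 3)
	fmt.Println(applied, exceeded) // [1.1.1.1 1.0.0.1 8.8.8.8] true
}
```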
Nov 12 17:58:24.839072 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 17:58:24.839696 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit. Nov 12 17:58:24.847452 systemd[1]: Started sshd@16-10.0.0.106:22-10.0.0.1:42138.service - OpenSSH per-connection server daemon (10.0.0.1:42138). Nov 12 17:58:24.848584 systemd-logind[1424]: Removed session 16. Nov 12 17:58:24.889226 sshd[5159]: Accepted publickey for core from 10.0.0.1 port 42138 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:58:24.890611 sshd[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:58:24.894387 systemd-logind[1424]: New session 17 of user core. Nov 12 17:58:24.904316 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 17:58:25.047825 sshd[5159]: pam_unix(sshd:session): session closed for user core Nov 12 17:58:25.051553 systemd[1]: sshd@16-10.0.0.106:22-10.0.0.1:42138.service: Deactivated successfully. Nov 12 17:58:25.053609 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 17:58:25.055028 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit. Nov 12 17:58:25.056583 systemd-logind[1424]: Removed session 17. Nov 12 17:58:25.308791 kubelet[2550]: I1112 17:58:25.308679 2550 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 17:58:25.312828 kubelet[2550]: I1112 17:58:25.312774 2550 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 17:58:30.063827 systemd[1]: Started sshd@17-10.0.0.106:22-10.0.0.1:42146.service - OpenSSH per-connection server daemon (10.0.0.1:42146). Nov 12 17:58:30.108932 sshd[5183]: Accepted publickey for core from 10.0.0.1 port 42146 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:58:30.110567 sshd[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:58:30.115209 systemd-logind[1424]: New session 18 of user core. Nov 12 17:58:30.124342 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 17:58:30.304340 sshd[5183]: pam_unix(sshd:session): session closed for user core Nov 12 17:58:30.307427 systemd[1]: sshd@17-10.0.0.106:22-10.0.0.1:42146.service: Deactivated successfully. Nov 12 17:58:30.310154 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 17:58:30.311686 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit. Nov 12 17:58:30.312520 systemd-logind[1424]: Removed session 18. Nov 12 17:58:34.225008 containerd[1440]: time="2024-11-12T17:58:34.224907418Z" level=info msg="StopPodSandbox for \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\"" Nov 12 17:58:34.323646 containerd[1440]: 2024-11-12 17:58:34.266 [WARNING][5219] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0", GenerateName:"calico-apiserver-6f8577f645-", Namespace:"calico-apiserver", SelfLink:"", UID:"79693433-db53-4b04-81f4-8647ae5d69bf", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8577f645", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33", Pod:"calico-apiserver-6f8577f645-7qhtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57828232e3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:34.323646 containerd[1440]: 2024-11-12 17:58:34.266 [INFO][5219] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:34.323646 containerd[1440]: 2024-11-12 17:58:34.266 [INFO][5219] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" iface="eth0" netns="" Nov 12 17:58:34.323646 containerd[1440]: 2024-11-12 17:58:34.266 [INFO][5219] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:34.323646 containerd[1440]: 2024-11-12 17:58:34.266 [INFO][5219] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:34.323646 containerd[1440]: 2024-11-12 17:58:34.309 [INFO][5229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" HandleID="k8s-pod-network.92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Workload="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:34.323646 containerd[1440]: 2024-11-12 17:58:34.309 [INFO][5229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:34.323646 containerd[1440]: 2024-11-12 17:58:34.309 [INFO][5229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:58:34.323646 containerd[1440]: 2024-11-12 17:58:34.318 [WARNING][5229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" HandleID="k8s-pod-network.92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Workload="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:34.323646 containerd[1440]: 2024-11-12 17:58:34.318 [INFO][5229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" HandleID="k8s-pod-network.92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Workload="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:34.323646 containerd[1440]: 2024-11-12 17:58:34.320 [INFO][5229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:34.323646 containerd[1440]: 2024-11-12 17:58:34.321 [INFO][5219] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:34.324242 containerd[1440]: time="2024-11-12T17:58:34.323685112Z" level=info msg="TearDown network for sandbox \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\" successfully" Nov 12 17:58:34.324242 containerd[1440]: time="2024-11-12T17:58:34.323720672Z" level=info msg="StopPodSandbox for \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\" returns successfully" Nov 12 17:58:34.324292 containerd[1440]: time="2024-11-12T17:58:34.324266593Z" level=info msg="RemovePodSandbox for \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\"" Nov 12 17:58:34.340529 containerd[1440]: time="2024-11-12T17:58:34.340464728Z" level=info msg="Forcibly stopping sandbox \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\"" Nov 12 17:58:34.458033 containerd[1440]: 2024-11-12 17:58:34.421 [WARNING][5252] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0", GenerateName:"calico-apiserver-6f8577f645-", Namespace:"calico-apiserver", SelfLink:"", UID:"79693433-db53-4b04-81f4-8647ae5d69bf", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8577f645", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3931796a2d65024c96e255d13b885a159336bd369d927afe5b9358b80e73b33", Pod:"calico-apiserver-6f8577f645-7qhtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57828232e3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:34.458033 containerd[1440]: 2024-11-12 17:58:34.422 [INFO][5252] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:34.458033 containerd[1440]: 2024-11-12 17:58:34.422 [INFO][5252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" iface="eth0" netns="" Nov 12 17:58:34.458033 containerd[1440]: 2024-11-12 17:58:34.422 [INFO][5252] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:34.458033 containerd[1440]: 2024-11-12 17:58:34.422 [INFO][5252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:34.458033 containerd[1440]: 2024-11-12 17:58:34.443 [INFO][5259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" HandleID="k8s-pod-network.92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Workload="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:34.458033 containerd[1440]: 2024-11-12 17:58:34.443 [INFO][5259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:34.458033 containerd[1440]: 2024-11-12 17:58:34.443 [INFO][5259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:58:34.458033 containerd[1440]: 2024-11-12 17:58:34.451 [WARNING][5259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" HandleID="k8s-pod-network.92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Workload="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:34.458033 containerd[1440]: 2024-11-12 17:58:34.451 [INFO][5259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" HandleID="k8s-pod-network.92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Workload="localhost-k8s-calico--apiserver--6f8577f645--7qhtp-eth0" Nov 12 17:58:34.458033 containerd[1440]: 2024-11-12 17:58:34.454 [INFO][5259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:34.458033 containerd[1440]: 2024-11-12 17:58:34.456 [INFO][5252] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f" Nov 12 17:58:34.458541 containerd[1440]: time="2024-11-12T17:58:34.458071880Z" level=info msg="TearDown network for sandbox \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\" successfully" Nov 12 17:58:34.470026 containerd[1440]: time="2024-11-12T17:58:34.469962772Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:58:34.470143 containerd[1440]: time="2024-11-12T17:58:34.470048812Z" level=info msg="RemovePodSandbox \"92f64f3ab036c9e28677afab54fc0d9b58b6dcb9578711b229f4f0022f44c21f\" returns successfully" Nov 12 17:58:34.470573 containerd[1440]: time="2024-11-12T17:58:34.470528572Z" level=info msg="StopPodSandbox for \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\"" Nov 12 17:58:34.541575 containerd[1440]: 2024-11-12 17:58:34.506 [WARNING][5282] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4ed5cf09-ed06-4e8b-8c68-5b53322839e8", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705", Pod:"coredns-7db6d8ff4d-lv4gc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali144ef64d91d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:34.541575 containerd[1440]: 2024-11-12 17:58:34.506 [INFO][5282] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:34.541575 containerd[1440]: 2024-11-12 17:58:34.506 [INFO][5282] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" iface="eth0" netns="" Nov 12 17:58:34.541575 containerd[1440]: 2024-11-12 17:58:34.506 [INFO][5282] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:34.541575 containerd[1440]: 2024-11-12 17:58:34.506 [INFO][5282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:34.541575 containerd[1440]: 2024-11-12 17:58:34.524 [INFO][5290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" HandleID="k8s-pod-network.3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Workload="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:34.541575 containerd[1440]: 2024-11-12 17:58:34.524 [INFO][5290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:34.541575 containerd[1440]: 2024-11-12 17:58:34.525 [INFO][5290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:58:34.541575 containerd[1440]: 2024-11-12 17:58:34.534 [WARNING][5290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" HandleID="k8s-pod-network.3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Workload="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:34.541575 containerd[1440]: 2024-11-12 17:58:34.534 [INFO][5290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" HandleID="k8s-pod-network.3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Workload="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:34.541575 containerd[1440]: 2024-11-12 17:58:34.536 [INFO][5290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:34.541575 containerd[1440]: 2024-11-12 17:58:34.540 [INFO][5282] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:34.541995 containerd[1440]: time="2024-11-12T17:58:34.541604160Z" level=info msg="TearDown network for sandbox \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\" successfully" Nov 12 17:58:34.541995 containerd[1440]: time="2024-11-12T17:58:34.541625480Z" level=info msg="StopPodSandbox for \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\" returns successfully" Nov 12 17:58:34.541995 containerd[1440]: time="2024-11-12T17:58:34.541941081Z" level=info msg="RemovePodSandbox for \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\"" Nov 12 17:58:34.541995 containerd[1440]: time="2024-11-12T17:58:34.541971961Z" level=info msg="Forcibly stopping sandbox \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\"" Nov 12 17:58:34.608562 containerd[1440]: 2024-11-12 17:58:34.577 [WARNING][5313] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4ed5cf09-ed06-4e8b-8c68-5b53322839e8", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2bc5e6e7bb0c8f70e3440184de6ef2060236238cc99678464d3fb5cde479705", Pod:"coredns-7db6d8ff4d-lv4gc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali144ef64d91d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:34.608562 containerd[1440]: 2024-11-12 17:58:34.577 [INFO][5313] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:34.608562 containerd[1440]: 2024-11-12 17:58:34.578 [INFO][5313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" iface="eth0" netns="" Nov 12 17:58:34.608562 containerd[1440]: 2024-11-12 17:58:34.578 [INFO][5313] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:34.608562 containerd[1440]: 2024-11-12 17:58:34.578 [INFO][5313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:34.608562 containerd[1440]: 2024-11-12 17:58:34.595 [INFO][5321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" HandleID="k8s-pod-network.3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Workload="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:34.608562 containerd[1440]: 2024-11-12 17:58:34.595 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:34.608562 containerd[1440]: 2024-11-12 17:58:34.595 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:58:34.608562 containerd[1440]: 2024-11-12 17:58:34.603 [WARNING][5321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" HandleID="k8s-pod-network.3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Workload="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:34.608562 containerd[1440]: 2024-11-12 17:58:34.603 [INFO][5321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" HandleID="k8s-pod-network.3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Workload="localhost-k8s-coredns--7db6d8ff4d--lv4gc-eth0" Nov 12 17:58:34.608562 containerd[1440]: 2024-11-12 17:58:34.604 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:34.608562 containerd[1440]: 2024-11-12 17:58:34.606 [INFO][5313] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07" Nov 12 17:58:34.608562 containerd[1440]: time="2024-11-12T17:58:34.607391743Z" level=info msg="TearDown network for sandbox \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\" successfully" Nov 12 17:58:34.610348 containerd[1440]: time="2024-11-12T17:58:34.610317186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:58:34.610504 containerd[1440]: time="2024-11-12T17:58:34.610486306Z" level=info msg="RemovePodSandbox \"3ad73710e00a85f55e4305b9af0833387988f8f980726ad0f12225f48abfda07\" returns successfully" Nov 12 17:58:34.611026 containerd[1440]: time="2024-11-12T17:58:34.611000627Z" level=info msg="StopPodSandbox for \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\"" Nov 12 17:58:34.682079 containerd[1440]: 2024-11-12 17:58:34.643 [WARNING][5343] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0", GenerateName:"calico-apiserver-6f8577f645-", Namespace:"calico-apiserver", SelfLink:"", UID:"796be7c7-8733-4ba8-8a44-8adf215d4e9b", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8577f645", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5", Pod:"calico-apiserver-6f8577f645-42z4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5a2dfd1eaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:34.682079 containerd[1440]: 2024-11-12 17:58:34.643 [INFO][5343] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Nov 12 17:58:34.682079 containerd[1440]: 2024-11-12 17:58:34.643 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" iface="eth0" netns="" Nov 12 17:58:34.682079 containerd[1440]: 2024-11-12 17:58:34.643 [INFO][5343] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Nov 12 17:58:34.682079 containerd[1440]: 2024-11-12 17:58:34.643 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Nov 12 17:58:34.682079 containerd[1440]: 2024-11-12 17:58:34.664 [INFO][5351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" HandleID="k8s-pod-network.19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Workload="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0" Nov 12 17:58:34.682079 containerd[1440]: 2024-11-12 17:58:34.665 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:34.682079 containerd[1440]: 2024-11-12 17:58:34.665 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:58:34.682079 containerd[1440]: 2024-11-12 17:58:34.676 [WARNING][5351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" HandleID="k8s-pod-network.19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Workload="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0" Nov 12 17:58:34.682079 containerd[1440]: 2024-11-12 17:58:34.676 [INFO][5351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" HandleID="k8s-pod-network.19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Workload="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0" Nov 12 17:58:34.682079 containerd[1440]: 2024-11-12 17:58:34.677 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:34.682079 containerd[1440]: 2024-11-12 17:58:34.679 [INFO][5343] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Nov 12 17:58:34.682651 containerd[1440]: time="2024-11-12T17:58:34.682118534Z" level=info msg="TearDown network for sandbox \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\" successfully" Nov 12 17:58:34.682651 containerd[1440]: time="2024-11-12T17:58:34.682141534Z" level=info msg="StopPodSandbox for \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\" returns successfully" Nov 12 17:58:34.682869 containerd[1440]: time="2024-11-12T17:58:34.682839175Z" level=info msg="RemovePodSandbox for \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\"" Nov 12 17:58:34.682913 containerd[1440]: time="2024-11-12T17:58:34.682873335Z" level=info msg="Forcibly stopping sandbox \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\"" Nov 12 17:58:34.755787 containerd[1440]: 2024-11-12 17:58:34.722 [WARNING][5391] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0", GenerateName:"calico-apiserver-6f8577f645-", Namespace:"calico-apiserver", SelfLink:"", UID:"796be7c7-8733-4ba8-8a44-8adf215d4e9b", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8577f645", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35a242955d1170c0cfe22aa74acc0e5c5300f44ecea3b3c14472e2fe4bebd2b5", Pod:"calico-apiserver-6f8577f645-42z4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5a2dfd1eaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:34.755787 containerd[1440]: 2024-11-12 17:58:34.722 [INFO][5391] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Nov 12 17:58:34.755787 containerd[1440]: 2024-11-12 17:58:34.722 [INFO][5391] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" iface="eth0" netns="" Nov 12 17:58:34.755787 containerd[1440]: 2024-11-12 17:58:34.722 [INFO][5391] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Nov 12 17:58:34.755787 containerd[1440]: 2024-11-12 17:58:34.722 [INFO][5391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Nov 12 17:58:34.755787 containerd[1440]: 2024-11-12 17:58:34.742 [INFO][5403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" HandleID="k8s-pod-network.19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Workload="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0" Nov 12 17:58:34.755787 containerd[1440]: 2024-11-12 17:58:34.742 [INFO][5403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:34.755787 containerd[1440]: 2024-11-12 17:58:34.742 [INFO][5403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:58:34.755787 containerd[1440]: 2024-11-12 17:58:34.750 [WARNING][5403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" HandleID="k8s-pod-network.19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Workload="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0" Nov 12 17:58:34.755787 containerd[1440]: 2024-11-12 17:58:34.750 [INFO][5403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" HandleID="k8s-pod-network.19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Workload="localhost-k8s-calico--apiserver--6f8577f645--42z4v-eth0" Nov 12 17:58:34.755787 containerd[1440]: 2024-11-12 17:58:34.751 [INFO][5403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:34.755787 containerd[1440]: 2024-11-12 17:58:34.752 [INFO][5391] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac" Nov 12 17:58:34.756221 containerd[1440]: time="2024-11-12T17:58:34.755828445Z" level=info msg="TearDown network for sandbox \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\" successfully" Nov 12 17:58:34.766102 containerd[1440]: time="2024-11-12T17:58:34.766064735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:58:34.766152 containerd[1440]: time="2024-11-12T17:58:34.766124455Z" level=info msg="RemovePodSandbox \"19a9dd3a53470592da7d58235f936970b32fa79d2cab36c349b1e74034ce10ac\" returns successfully" Nov 12 17:58:34.766745 containerd[1440]: time="2024-11-12T17:58:34.766717215Z" level=info msg="StopPodSandbox for \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\"" Nov 12 17:58:34.835527 containerd[1440]: 2024-11-12 17:58:34.800 [WARNING][5426] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e5df49d4-ef7b-414a-a5da-3e33e7b77381", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92", Pod:"coredns-7db6d8ff4d-lxvzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieaed83a4045", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:34.835527 containerd[1440]: 2024-11-12 17:58:34.801 [INFO][5426] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:34.835527 containerd[1440]: 2024-11-12 17:58:34.801 [INFO][5426] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" iface="eth0" netns="" Nov 12 17:58:34.835527 containerd[1440]: 2024-11-12 17:58:34.801 [INFO][5426] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:34.835527 containerd[1440]: 2024-11-12 17:58:34.801 [INFO][5426] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:34.835527 containerd[1440]: 2024-11-12 17:58:34.823 [INFO][5433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" HandleID="k8s-pod-network.b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Workload="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" Nov 12 17:58:34.835527 containerd[1440]: 2024-11-12 17:58:34.823 [INFO][5433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:34.835527 containerd[1440]: 2024-11-12 17:58:34.823 [INFO][5433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:58:34.835527 containerd[1440]: 2024-11-12 17:58:34.830 [WARNING][5433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" HandleID="k8s-pod-network.b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Workload="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" Nov 12 17:58:34.835527 containerd[1440]: 2024-11-12 17:58:34.830 [INFO][5433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" HandleID="k8s-pod-network.b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Workload="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" Nov 12 17:58:34.835527 containerd[1440]: 2024-11-12 17:58:34.832 [INFO][5433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:34.835527 containerd[1440]: 2024-11-12 17:58:34.834 [INFO][5426] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:34.835527 containerd[1440]: time="2024-11-12T17:58:34.835511641Z" level=info msg="TearDown network for sandbox \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\" successfully" Nov 12 17:58:34.836137 containerd[1440]: time="2024-11-12T17:58:34.835537641Z" level=info msg="StopPodSandbox for \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\" returns successfully" Nov 12 17:58:34.836653 containerd[1440]: time="2024-11-12T17:58:34.836387882Z" level=info msg="RemovePodSandbox for \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\"" Nov 12 17:58:34.836653 containerd[1440]: time="2024-11-12T17:58:34.836422922Z" level=info msg="Forcibly stopping sandbox \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\"" Nov 12 17:58:34.901635 containerd[1440]: 2024-11-12 17:58:34.871 [WARNING][5455] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e5df49d4-ef7b-414a-a5da-3e33e7b77381", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"897ce2bc64728f12714e1611933e5e7a468dbd90d68a2a473d883ae6df467d92", Pod:"coredns-7db6d8ff4d-lxvzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieaed83a4045", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:34.901635 containerd[1440]: 2024-11-12 17:58:34.871 [INFO][5455] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:34.901635 containerd[1440]: 2024-11-12 17:58:34.871 [INFO][5455] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" iface="eth0" netns="" Nov 12 17:58:34.901635 containerd[1440]: 2024-11-12 17:58:34.871 [INFO][5455] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:34.901635 containerd[1440]: 2024-11-12 17:58:34.871 [INFO][5455] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:34.901635 containerd[1440]: 2024-11-12 17:58:34.889 [INFO][5463] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" HandleID="k8s-pod-network.b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Workload="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" Nov 12 17:58:34.901635 containerd[1440]: 2024-11-12 17:58:34.889 [INFO][5463] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:34.901635 containerd[1440]: 2024-11-12 17:58:34.889 [INFO][5463] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:58:34.901635 containerd[1440]: 2024-11-12 17:58:34.897 [WARNING][5463] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" HandleID="k8s-pod-network.b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Workload="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" Nov 12 17:58:34.901635 containerd[1440]: 2024-11-12 17:58:34.897 [INFO][5463] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" HandleID="k8s-pod-network.b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Workload="localhost-k8s-coredns--7db6d8ff4d--lxvzw-eth0" Nov 12 17:58:34.901635 containerd[1440]: 2024-11-12 17:58:34.898 [INFO][5463] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:34.901635 containerd[1440]: 2024-11-12 17:58:34.900 [INFO][5455] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32" Nov 12 17:58:34.902081 containerd[1440]: time="2024-11-12T17:58:34.901662624Z" level=info msg="TearDown network for sandbox \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\" successfully" Nov 12 17:58:34.905667 containerd[1440]: time="2024-11-12T17:58:34.905521508Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:58:34.905667 containerd[1440]: time="2024-11-12T17:58:34.905583228Z" level=info msg="RemovePodSandbox \"b13bb17db4f16e41afe7af71728a08c608a35d7f918886a7c00a41ef545bef32\" returns successfully" Nov 12 17:58:34.906150 containerd[1440]: time="2024-11-12T17:58:34.906125028Z" level=info msg="StopPodSandbox for \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\"" Nov 12 17:58:34.969421 containerd[1440]: 2024-11-12 17:58:34.938 [WARNING][5485] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sdd25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ea587a6f-1412-4dff-ac23-3aab0de5e566", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75", Pod:"csi-node-driver-sdd25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife582e83cb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:34.969421 containerd[1440]: 2024-11-12 17:58:34.938 [INFO][5485] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Nov 12 17:58:34.969421 containerd[1440]: 2024-11-12 17:58:34.939 [INFO][5485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" iface="eth0" netns="" Nov 12 17:58:34.969421 containerd[1440]: 2024-11-12 17:58:34.939 [INFO][5485] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Nov 12 17:58:34.969421 containerd[1440]: 2024-11-12 17:58:34.939 [INFO][5485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Nov 12 17:58:34.969421 containerd[1440]: 2024-11-12 17:58:34.957 [INFO][5493] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" HandleID="k8s-pod-network.e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Workload="localhost-k8s-csi--node--driver--sdd25-eth0" Nov 12 17:58:34.969421 containerd[1440]: 2024-11-12 17:58:34.958 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:34.969421 containerd[1440]: 2024-11-12 17:58:34.958 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:58:34.969421 containerd[1440]: 2024-11-12 17:58:34.965 [WARNING][5493] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" HandleID="k8s-pod-network.e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Workload="localhost-k8s-csi--node--driver--sdd25-eth0" Nov 12 17:58:34.969421 containerd[1440]: 2024-11-12 17:58:34.965 [INFO][5493] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" HandleID="k8s-pod-network.e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Workload="localhost-k8s-csi--node--driver--sdd25-eth0" Nov 12 17:58:34.969421 containerd[1440]: 2024-11-12 17:58:34.966 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:34.969421 containerd[1440]: 2024-11-12 17:58:34.968 [INFO][5485] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Nov 12 17:58:34.970198 containerd[1440]: time="2024-11-12T17:58:34.969862809Z" level=info msg="TearDown network for sandbox \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\" successfully" Nov 12 17:58:34.970198 containerd[1440]: time="2024-11-12T17:58:34.969906169Z" level=info msg="StopPodSandbox for \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\" returns successfully" Nov 12 17:58:34.970840 containerd[1440]: time="2024-11-12T17:58:34.970542170Z" level=info msg="RemovePodSandbox for \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\"" Nov 12 17:58:34.970840 containerd[1440]: time="2024-11-12T17:58:34.970574050Z" level=info msg="Forcibly stopping sandbox \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\"" Nov 12 17:58:35.036728 containerd[1440]: 2024-11-12 17:58:35.003 [WARNING][5517] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sdd25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ea587a6f-1412-4dff-ac23-3aab0de5e566", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"370007f610990538a646301e7ae57eb616090f76efbd350f5004bd4594b0ab75", Pod:"csi-node-driver-sdd25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife582e83cb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:35.036728 containerd[1440]: 2024-11-12 17:58:35.003 [INFO][5517] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Nov 12 17:58:35.036728 containerd[1440]: 2024-11-12 17:58:35.003 [INFO][5517] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" iface="eth0" netns="" Nov 12 17:58:35.036728 containerd[1440]: 2024-11-12 17:58:35.003 [INFO][5517] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Nov 12 17:58:35.036728 containerd[1440]: 2024-11-12 17:58:35.003 [INFO][5517] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Nov 12 17:58:35.036728 containerd[1440]: 2024-11-12 17:58:35.024 [INFO][5525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" HandleID="k8s-pod-network.e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Workload="localhost-k8s-csi--node--driver--sdd25-eth0" Nov 12 17:58:35.036728 containerd[1440]: 2024-11-12 17:58:35.024 [INFO][5525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:35.036728 containerd[1440]: 2024-11-12 17:58:35.024 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:58:35.036728 containerd[1440]: 2024-11-12 17:58:35.032 [WARNING][5525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" HandleID="k8s-pod-network.e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Workload="localhost-k8s-csi--node--driver--sdd25-eth0" Nov 12 17:58:35.036728 containerd[1440]: 2024-11-12 17:58:35.032 [INFO][5525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" HandleID="k8s-pod-network.e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Workload="localhost-k8s-csi--node--driver--sdd25-eth0" Nov 12 17:58:35.036728 containerd[1440]: 2024-11-12 17:58:35.033 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:35.036728 containerd[1440]: 2024-11-12 17:58:35.035 [INFO][5517] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646" Nov 12 17:58:35.037122 containerd[1440]: time="2024-11-12T17:58:35.036757433Z" level=info msg="TearDown network for sandbox \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\" successfully" Nov 12 17:58:35.043807 containerd[1440]: time="2024-11-12T17:58:35.043772640Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:58:35.043875 containerd[1440]: time="2024-11-12T17:58:35.043849400Z" level=info msg="RemovePodSandbox \"e354fc7f3f7dd053b0720ced84aa5f8b67066692a340c1d71d44dfc7535bb646\" returns successfully" Nov 12 17:58:35.044326 containerd[1440]: time="2024-11-12T17:58:35.044287840Z" level=info msg="StopPodSandbox for \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\"" Nov 12 17:58:35.108354 containerd[1440]: 2024-11-12 17:58:35.077 [WARNING][5548] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0", GenerateName:"calico-kube-controllers-6d6674b4b8-", Namespace:"calico-system", SelfLink:"", UID:"0bdbc14b-f620-427a-ae0b-74f889d89287", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6674b4b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9", Pod:"calico-kube-controllers-6d6674b4b8-wt8ww", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali10b9aaf06fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:35.108354 containerd[1440]: 2024-11-12 17:58:35.077 [INFO][5548] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Nov 12 17:58:35.108354 containerd[1440]: 2024-11-12 17:58:35.077 [INFO][5548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" iface="eth0" netns="" Nov 12 17:58:35.108354 containerd[1440]: 2024-11-12 17:58:35.077 [INFO][5548] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Nov 12 17:58:35.108354 containerd[1440]: 2024-11-12 17:58:35.077 [INFO][5548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Nov 12 17:58:35.108354 containerd[1440]: 2024-11-12 17:58:35.094 [INFO][5555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" HandleID="k8s-pod-network.1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Workload="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0" Nov 12 17:58:35.108354 containerd[1440]: 2024-11-12 17:58:35.094 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:35.108354 containerd[1440]: 2024-11-12 17:58:35.094 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:58:35.108354 containerd[1440]: 2024-11-12 17:58:35.102 [WARNING][5555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" HandleID="k8s-pod-network.1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Workload="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0" Nov 12 17:58:35.108354 containerd[1440]: 2024-11-12 17:58:35.102 [INFO][5555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" HandleID="k8s-pod-network.1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Workload="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0" Nov 12 17:58:35.108354 containerd[1440]: 2024-11-12 17:58:35.103 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:35.108354 containerd[1440]: 2024-11-12 17:58:35.105 [INFO][5548] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Nov 12 17:58:35.108354 containerd[1440]: time="2024-11-12T17:58:35.108324341Z" level=info msg="TearDown network for sandbox \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\" successfully" Nov 12 17:58:35.108354 containerd[1440]: time="2024-11-12T17:58:35.108357141Z" level=info msg="StopPodSandbox for \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\" returns successfully" Nov 12 17:58:35.109466 containerd[1440]: time="2024-11-12T17:58:35.108981101Z" level=info msg="RemovePodSandbox for \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\"" Nov 12 17:58:35.109466 containerd[1440]: time="2024-11-12T17:58:35.109010061Z" level=info msg="Forcibly stopping sandbox \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\"" Nov 12 17:58:35.177404 containerd[1440]: 2024-11-12 17:58:35.145 [WARNING][5577] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0", GenerateName:"calico-kube-controllers-6d6674b4b8-", Namespace:"calico-system", SelfLink:"", UID:"0bdbc14b-f620-427a-ae0b-74f889d89287", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6674b4b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6445e65d8fc6581c6899f273f3665b0d92dfa1ac6edc4c29a739565df240be9", Pod:"calico-kube-controllers-6d6674b4b8-wt8ww", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali10b9aaf06fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:58:35.177404 containerd[1440]: 2024-11-12 17:58:35.146 [INFO][5577] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Nov 12 17:58:35.177404 containerd[1440]: 2024-11-12 17:58:35.146 [INFO][5577] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" iface="eth0" netns="" Nov 12 17:58:35.177404 containerd[1440]: 2024-11-12 17:58:35.146 [INFO][5577] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Nov 12 17:58:35.177404 containerd[1440]: 2024-11-12 17:58:35.146 [INFO][5577] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Nov 12 17:58:35.177404 containerd[1440]: 2024-11-12 17:58:35.165 [INFO][5584] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" HandleID="k8s-pod-network.1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Workload="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0" Nov 12 17:58:35.177404 containerd[1440]: 2024-11-12 17:58:35.165 [INFO][5584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:58:35.177404 containerd[1440]: 2024-11-12 17:58:35.165 [INFO][5584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:58:35.177404 containerd[1440]: 2024-11-12 17:58:35.173 [WARNING][5584] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" HandleID="k8s-pod-network.1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Workload="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0" Nov 12 17:58:35.177404 containerd[1440]: 2024-11-12 17:58:35.173 [INFO][5584] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" HandleID="k8s-pod-network.1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Workload="localhost-k8s-calico--kube--controllers--6d6674b4b8--wt8ww-eth0" Nov 12 17:58:35.177404 containerd[1440]: 2024-11-12 17:58:35.174 [INFO][5584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:58:35.177404 containerd[1440]: 2024-11-12 17:58:35.176 [INFO][5577] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264" Nov 12 17:58:35.177786 containerd[1440]: time="2024-11-12T17:58:35.177433006Z" level=info msg="TearDown network for sandbox \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\" successfully" Nov 12 17:58:35.180242 containerd[1440]: time="2024-11-12T17:58:35.180207808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:58:35.180286 containerd[1440]: time="2024-11-12T17:58:35.180265169Z" level=info msg="RemovePodSandbox \"1fc13cb846769d6f13cddbe0ca9329aa8ed418623dfbff75beafc9c1a2771264\" returns successfully" Nov 12 17:58:35.328411 systemd[1]: Started sshd@18-10.0.0.106:22-10.0.0.1:42466.service - OpenSSH per-connection server daemon (10.0.0.1:42466). Nov 12 17:58:35.367418 sshd[5594]: Accepted publickey for core from 10.0.0.1 port 42466 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:58:35.368796 sshd[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:58:35.372817 systemd-logind[1424]: New session 19 of user core. Nov 12 17:58:35.379520 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 17:58:35.523420 sshd[5594]: pam_unix(sshd:session): session closed for user core Nov 12 17:58:35.527385 systemd[1]: sshd@18-10.0.0.106:22-10.0.0.1:42466.service: Deactivated successfully. Nov 12 17:58:35.529827 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 17:58:35.530541 systemd-logind[1424]: Session 19 logged out. Waiting for processes to exit. Nov 12 17:58:35.531288 systemd-logind[1424]: Removed session 19. Nov 12 17:58:40.535373 systemd[1]: Started sshd@19-10.0.0.106:22-10.0.0.1:42468.service - OpenSSH per-connection server daemon (10.0.0.1:42468). Nov 12 17:58:40.573700 sshd[5608]: Accepted publickey for core from 10.0.0.1 port 42468 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg Nov 12 17:58:40.575041 sshd[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:58:40.578531 systemd-logind[1424]: New session 20 of user core. Nov 12 17:58:40.588376 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 17:58:40.716306 sshd[5608]: pam_unix(sshd:session): session closed for user core Nov 12 17:58:40.720507 systemd[1]: sshd@19-10.0.0.106:22-10.0.0.1:42468.service: Deactivated successfully. 
Nov 12 17:58:40.722798 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 17:58:40.723942 systemd-logind[1424]: Session 20 logged out. Waiting for processes to exit.
Nov 12 17:58:40.724982 systemd-logind[1424]: Removed session 20.
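The repeated StopPodSandbox/RemovePodSandbox sequences above all follow the same Calico CNI DEL pattern: containerd re-issues the teardown, Calico finds the WorkloadEndpoint already belongs to a different ContainerID (so it keeps the WEP), and the IPAM plugin serializes the release behind a host-wide lock, treating a missing allocation as a no-op ("Asked to release address but it doesn't exist. Ignoring"). The sketch below illustrates that idempotent, lock-guarded release shape in Go. It is a minimal illustration, not Calico's actual implementation: the type hostIPAM, the method ReleaseByHandle, and the in-memory map are all hypothetical stand-ins.

```go
// Hypothetical sketch of the release pattern visible in the
// ipam/ipam_plugin.go log lines above; NOT Calico's real code.
package main

import (
	"fmt"
	"sync"
)

// hostIPAM stands in for per-host IPAM state. allocations is keyed by
// handle ID, which the log shows as "k8s-pod-network.<sandboxID>".
type hostIPAM struct {
	mu          sync.Mutex // plays the role of the "host-wide IPAM lock"
	allocations map[string]string
}

// ReleaseByHandle is idempotent: a second CNI DEL for the same sandbox
// finds no allocation and is logged and ignored rather than failing,
// which is why the forcible RemovePodSandbox above still succeeds.
func (h *hostIPAM) ReleaseByHandle(handleID string) {
	h.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer h.mu.Unlock()
	ip, ok := h.allocations[handleID]
	if !ok {
		fmt.Printf("WARNING: asked to release %s but it doesn't exist; ignoring\n", handleID)
		return
	}
	delete(h.allocations, handleID)
	fmt.Printf("released %s for handle %s\n", ip, handleID)
}

func main() {
	h := &hostIPAM{allocations: map[string]string{
		"k8s-pod-network.e354fc7f": "192.168.88.131", // truncated ID, for brevity
	}}
	h.ReleaseByHandle("k8s-pod-network.e354fc7f") // first DEL: releases
	h.ReleaseByHandle("k8s-pod-network.e354fc7f") // repeated DEL: ignored
}
```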
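The sshd records above show systemd's per-connection template units at work: each incoming TCP connection gets its own instance (sshd@18-..., sshd@19-...), and logind pairs it with a session scope (session-19.scope, session-20.scope) that is deactivated when the session closes. As a rough analogy only, and not systemd's mechanism, one connection-scoped "unit" per accepted socket looks like the accept loop below; the listener address and unit naming are illustrative assumptions.

```go
// Hypothetical analogy for the "sshd@<local>-<remote>.service" pattern
// in the log above; not systemd or OpenSSH code.
package main

import (
	"fmt"
	"net"
)

func main() {
	// The log shows sshd accepting on 10.0.0.106:22; this sketch uses
	// a loopback port so it can run unprivileged.
	ln, err := net.Listen("tcp", "127.0.0.1:2222")
	if err != nil {
		panic(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		// One handler per connection, named like the template instances
		// systemd starts for each accepted socket.
		unit := fmt.Sprintf("sshd@%s-%s", conn.LocalAddr(), conn.RemoteAddr())
		go func(c net.Conn, name string) {
			defer c.Close() // analogous to "Deactivated successfully."
			fmt.Println("started", name)
			// ... the session would run here ...
			fmt.Println("stopped", name)
		}(conn, unit)
	}
}
```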