Aug 5 22:48:41.022424 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:22 -00 2024 Aug 5 22:48:41.022448 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:48:41.022461 kernel: BIOS-provided physical RAM map: Aug 5 22:48:41.022470 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 5 22:48:41.022477 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 5 22:48:41.022485 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 5 22:48:41.022495 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Aug 5 22:48:41.022503 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Aug 5 22:48:41.022511 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 5 22:48:41.022521 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 5 22:48:41.022529 kernel: NX (Execute Disable) protection: active Aug 5 22:48:41.022537 kernel: APIC: Static calls initialized Aug 5 22:48:41.022545 kernel: SMBIOS 2.8 present. 
Aug 5 22:48:41.022554 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Aug 5 22:48:41.022564 kernel: Hypervisor detected: KVM Aug 5 22:48:41.022575 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 5 22:48:41.022584 kernel: kvm-clock: using sched offset of 6920457945 cycles Aug 5 22:48:41.022592 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 5 22:48:41.022602 kernel: tsc: Detected 1996.249 MHz processor Aug 5 22:48:41.022611 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 5 22:48:41.022620 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 5 22:48:41.022629 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Aug 5 22:48:41.022638 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 5 22:48:41.022647 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 5 22:48:41.022659 kernel: ACPI: Early table checksum verification disabled Aug 5 22:48:41.022667 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Aug 5 22:48:41.022676 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:48:41.022685 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:48:41.022695 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:48:41.022705 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 5 22:48:41.022713 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:48:41.022722 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:48:41.022730 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Aug 5 22:48:41.022740 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Aug 5 22:48:41.022748 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 
5 22:48:41.022756 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Aug 5 22:48:41.022764 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Aug 5 22:48:41.022772 kernel: No NUMA configuration found Aug 5 22:48:41.022780 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Aug 5 22:48:41.022789 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Aug 5 22:48:41.022800 kernel: Zone ranges: Aug 5 22:48:41.022810 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 5 22:48:41.022819 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Aug 5 22:48:41.022828 kernel: Normal empty Aug 5 22:48:41.022836 kernel: Movable zone start for each node Aug 5 22:48:41.022845 kernel: Early memory node ranges Aug 5 22:48:41.022853 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 5 22:48:41.022863 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Aug 5 22:48:41.022872 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Aug 5 22:48:41.022881 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 5 22:48:41.022889 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 5 22:48:41.022898 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Aug 5 22:48:41.022907 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 5 22:48:41.022915 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 5 22:48:41.022924 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 5 22:48:41.022933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 5 22:48:41.022941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 5 22:48:41.022952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 5 22:48:41.022960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 5 22:48:41.022969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 5 22:48:41.022978 kernel: 
ACPI: Using ACPI (MADT) for SMP configuration information Aug 5 22:48:41.022986 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 5 22:48:41.022995 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 5 22:48:41.023004 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Aug 5 22:48:41.023012 kernel: Booting paravirtualized kernel on KVM Aug 5 22:48:41.023021 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 5 22:48:41.023031 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 5 22:48:41.023040 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Aug 5 22:48:41.023049 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Aug 5 22:48:41.023057 kernel: pcpu-alloc: [0] 0 1 Aug 5 22:48:41.023066 kernel: kvm-guest: PV spinlocks disabled, no host support Aug 5 22:48:41.023093 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:48:41.023103 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 5 22:48:41.023111 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 5 22:48:41.023122 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 5 22:48:41.023131 kernel: Fallback order for Node 0: 0 Aug 5 22:48:41.023140 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515805 Aug 5 22:48:41.023148 kernel: Policy zone: DMA32 Aug 5 22:48:41.023157 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 5 22:48:41.023166 kernel: Memory: 1965068K/2096620K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49372K init, 1972K bss, 131292K reserved, 0K cma-reserved) Aug 5 22:48:41.023175 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 5 22:48:41.023183 kernel: ftrace: allocating 37659 entries in 148 pages Aug 5 22:48:41.023194 kernel: ftrace: allocated 148 pages with 3 groups Aug 5 22:48:41.023202 kernel: Dynamic Preempt: voluntary Aug 5 22:48:41.023211 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 5 22:48:41.023220 kernel: rcu: RCU event tracing is enabled. Aug 5 22:48:41.023229 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 5 22:48:41.023238 kernel: Trampoline variant of Tasks RCU enabled. Aug 5 22:48:41.023247 kernel: Rude variant of Tasks RCU enabled. Aug 5 22:48:41.023255 kernel: Tracing variant of Tasks RCU enabled. Aug 5 22:48:41.023264 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 5 22:48:41.023273 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 5 22:48:41.023285 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 5 22:48:41.023293 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Aug 5 22:48:41.023302 kernel: Console: colour VGA+ 80x25 Aug 5 22:48:41.023310 kernel: printk: console [tty0] enabled Aug 5 22:48:41.023319 kernel: printk: console [ttyS0] enabled Aug 5 22:48:41.023328 kernel: ACPI: Core revision 20230628 Aug 5 22:48:41.023336 kernel: APIC: Switch to symmetric I/O mode setup Aug 5 22:48:41.023345 kernel: x2apic enabled Aug 5 22:48:41.023354 kernel: APIC: Switched APIC routing to: physical x2apic Aug 5 22:48:41.023364 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 5 22:48:41.023373 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Aug 5 22:48:41.023382 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Aug 5 22:48:41.023391 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 5 22:48:41.023399 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 5 22:48:41.023408 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 5 22:48:41.023417 kernel: Spectre V2 : Mitigation: Retpolines Aug 5 22:48:41.023426 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Aug 5 22:48:41.023435 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Aug 5 22:48:41.023446 kernel: Speculative Store Bypass: Vulnerable Aug 5 22:48:41.023454 kernel: x86/fpu: x87 FPU will use FXSAVE Aug 5 22:48:41.023463 kernel: Freeing SMP alternatives memory: 32K Aug 5 22:48:41.023472 kernel: pid_max: default: 32768 minimum: 301 Aug 5 22:48:41.023480 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 5 22:48:41.023489 kernel: SELinux: Initializing. 
Aug 5 22:48:41.023497 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 5 22:48:41.023507 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 5 22:48:41.023525 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Aug 5 22:48:41.023535 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:48:41.023545 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:48:41.023555 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:48:41.023566 kernel: Performance Events: AMD PMU driver. Aug 5 22:48:41.023576 kernel: ... version: 0 Aug 5 22:48:41.023585 kernel: ... bit width: 48 Aug 5 22:48:41.023595 kernel: ... generic registers: 4 Aug 5 22:48:41.023605 kernel: ... value mask: 0000ffffffffffff Aug 5 22:48:41.023617 kernel: ... max period: 00007fffffffffff Aug 5 22:48:41.023626 kernel: ... fixed-purpose events: 0 Aug 5 22:48:41.023636 kernel: ... event mask: 000000000000000f Aug 5 22:48:41.023646 kernel: signal: max sigframe size: 1440 Aug 5 22:48:41.023655 kernel: rcu: Hierarchical SRCU implementation. Aug 5 22:48:41.023665 kernel: rcu: Max phase no-delay instances is 400. Aug 5 22:48:41.023675 kernel: smp: Bringing up secondary CPUs ... Aug 5 22:48:41.023685 kernel: smpboot: x86: Booting SMP configuration: Aug 5 22:48:41.023695 kernel: .... 
node #0, CPUs: #1 Aug 5 22:48:41.023707 kernel: smp: Brought up 1 node, 2 CPUs Aug 5 22:48:41.023717 kernel: smpboot: Max logical packages: 2 Aug 5 22:48:41.023727 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Aug 5 22:48:41.023737 kernel: devtmpfs: initialized Aug 5 22:48:41.023746 kernel: x86/mm: Memory block size: 128MB Aug 5 22:48:41.023756 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 5 22:48:41.023766 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 5 22:48:41.023776 kernel: pinctrl core: initialized pinctrl subsystem Aug 5 22:48:41.023786 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 5 22:48:41.023798 kernel: audit: initializing netlink subsys (disabled) Aug 5 22:48:41.023807 kernel: audit: type=2000 audit(1722898120.397:1): state=initialized audit_enabled=0 res=1 Aug 5 22:48:41.023817 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 5 22:48:41.023829 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 5 22:48:41.023839 kernel: cpuidle: using governor menu Aug 5 22:48:41.023848 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 5 22:48:41.023857 kernel: dca service started, version 1.12.1 Aug 5 22:48:41.023866 kernel: PCI: Using configuration type 1 for base access Aug 5 22:48:41.023875 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 5 22:48:41.023886 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 5 22:48:41.023896 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 5 22:48:41.023905 kernel: ACPI: Added _OSI(Module Device) Aug 5 22:48:41.023914 kernel: ACPI: Added _OSI(Processor Device) Aug 5 22:48:41.023923 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 5 22:48:41.023934 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 5 22:48:41.023944 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 5 22:48:41.023954 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 5 22:48:41.023964 kernel: ACPI: Interpreter enabled Aug 5 22:48:41.023976 kernel: ACPI: PM: (supports S0 S3 S5) Aug 5 22:48:41.023986 kernel: ACPI: Using IOAPIC for interrupt routing Aug 5 22:48:41.023996 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 5 22:48:41.024006 kernel: PCI: Using E820 reservations for host bridge windows Aug 5 22:48:41.024016 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 5 22:48:41.024025 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 5 22:48:41.024199 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 5 22:48:41.024323 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 5 22:48:41.024424 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 5 22:48:41.024439 kernel: acpiphp: Slot [3] registered Aug 5 22:48:41.024449 kernel: acpiphp: Slot [4] registered Aug 5 22:48:41.024459 kernel: acpiphp: Slot [5] registered Aug 5 22:48:41.024468 kernel: acpiphp: Slot [6] registered Aug 5 22:48:41.024478 kernel: acpiphp: Slot [7] registered Aug 5 22:48:41.024488 kernel: acpiphp: Slot [8] registered Aug 5 22:48:41.024498 kernel: acpiphp: Slot [9] registered Aug 5 22:48:41.024512 kernel: acpiphp: Slot [10] 
registered Aug 5 22:48:41.024522 kernel: acpiphp: Slot [11] registered Aug 5 22:48:41.024531 kernel: acpiphp: Slot [12] registered Aug 5 22:48:41.024541 kernel: acpiphp: Slot [13] registered Aug 5 22:48:41.024551 kernel: acpiphp: Slot [14] registered Aug 5 22:48:41.024560 kernel: acpiphp: Slot [15] registered Aug 5 22:48:41.024570 kernel: acpiphp: Slot [16] registered Aug 5 22:48:41.024580 kernel: acpiphp: Slot [17] registered Aug 5 22:48:41.024589 kernel: acpiphp: Slot [18] registered Aug 5 22:48:41.024599 kernel: acpiphp: Slot [19] registered Aug 5 22:48:41.024611 kernel: acpiphp: Slot [20] registered Aug 5 22:48:41.024621 kernel: acpiphp: Slot [21] registered Aug 5 22:48:41.024631 kernel: acpiphp: Slot [22] registered Aug 5 22:48:41.024640 kernel: acpiphp: Slot [23] registered Aug 5 22:48:41.024650 kernel: acpiphp: Slot [24] registered Aug 5 22:48:41.024660 kernel: acpiphp: Slot [25] registered Aug 5 22:48:41.024669 kernel: acpiphp: Slot [26] registered Aug 5 22:48:41.024679 kernel: acpiphp: Slot [27] registered Aug 5 22:48:41.024689 kernel: acpiphp: Slot [28] registered Aug 5 22:48:41.024700 kernel: acpiphp: Slot [29] registered Aug 5 22:48:41.024710 kernel: acpiphp: Slot [30] registered Aug 5 22:48:41.024719 kernel: acpiphp: Slot [31] registered Aug 5 22:48:41.024729 kernel: PCI host bridge to bus 0000:00 Aug 5 22:48:41.024833 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 5 22:48:41.024921 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 5 22:48:41.025009 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 5 22:48:41.025116 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Aug 5 22:48:41.025210 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Aug 5 22:48:41.025305 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 5 22:48:41.025475 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 5 22:48:41.025586 
kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Aug 5 22:48:41.025693 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Aug 5 22:48:41.025791 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Aug 5 22:48:41.025893 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 5 22:48:41.025989 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 5 22:48:41.026135 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 5 22:48:41.026235 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 5 22:48:41.026338 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 5 22:48:41.026435 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Aug 5 22:48:41.026530 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Aug 5 22:48:41.026641 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Aug 5 22:48:41.026739 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Aug 5 22:48:41.026835 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Aug 5 22:48:41.026931 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Aug 5 22:48:41.027027 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Aug 5 22:48:41.027153 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 5 22:48:41.027265 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 5 22:48:41.027371 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Aug 5 22:48:41.027468 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Aug 5 22:48:41.027565 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Aug 5 22:48:41.027664 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Aug 5 22:48:41.027773 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Aug 5 22:48:41.027871 kernel: pci 0000:00:04.0: reg 0x10: [io 
0xc000-0xc07f] Aug 5 22:48:41.027972 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Aug 5 22:48:41.028068 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Aug 5 22:48:41.028202 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Aug 5 22:48:41.028658 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Aug 5 22:48:41.028963 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Aug 5 22:48:41.029499 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Aug 5 22:48:41.029735 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Aug 5 22:48:41.029986 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Aug 5 22:48:41.030021 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 5 22:48:41.030047 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 5 22:48:41.030071 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 5 22:48:41.030136 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 5 22:48:41.030160 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 5 22:48:41.030183 kernel: iommu: Default domain type: Translated Aug 5 22:48:41.030207 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 5 22:48:41.030230 kernel: PCI: Using ACPI for IRQ routing Aug 5 22:48:41.030261 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 5 22:48:41.030284 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 5 22:48:41.030307 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Aug 5 22:48:41.030571 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 5 22:48:41.030797 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 5 22:48:41.031018 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 5 22:48:41.031051 kernel: vgaarb: loaded Aug 5 22:48:41.031298 kernel: clocksource: Switched to clocksource kvm-clock Aug 5 22:48:41.031347 
kernel: VFS: Disk quotas dquot_6.6.0 Aug 5 22:48:41.031381 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 5 22:48:41.031404 kernel: pnp: PnP ACPI init Aug 5 22:48:41.031651 kernel: pnp 00:03: [dma 2] Aug 5 22:48:41.031688 kernel: pnp: PnP ACPI: found 5 devices Aug 5 22:48:41.031711 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 5 22:48:41.031735 kernel: NET: Registered PF_INET protocol family Aug 5 22:48:41.031758 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 5 22:48:41.031782 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 5 22:48:41.031813 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 5 22:48:41.031837 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:48:41.031860 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 5 22:48:41.031884 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 5 22:48:41.031907 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 5 22:48:41.031930 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 5 22:48:41.031954 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 5 22:48:41.031976 kernel: NET: Registered PF_XDP protocol family Aug 5 22:48:41.032225 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 5 22:48:41.032534 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 5 22:48:41.032731 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 5 22:48:41.032927 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 5 22:48:41.033203 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Aug 5 22:48:41.033442 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 5 22:48:41.033670 kernel: pci 0000:00:00.0: 
Limiting direct PCI/PCI transfers Aug 5 22:48:41.033696 kernel: PCI: CLS 0 bytes, default 64 Aug 5 22:48:41.033714 kernel: Initialise system trusted keyrings Aug 5 22:48:41.033739 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 5 22:48:41.033757 kernel: Key type asymmetric registered Aug 5 22:48:41.033774 kernel: Asymmetric key parser 'x509' registered Aug 5 22:48:41.033791 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 5 22:48:41.033808 kernel: io scheduler mq-deadline registered Aug 5 22:48:41.033826 kernel: io scheduler kyber registered Aug 5 22:48:41.033843 kernel: io scheduler bfq registered Aug 5 22:48:41.033861 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 5 22:48:41.033880 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Aug 5 22:48:41.033901 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 5 22:48:41.033918 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 5 22:48:41.033936 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 5 22:48:41.033954 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 5 22:48:41.033971 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 5 22:48:41.033988 kernel: random: crng init done Aug 5 22:48:41.034006 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 5 22:48:41.034023 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 5 22:48:41.034041 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 5 22:48:41.034286 kernel: rtc_cmos 00:04: RTC can wake from S4 Aug 5 22:48:41.034317 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 5 22:48:41.034466 kernel: rtc_cmos 00:04: registered as rtc0 Aug 5 22:48:41.034617 kernel: rtc_cmos 00:04: setting system clock to 2024-08-05T22:48:40 UTC (1722898120) Aug 5 22:48:41.034765 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Aug 5 22:48:41.034796 kernel: amd_pstate: the _CPC object is not 
present in SBIOS or ACPI disabled Aug 5 22:48:41.034815 kernel: NET: Registered PF_INET6 protocol family Aug 5 22:48:41.034832 kernel: Segment Routing with IPv6 Aug 5 22:48:41.034856 kernel: In-situ OAM (IOAM) with IPv6 Aug 5 22:48:41.034874 kernel: NET: Registered PF_PACKET protocol family Aug 5 22:48:41.034891 kernel: Key type dns_resolver registered Aug 5 22:48:41.034908 kernel: IPI shorthand broadcast: enabled Aug 5 22:48:41.034926 kernel: sched_clock: Marking stable (946008492, 122355085)->(1071309050, -2945473) Aug 5 22:48:41.034943 kernel: registered taskstats version 1 Aug 5 22:48:41.034960 kernel: Loading compiled-in X.509 certificates Aug 5 22:48:41.034978 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: d8f193b4a33a492a73da7ce4522bbc835ec39532' Aug 5 22:48:41.034995 kernel: Key type .fscrypt registered Aug 5 22:48:41.035015 kernel: Key type fscrypt-provisioning registered Aug 5 22:48:41.035032 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 5 22:48:41.035049 kernel: ima: Allocated hash algorithm: sha1 Aug 5 22:48:41.035066 kernel: ima: No architecture policies found Aug 5 22:48:41.037147 kernel: clk: Disabling unused clocks Aug 5 22:48:41.037167 kernel: Freeing unused kernel image (initmem) memory: 49372K Aug 5 22:48:41.037185 kernel: Write protecting the kernel read-only data: 36864k Aug 5 22:48:41.037203 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Aug 5 22:48:41.037226 kernel: Run /init as init process Aug 5 22:48:41.037243 kernel: with arguments: Aug 5 22:48:41.037260 kernel: /init Aug 5 22:48:41.037277 kernel: with environment: Aug 5 22:48:41.037294 kernel: HOME=/ Aug 5 22:48:41.037311 kernel: TERM=linux Aug 5 22:48:41.037328 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 5 22:48:41.037351 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:48:41.037378 systemd[1]: Detected virtualization kvm. Aug 5 22:48:41.037397 systemd[1]: Detected architecture x86-64. Aug 5 22:48:41.037416 systemd[1]: Running in initrd. Aug 5 22:48:41.037434 systemd[1]: No hostname configured, using default hostname. Aug 5 22:48:41.037453 systemd[1]: Hostname set to . Aug 5 22:48:41.037472 systemd[1]: Initializing machine ID from VM UUID. Aug 5 22:48:41.037491 systemd[1]: Queued start job for default target initrd.target. Aug 5 22:48:41.037509 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:48:41.037532 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:48:41.037552 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Aug 5 22:48:41.037571 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:48:41.037590 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 5 22:48:41.037609 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 5 22:48:41.037631 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 5 22:48:41.037651 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 5 22:48:41.037673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:48:41.037692 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:48:41.037709 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:48:41.037723 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:48:41.037745 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:48:41.037760 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:48:41.037771 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:48:41.037782 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:48:41.037792 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 22:48:41.037803 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 22:48:41.037813 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:48:41.037824 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:48:41.037834 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:48:41.037845 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:48:41.037855 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Aug 5 22:48:41.037867 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:48:41.037878 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 22:48:41.037888 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 22:48:41.037899 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:48:41.037909 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:48:41.037940 systemd-journald[184]: Collecting audit messages is disabled. Aug 5 22:48:41.037967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:48:41.037980 systemd-journald[184]: Journal started Aug 5 22:48:41.038005 systemd-journald[184]: Runtime Journal (/run/log/journal/39c377fb9db84561bc928d653d0fb0a5) is 4.9M, max 39.3M, 34.4M free. Aug 5 22:48:41.050122 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:48:41.056732 systemd-modules-load[185]: Inserted module 'overlay' Aug 5 22:48:41.059173 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 22:48:41.060034 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:48:41.064951 systemd[1]: Finished systemd-fsck-usr.service. Aug 5 22:48:41.080306 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:48:41.083771 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:48:41.137140 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 5 22:48:41.137189 kernel: Bridge firewalling registered Aug 5 22:48:41.093000 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Aug 5 22:48:41.100768 systemd-modules-load[185]: Inserted module 'br_netfilter'
Aug 5 22:48:41.145543 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:48:41.146454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:48:41.156485 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:48:41.166341 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:48:41.167870 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:48:41.178053 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:48:41.187775 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:48:41.200383 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:48:41.204697 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:48:41.206312 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:48:41.213042 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 22:48:41.232759 dracut-cmdline[222]: dracut-dracut-053
Aug 5 22:48:41.235738 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695
Aug 5 22:48:41.241441 systemd-resolved[215]: Positive Trust Anchors:
Aug 5 22:48:41.241460 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:48:41.241501 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:48:41.245093 systemd-resolved[215]: Defaulting to hostname 'linux'.
Aug 5 22:48:41.246211 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:48:41.246859 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:48:41.340141 kernel: SCSI subsystem initialized
Aug 5 22:48:41.353112 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 22:48:41.368347 kernel: iscsi: registered transport (tcp)
Aug 5 22:48:41.398622 kernel: iscsi: registered transport (qla4xxx)
Aug 5 22:48:41.399026 kernel: QLogic iSCSI HBA Driver
Aug 5 22:48:41.469388 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:48:41.475253 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 22:48:41.509523 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 22:48:41.509621 kernel: device-mapper: uevent: version 1.0.3
Aug 5 22:48:41.509657 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 22:48:41.562169 kernel: raid6: sse2x4 gen() 11758 MB/s
Aug 5 22:48:41.580148 kernel: raid6: sse2x2 gen() 13797 MB/s
Aug 5 22:48:41.598369 kernel: raid6: sse2x1 gen() 9379 MB/s
Aug 5 22:48:41.598435 kernel: raid6: using algorithm sse2x2 gen() 13797 MB/s
Aug 5 22:48:41.617267 kernel: raid6: .... xor() 8372 MB/s, rmw enabled
Aug 5 22:48:41.617442 kernel: raid6: using ssse3x2 recovery algorithm
Aug 5 22:48:41.647315 kernel: xor: measuring software checksum speed
Aug 5 22:48:41.647401 kernel: prefetch64-sse : 17381 MB/sec
Aug 5 22:48:41.650481 kernel: generic_sse : 16545 MB/sec
Aug 5 22:48:41.650544 kernel: xor: using function: prefetch64-sse (17381 MB/sec)
Aug 5 22:48:41.868136 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 22:48:41.878092 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:48:41.887308 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:48:41.903813 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Aug 5 22:48:41.908712 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:48:41.916304 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 22:48:41.929479 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Aug 5 22:48:41.972291 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:48:41.980365 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:48:42.061969 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:48:42.069292 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 22:48:42.095616 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:48:42.100933 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:48:42.104280 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:48:42.105994 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:48:42.112275 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 22:48:42.142626 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:48:42.159134 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Aug 5 22:48:42.251641 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Aug 5 22:48:42.251799 kernel: libata version 3.00 loaded.
Aug 5 22:48:42.251816 kernel: ata_piix 0000:00:01.1: version 2.13
Aug 5 22:48:42.251962 kernel: scsi host0: ata_piix
Aug 5 22:48:42.252134 kernel: scsi host1: ata_piix
Aug 5 22:48:42.252289 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Aug 5 22:48:42.252310 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Aug 5 22:48:42.252330 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 22:48:42.252353 kernel: GPT:17805311 != 41943039
Aug 5 22:48:42.252365 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 22:48:42.252377 kernel: GPT:17805311 != 41943039
Aug 5 22:48:42.252388 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 22:48:42.252405 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:48:42.192600 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:48:42.192711 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:48:42.195671 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:48:42.196315 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:48:42.196367 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:48:42.197004 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:48:42.203348 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:48:42.281431 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:48:42.289298 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:48:42.315433 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:48:42.459168 kernel: BTRFS: device fsid 24d7efdf-5582-42d2-aafd-43221656b08f devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (461)
Aug 5 22:48:42.474118 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458)
Aug 5 22:48:42.490634 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 5 22:48:42.511452 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 5 22:48:42.512288 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 5 22:48:42.519602 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 5 22:48:42.526883 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:48:42.532323 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 22:48:42.567343 disk-uuid[507]: Primary Header is updated.
Aug 5 22:48:42.567343 disk-uuid[507]: Secondary Entries is updated.
Aug 5 22:48:42.567343 disk-uuid[507]: Secondary Header is updated.
Aug 5 22:48:42.589178 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:48:42.606130 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:48:44.097202 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:48:44.100002 disk-uuid[508]: The operation has completed successfully.
Aug 5 22:48:44.213628 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 22:48:44.213861 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 22:48:44.239241 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 22:48:44.259611 sh[521]: Success
Aug 5 22:48:44.284387 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Aug 5 22:48:44.402652 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 22:48:44.408342 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 22:48:44.413397 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 22:48:44.468711 kernel: BTRFS info (device dm-0): first mount of filesystem 24d7efdf-5582-42d2-aafd-43221656b08f
Aug 5 22:48:44.468767 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:48:44.483272 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 22:48:44.487573 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 22:48:44.490923 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 22:48:44.528782 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 22:48:44.531384 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 22:48:44.541403 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 22:48:44.548418 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 22:48:44.568142 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:48:44.568261 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:48:44.571130 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:48:44.581183 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:48:44.602238 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 22:48:44.606010 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:48:44.619600 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 22:48:44.628436 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 22:48:44.724938 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:48:44.732300 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:48:44.773656 systemd-networkd[706]: lo: Link UP
Aug 5 22:48:44.773665 systemd-networkd[706]: lo: Gained carrier
Aug 5 22:48:44.774909 systemd-networkd[706]: Enumeration completed
Aug 5 22:48:44.776514 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:48:44.777405 systemd[1]: Reached target network.target - Network.
Aug 5 22:48:44.778460 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:48:44.778463 systemd-networkd[706]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:48:44.781818 systemd-networkd[706]: eth0: Link UP
Aug 5 22:48:44.781822 systemd-networkd[706]: eth0: Gained carrier
Aug 5 22:48:44.781830 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:48:44.793095 systemd-networkd[706]: eth0: DHCPv4 address 172.24.4.9/24, gateway 172.24.4.1 acquired from 172.24.4.1
Aug 5 22:48:44.798473 ignition[606]: Ignition 2.19.0
Aug 5 22:48:44.798484 ignition[606]: Stage: fetch-offline
Aug 5 22:48:44.798547 ignition[606]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:48:44.798569 ignition[606]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 5 22:48:44.798662 ignition[606]: parsed url from cmdline: ""
Aug 5 22:48:44.798666 ignition[606]: no config URL provided
Aug 5 22:48:44.798671 ignition[606]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 22:48:44.798679 ignition[606]: no config at "/usr/lib/ignition/user.ign"
Aug 5 22:48:44.798684 ignition[606]: failed to fetch config: resource requires networking
Aug 5 22:48:44.803262 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:48:44.798877 ignition[606]: Ignition finished successfully
Aug 5 22:48:44.819257 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 5 22:48:44.833424 ignition[715]: Ignition 2.19.0
Aug 5 22:48:44.833438 ignition[715]: Stage: fetch
Aug 5 22:48:44.833661 ignition[715]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:48:44.833674 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 5 22:48:44.833791 ignition[715]: parsed url from cmdline: ""
Aug 5 22:48:44.833797 ignition[715]: no config URL provided
Aug 5 22:48:44.833804 ignition[715]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 22:48:44.833818 ignition[715]: no config at "/usr/lib/ignition/user.ign"
Aug 5 22:48:44.833981 ignition[715]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Aug 5 22:48:44.834468 ignition[715]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Aug 5 22:48:44.834542 ignition[715]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Aug 5 22:48:45.046903 ignition[715]: GET result: OK
Aug 5 22:48:45.047389 ignition[715]: parsing config with SHA512: df56cc160b7c76a7acff48aaa3865365a3cc51aacb8122bfd6b72d908bb21fc38c571659fd343d512785fde63357e6af885abbad74982217eea1f43f98711604
Aug 5 22:48:45.060558 unknown[715]: fetched base config from "system"
Aug 5 22:48:45.060585 unknown[715]: fetched base config from "system"
Aug 5 22:48:45.062044 ignition[715]: fetch: fetch complete
Aug 5 22:48:45.060611 unknown[715]: fetched user config from "openstack"
Aug 5 22:48:45.062065 ignition[715]: fetch: fetch passed
Aug 5 22:48:45.066226 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 5 22:48:45.062230 ignition[715]: Ignition finished successfully
Aug 5 22:48:45.079428 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 22:48:45.113379 ignition[722]: Ignition 2.19.0
Aug 5 22:48:45.113406 ignition[722]: Stage: kargs
Aug 5 22:48:45.113834 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:48:45.113861 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 5 22:48:45.116194 ignition[722]: kargs: kargs passed
Aug 5 22:48:45.118561 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 22:48:45.116316 ignition[722]: Ignition finished successfully
Aug 5 22:48:45.127373 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 22:48:45.154191 ignition[729]: Ignition 2.19.0
Aug 5 22:48:45.154213 ignition[729]: Stage: disks
Aug 5 22:48:45.154602 ignition[729]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:48:45.154634 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 5 22:48:45.160579 ignition[729]: disks: disks passed
Aug 5 22:48:45.160669 ignition[729]: Ignition finished successfully
Aug 5 22:48:45.162290 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 22:48:45.164913 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 22:48:45.166515 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 22:48:45.168712 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:48:45.170820 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:48:45.172833 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:48:45.188369 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 22:48:45.220425 systemd-fsck[738]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Aug 5 22:48:45.231927 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 22:48:45.240351 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 22:48:45.420097 kernel: EXT4-fs (vda9): mounted filesystem b6919f21-4a66-43c1-b816-e6fe5d1b75ef r/w with ordered data mode. Quota mode: none.
Aug 5 22:48:45.419656 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 22:48:45.421182 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:48:45.436330 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:48:45.440932 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 22:48:45.442013 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 22:48:45.445014 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Aug 5 22:48:45.447038 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 22:48:45.448363 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:48:45.455096 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (746)
Aug 5 22:48:45.462593 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:48:45.462629 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:48:45.465107 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:48:45.466882 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 22:48:45.475138 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:48:45.478430 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 22:48:45.489343 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:48:45.596380 initrd-setup-root[774]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 22:48:45.601253 initrd-setup-root[781]: cut: /sysroot/etc/group: No such file or directory
Aug 5 22:48:45.606395 initrd-setup-root[788]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 22:48:45.613568 initrd-setup-root[795]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 22:48:45.715397 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 22:48:45.727197 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 22:48:45.732688 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 22:48:45.743303 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 22:48:45.749106 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:48:45.773595 ignition[862]: INFO : Ignition 2.19.0
Aug 5 22:48:45.773595 ignition[862]: INFO : Stage: mount
Aug 5 22:48:45.773595 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:48:45.773595 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 5 22:48:45.776726 ignition[862]: INFO : mount: mount passed
Aug 5 22:48:45.776726 ignition[862]: INFO : Ignition finished successfully
Aug 5 22:48:45.778887 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 5 22:48:45.795550 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 22:48:46.177959 systemd-networkd[706]: eth0: Gained IPv6LL
Aug 5 22:48:52.678826 coreos-metadata[748]: Aug 05 22:48:52.678 WARN failed to locate config-drive, using the metadata service API instead
Aug 5 22:48:52.721579 coreos-metadata[748]: Aug 05 22:48:52.721 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Aug 5 22:48:52.734018 coreos-metadata[748]: Aug 05 22:48:52.733 INFO Fetch successful
Aug 5 22:48:52.734018 coreos-metadata[748]: Aug 05 22:48:52.733 INFO wrote hostname ci-4012-1-0-4-e6fc6d4d35.novalocal to /sysroot/etc/hostname
Aug 5 22:48:52.738341 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Aug 5 22:48:52.738549 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Aug 5 22:48:52.745214 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 5 22:48:52.766452 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:48:52.775130 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (880)
Aug 5 22:48:52.779460 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:48:52.779546 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:48:52.781465 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:48:52.786119 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:48:52.789508 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:48:52.817897 ignition[897]: INFO : Ignition 2.19.0
Aug 5 22:48:52.818836 ignition[897]: INFO : Stage: files
Aug 5 22:48:52.819615 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:48:52.820466 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 5 22:48:52.821722 ignition[897]: DEBUG : files: compiled without relabeling support, skipping
Aug 5 22:48:52.822822 ignition[897]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 5 22:48:52.822822 ignition[897]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 5 22:48:52.826888 ignition[897]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 5 22:48:52.828004 ignition[897]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 5 22:48:52.828004 ignition[897]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 5 22:48:52.827531 unknown[897]: wrote ssh authorized keys file for user: core
Aug 5 22:48:52.833403 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 5 22:48:52.833403 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 5 22:48:53.565336 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 5 22:48:53.906234 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 5 22:48:53.906234 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 5 22:48:53.906234 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 5 22:48:53.906234 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:48:53.915733 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:48:53.915733 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:48:53.915733 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:48:53.915733 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:48:53.915733 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:48:53.915733 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:48:53.915733 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:48:53.915733 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Aug 5 22:48:53.915733 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Aug 5 22:48:53.915733 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Aug 5 22:48:53.915733 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Aug 5 22:48:54.296953 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 5 22:48:56.054930 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Aug 5 22:48:56.054930 ignition[897]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 5 22:48:56.058277 ignition[897]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:48:56.059812 ignition[897]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:48:56.059812 ignition[897]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 5 22:48:56.059812 ignition[897]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 5 22:48:56.059812 ignition[897]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 5 22:48:56.059812 ignition[897]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:48:56.059812 ignition[897]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:48:56.059812 ignition[897]: INFO : files: files passed
Aug 5 22:48:56.059812 ignition[897]: INFO : Ignition finished successfully
Aug 5 22:48:56.060628 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 5 22:48:56.069396 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 5 22:48:56.073201 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 5 22:48:56.078841 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 5 22:48:56.078966 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 5 22:48:56.088736 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:48:56.088736 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:48:56.093398 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:48:56.091277 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:48:56.093371 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 5 22:48:56.099281 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 5 22:48:56.155955 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 5 22:48:56.156226 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 5 22:48:56.158727 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 5 22:48:56.159969 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 5 22:48:56.161847 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 5 22:48:56.175343 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 5 22:48:56.190955 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:48:56.205426 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 5 22:48:56.216646 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:48:56.218690 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:48:56.221463 systemd[1]: Stopped target timers.target - Timer Units.
Aug 5 22:48:56.223948 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 5 22:48:56.224248 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:48:56.226938 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 5 22:48:56.228414 systemd[1]: Stopped target basic.target - Basic System.
Aug 5 22:48:56.230812 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 5 22:48:56.232959 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:48:56.235061 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 5 22:48:56.237635 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 5 22:48:56.240128 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:48:56.242689 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 5 22:48:56.245034 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 5 22:48:56.247558 systemd[1]: Stopped target swap.target - Swaps.
Aug 5 22:48:56.249844 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 5 22:48:56.250049 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:48:56.252715 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:48:56.254214 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:48:56.256376 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 5 22:48:56.257155 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:48:56.259022 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 5 22:48:56.259291 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:48:56.262779 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 5 22:48:56.263019 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:48:56.264611 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 5 22:48:56.264823 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 5 22:48:56.278244 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 5 22:48:56.283510 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 5 22:48:56.284669 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 5 22:48:56.284935 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:48:56.288057 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 5 22:48:56.288351 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:48:56.303722 ignition[951]: INFO : Ignition 2.19.0
Aug 5 22:48:56.303722 ignition[951]: INFO : Stage: umount
Aug 5 22:48:56.305192 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 5 22:48:56.308468 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:48:56.308468 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 5 22:48:56.308468 ignition[951]: INFO : umount: umount passed
Aug 5 22:48:56.308468 ignition[951]: INFO : Ignition finished successfully
Aug 5 22:48:56.305282 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 5 22:48:56.307220 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 5 22:48:56.307915 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 5 22:48:56.311783 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 5 22:48:56.311838 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 5 22:48:56.314206 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 5 22:48:56.314247 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 5 22:48:56.314932 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 5 22:48:56.314971 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 5 22:48:56.316124 systemd[1]: Stopped target network.target - Network.
Aug 5 22:48:56.318257 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 5 22:48:56.318302 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:48:56.318909 systemd[1]: Stopped target paths.target - Path Units.
Aug 5 22:48:56.319354 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 5 22:48:56.319444 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:48:56.320601 systemd[1]: Stopped target slices.target - Slice Units.
Aug 5 22:48:56.321605 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 5 22:48:56.322665 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 5 22:48:56.322709 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:48:56.323646 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 5 22:48:56.323680 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:48:56.326615 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 5 22:48:56.326658 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 5 22:48:56.327775 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 5 22:48:56.327814 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 5 22:48:56.329183 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 5 22:48:56.330281 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 5 22:48:56.333115 systemd-networkd[706]: eth0: DHCPv6 lease lost
Aug 5 22:48:56.334030 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 5 22:48:56.334137 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 5 22:48:56.335455 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 5 22:48:56.335501 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:48:56.342480 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 5 22:48:56.343326 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 5 22:48:56.343383 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:48:56.344093 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:48:56.345660 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 5 22:48:56.345753 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 5 22:48:56.354552 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 22:48:56.354689 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:48:56.357385 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 22:48:56.357996 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 22:48:56.360029 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 22:48:56.360114 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:48:56.363367 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 22:48:56.363407 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:48:56.364350 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 22:48:56.364398 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:48:56.365037 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 22:48:56.365094 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:48:56.365655 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:48:56.365697 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:48:56.380292 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 22:48:56.381163 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 22:48:56.381217 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:48:56.381767 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 5 22:48:56.381805 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:48:56.382326 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 22:48:56.382363 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:48:56.382900 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 5 22:48:56.382937 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:48:56.383491 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 22:48:56.383529 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:48:56.384682 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 5 22:48:56.384725 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:48:56.385771 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:48:56.385809 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:48:56.387872 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 5 22:48:56.391534 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 22:48:56.391751 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 22:48:57.628541 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 5 22:48:57.628724 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 5 22:48:57.629949 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 22:48:57.631446 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 22:48:57.631519 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 22:48:57.642414 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 22:48:57.658315 systemd[1]: Switching root.
Aug 5 22:48:57.705150 systemd-journald[184]: Journal stopped
Aug 5 22:49:00.102524 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Aug 5 22:49:00.102590 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 22:49:00.102611 kernel: SELinux: policy capability open_perms=1
Aug 5 22:49:00.102625 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 22:49:00.102639 kernel: SELinux: policy capability always_check_network=0
Aug 5 22:49:00.102653 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 22:49:00.102667 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 22:49:00.102685 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 22:49:00.102699 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 22:49:00.102720 systemd[1]: Successfully loaded SELinux policy in 73.518ms.
Aug 5 22:49:00.102745 kernel: audit: type=1403 audit(1722898138.259:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 22:49:00.102760 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.612ms.
Aug 5 22:49:00.102777 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:49:00.102793 systemd[1]: Detected virtualization kvm.
Aug 5 22:49:00.102808 systemd[1]: Detected architecture x86-64.
Aug 5 22:49:00.102823 systemd[1]: Detected first boot.
Aug 5 22:49:00.102840 systemd[1]: Hostname set to .
Aug 5 22:49:00.102855 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:49:00.102870 zram_generator::config[993]: No configuration found.
Aug 5 22:49:00.102887 systemd[1]: Populated /etc with preset unit settings.
Aug 5 22:49:00.102901 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 5 22:49:00.102913 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 5 22:49:00.102926 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:49:00.102940 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 22:49:00.102956 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 22:49:00.102969 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 22:49:00.102983 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 22:49:00.102995 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 22:49:00.103009 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 22:49:00.103022 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 22:49:00.103035 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 22:49:00.103049 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:49:00.103062 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:49:00.103106 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 22:49:00.103123 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 22:49:00.103136 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 22:49:00.103150 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:49:00.103163 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 5 22:49:00.103176 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:49:00.103188 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 5 22:49:00.103203 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 5 22:49:00.103220 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:49:00.103233 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 22:49:00.103246 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:49:00.103260 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:49:00.103273 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:49:00.103286 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:49:00.103301 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 22:49:00.103317 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 22:49:00.103330 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:49:00.103343 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:49:00.103356 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:49:00.103369 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 22:49:00.103381 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 22:49:00.103394 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 22:49:00.103407 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 22:49:00.103420 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:49:00.103436 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 22:49:00.103449 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 22:49:00.103462 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 22:49:00.103475 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 22:49:00.103488 systemd[1]: Reached target machines.target - Containers.
Aug 5 22:49:00.103501 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 22:49:00.103514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:49:00.103527 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:49:00.103540 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 22:49:00.103555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:49:00.103568 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:49:00.103583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:49:00.103596 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 22:49:00.103609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:49:00.103622 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 22:49:00.103635 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 5 22:49:00.103648 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 5 22:49:00.103663 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 5 22:49:00.103676 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 5 22:49:00.103689 kernel: fuse: init (API version 7.39)
Aug 5 22:49:00.103701 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:49:00.103714 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:49:00.103727 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 22:49:00.103740 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 22:49:00.103753 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:49:00.103766 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 5 22:49:00.103782 systemd[1]: Stopped verity-setup.service.
Aug 5 22:49:00.103795 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:49:00.103808 kernel: loop: module loaded
Aug 5 22:49:00.103820 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 22:49:00.103833 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 22:49:00.103846 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 22:49:00.103860 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 22:49:00.103873 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 22:49:00.103904 systemd-journald[1074]: Collecting audit messages is disabled.
Aug 5 22:49:00.103939 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 22:49:00.103952 systemd-journald[1074]: Journal started
Aug 5 22:49:00.103978 systemd-journald[1074]: Runtime Journal (/run/log/journal/39c377fb9db84561bc928d653d0fb0a5) is 4.9M, max 39.3M, 34.4M free.
Aug 5 22:48:59.805184 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 22:48:59.835421 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 5 22:48:59.835808 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 5 22:49:00.109154 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:49:00.111127 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:49:00.111998 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 22:49:00.112222 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 22:49:00.113029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:49:00.113233 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:49:00.114510 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:49:00.114656 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:49:00.116624 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 22:49:00.116779 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 22:49:00.117610 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:49:00.117767 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:49:00.118527 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:49:00.119701 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 22:49:00.121448 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 22:49:00.136322 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 22:49:00.145328 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 22:49:00.152153 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 22:49:00.155135 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 22:49:00.155169 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:49:00.159098 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 22:49:00.164839 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 22:49:00.190330 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 22:49:00.191029 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:49:00.193108 kernel: ACPI: bus type drm_connector registered
Aug 5 22:49:00.194604 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 22:49:00.197030 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 22:49:00.198409 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:49:00.205958 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 22:49:00.207187 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:49:00.210228 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:49:00.217176 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 22:49:00.219495 systemd-journald[1074]: Time spent on flushing to /var/log/journal/39c377fb9db84561bc928d653d0fb0a5 is 74.831ms for 930 entries.
Aug 5 22:49:00.219495 systemd-journald[1074]: System Journal (/var/log/journal/39c377fb9db84561bc928d653d0fb0a5) is 8.0M, max 584.8M, 576.8M free.
Aug 5 22:49:00.321046 systemd-journald[1074]: Received client request to flush runtime journal.
Aug 5 22:49:00.321227 kernel: loop0: detected capacity change from 0 to 80568
Aug 5 22:49:00.321251 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 22:49:00.225358 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:49:00.227744 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:49:00.227905 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:49:00.228698 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 22:49:00.229486 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 22:49:00.230345 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 22:49:00.239338 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 22:49:00.272568 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 22:49:00.273392 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 22:49:00.279300 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 22:49:00.280237 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:49:00.324752 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 22:49:00.336238 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:49:00.342293 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 22:49:00.353227 udevadm[1140]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 5 22:49:00.367282 systemd-tmpfiles[1124]: ACLs are not supported, ignoring.
Aug 5 22:49:00.367304 systemd-tmpfiles[1124]: ACLs are not supported, ignoring.
Aug 5 22:49:00.373146 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:49:00.380261 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 22:49:00.393877 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 22:49:00.397732 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 22:49:00.406091 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 22:49:00.445127 kernel: loop1: detected capacity change from 0 to 139760
Aug 5 22:49:00.504397 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 22:49:00.518113 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:49:00.568601 kernel: loop2: detected capacity change from 0 to 8
Aug 5 22:49:00.568101 systemd-tmpfiles[1148]: ACLs are not supported, ignoring.
Aug 5 22:49:00.568117 systemd-tmpfiles[1148]: ACLs are not supported, ignoring.
Aug 5 22:49:00.572990 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:49:00.589105 kernel: loop3: detected capacity change from 0 to 211296
Aug 5 22:49:00.696146 kernel: loop4: detected capacity change from 0 to 80568
Aug 5 22:49:00.891831 kernel: loop5: detected capacity change from 0 to 139760
Aug 5 22:49:00.951164 kernel: loop6: detected capacity change from 0 to 8
Aug 5 22:49:00.956245 kernel: loop7: detected capacity change from 0 to 211296
Aug 5 22:49:01.011456 (sd-merge)[1154]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Aug 5 22:49:01.012654 (sd-merge)[1154]: Merged extensions into '/usr'.
Aug 5 22:49:01.025721 systemd[1]: Reloading requested from client PID 1123 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 22:49:01.025742 systemd[1]: Reloading...
Aug 5 22:49:01.118103 zram_generator::config[1178]: No configuration found.
Aug 5 22:49:01.333106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:49:01.392673 systemd[1]: Reloading finished in 366 ms.
Aug 5 22:49:01.421714 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 22:49:01.423145 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 22:49:01.435298 systemd[1]: Starting ensure-sysext.service...
Aug 5 22:49:01.438316 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:49:01.445312 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:49:01.451226 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)...
Aug 5 22:49:01.451244 systemd[1]: Reloading...
Aug 5 22:49:01.473977 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 22:49:01.474478 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 22:49:01.475420 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 22:49:01.475769 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Aug 5 22:49:01.475841 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Aug 5 22:49:01.482412 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:49:01.482429 systemd-tmpfiles[1235]: Skipping /boot
Aug 5 22:49:01.505011 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:49:01.505024 systemd-tmpfiles[1235]: Skipping /boot
Aug 5 22:49:01.513551 systemd-udevd[1236]: Using default interface naming scheme 'v255'.
Aug 5 22:49:01.545112 zram_generator::config[1261]: No configuration found.
Aug 5 22:49:01.625153 ldconfig[1111]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 22:49:01.659098 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1286)
Aug 5 22:49:01.698102 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1284)
Aug 5 22:49:01.753124 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Aug 5 22:49:01.776417 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 5 22:49:01.780018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:49:01.798114 kernel: ACPI: button: Power Button [PWRF]
Aug 5 22:49:01.844119 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 5 22:49:01.863789 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 5 22:49:01.864381 systemd[1]: Reloading finished in 412 ms.
Aug 5 22:49:01.883014 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:49:01.884047 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 22:49:01.895529 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:49:01.912794 systemd[1]: Finished ensure-sysext.service.
Aug 5 22:49:01.930572 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:49:01.932064 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:49:01.937312 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:49:01.963489 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 22:49:01.967378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:49:01.970943 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:49:01.975834 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:49:01.985161 kernel: mousedev: PS/2 mouse device common for all mice
Aug 5 22:49:01.989454 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:49:02.001278 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:49:02.004920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:49:02.008835 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 22:49:02.021406 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 22:49:02.028641 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:49:02.035854 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:49:02.047350 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Aug 5 22:49:02.044441 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 5 22:49:02.047657 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 22:49:02.050248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:49:02.055288 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Aug 5 22:49:02.052698 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:49:02.053522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:49:02.053708 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:49:02.062516 kernel: Console: switching to colour dummy device 80x25
Aug 5 22:49:02.063008 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:49:02.064851 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Aug 5 22:49:02.064928 kernel: [drm] features: -context_init
Aug 5 22:49:02.066538 kernel: [drm] number of scanouts: 1
Aug 5 22:49:02.066597 kernel: [drm] number of cap sets: 0
Aug 5 22:49:02.066642 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:49:02.066993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:49:02.067196 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:49:02.067463 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:49:02.067947 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:49:02.073129 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Aug 5 22:49:02.082213 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Aug 5 22:49:02.082298 kernel: Console: switching to colour frame buffer device 128x48
Aug 5 22:49:02.078896 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:49:02.079088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:49:02.088399 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 22:49:02.096942 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Aug 5 22:49:02.105997 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:49:02.106219 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:49:02.111367 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 5 22:49:02.133442 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:49:02.145382 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 5 22:49:02.147745 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 5 22:49:02.155558 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 5 22:49:02.170130 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 5 22:49:02.201104 lvm[1384]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:49:02.247362 augenrules[1395]: No rules Aug 5 22:49:02.247491 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 5 22:49:02.249056 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:49:02.258271 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 5 22:49:02.262216 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:49:02.270351 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:49:02.283887 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 5 22:49:02.295252 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 5 22:49:02.321357 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 5 22:49:02.323571 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 5 22:49:02.326462 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 5 22:49:02.327234 systemd[1]: Reached target time-set.target - System Time Set. 
Aug 5 22:49:02.348934 systemd-networkd[1359]: lo: Link UP Aug 5 22:49:02.349016 systemd-networkd[1359]: lo: Gained carrier Aug 5 22:49:02.350807 systemd-resolved[1360]: Positive Trust Anchors: Aug 5 22:49:02.351057 systemd-resolved[1360]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:49:02.351193 systemd-resolved[1360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:49:02.351558 systemd-networkd[1359]: Enumeration completed Aug 5 22:49:02.351667 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:49:02.353565 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:49:02.353578 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:49:02.357190 systemd-resolved[1360]: Using system hostname 'ci-4012-1-0-4-e6fc6d4d35.novalocal'. Aug 5 22:49:02.357274 systemd-networkd[1359]: eth0: Link UP Aug 5 22:49:02.357279 systemd-networkd[1359]: eth0: Gained carrier Aug 5 22:49:02.357302 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:49:02.359289 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 5 22:49:02.361392 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Aug 5 22:49:02.366290 systemd[1]: Reached target network.target - Network. Aug 5 22:49:02.366865 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:49:02.377153 systemd-networkd[1359]: eth0: DHCPv4 address 172.24.4.9/24, gateway 172.24.4.1 acquired from 172.24.4.1 Aug 5 22:49:02.378524 systemd-timesyncd[1361]: Network configuration changed, trying to establish connection. Aug 5 22:49:02.389202 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:49:02.409972 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 5 22:49:02.413161 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 5 22:49:02.413208 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:49:02.413750 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 5 22:49:02.414228 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 5 22:49:02.414801 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 5 22:49:02.416945 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 5 22:49:02.418441 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 5 22:49:02.419876 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 5 22:49:02.419985 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:49:02.421507 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:49:02.425116 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Aug 5 22:49:02.428219 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 5 22:49:02.440875 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 5 22:49:02.442687 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 5 22:49:02.445128 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:49:02.446413 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:49:02.447948 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:49:02.448058 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:49:02.449470 systemd[1]: Starting containerd.service - containerd container runtime... Aug 5 22:49:02.455365 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 5 22:49:02.462524 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 5 22:49:02.468340 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 5 22:49:02.472829 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 5 22:49:02.473535 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 5 22:49:02.476288 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 5 22:49:02.484236 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 5 22:49:02.486993 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 5 22:49:02.497277 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 5 22:49:02.505231 systemd[1]: Starting systemd-logind.service - User Login Management... 
Aug 5 22:49:02.507280 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 5 22:49:02.507847 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 5 22:49:02.508588 jq[1421]: false Aug 5 22:49:02.510243 systemd[1]: Starting update-engine.service - Update Engine... Aug 5 22:49:02.517237 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 5 22:49:02.527510 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 5 22:49:02.527699 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 5 22:49:02.544053 dbus-daemon[1419]: [system] SELinux support is enabled Aug 5 22:49:02.548356 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 5 22:49:02.548525 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 5 22:49:02.552304 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 5 22:49:02.560363 jq[1429]: true Aug 5 22:49:02.569619 update_engine[1428]: I0805 22:49:02.569572 1428 main.cc:92] Flatcar Update Engine starting Aug 5 22:49:02.574882 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 5 22:49:02.574930 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 5 22:49:02.577772 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 5 22:49:02.578776 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Aug 5 22:49:02.583673 systemd[1]: Started update-engine.service - Update Engine. Aug 5 22:49:02.587838 update_engine[1428]: I0805 22:49:02.587677 1428 update_check_scheduler.cc:74] Next update check in 2m20s Aug 5 22:49:02.588266 tar[1433]: linux-amd64/helm Aug 5 22:49:02.609382 extend-filesystems[1422]: Found loop4 Aug 5 22:49:02.609382 extend-filesystems[1422]: Found loop5 Aug 5 22:49:02.609382 extend-filesystems[1422]: Found loop6 Aug 5 22:49:02.609382 extend-filesystems[1422]: Found loop7 Aug 5 22:49:02.609382 extend-filesystems[1422]: Found vda Aug 5 22:49:02.609382 extend-filesystems[1422]: Found vda1 Aug 5 22:49:02.609382 extend-filesystems[1422]: Found vda2 Aug 5 22:49:02.609382 extend-filesystems[1422]: Found vda3 Aug 5 22:49:02.588700 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 5 22:49:02.664692 extend-filesystems[1422]: Found usr Aug 5 22:49:02.664692 extend-filesystems[1422]: Found vda4 Aug 5 22:49:02.664692 extend-filesystems[1422]: Found vda6 Aug 5 22:49:02.664692 extend-filesystems[1422]: Found vda7 Aug 5 22:49:02.664692 extend-filesystems[1422]: Found vda9 Aug 5 22:49:02.664692 extend-filesystems[1422]: Checking size of /dev/vda9 Aug 5 22:49:02.597253 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 5 22:49:02.701870 extend-filesystems[1422]: Resized partition /dev/vda9 Aug 5 22:49:02.709168 jq[1446]: true Aug 5 22:49:02.626415 systemd[1]: motdgen.service: Deactivated successfully. Aug 5 22:49:02.709439 extend-filesystems[1465]: resize2fs 1.47.0 (5-Feb-2023) Aug 5 22:49:02.719967 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Aug 5 22:49:02.626580 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 5 22:49:02.716443 systemd-logind[1427]: New seat seat0. 
Aug 5 22:49:02.729535 systemd-logind[1427]: Watching system buttons on /dev/input/event1 (Power Button) Aug 5 22:49:02.729563 systemd-logind[1427]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 5 22:49:02.729815 systemd[1]: Started systemd-logind.service - User Login Management. Aug 5 22:49:02.755103 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1280) Aug 5 22:49:02.875112 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Aug 5 22:49:02.876971 locksmithd[1451]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 5 22:49:03.014272 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 5 22:49:03.014272 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 3 Aug 5 22:49:03.014272 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Aug 5 22:49:03.024817 extend-filesystems[1422]: Resized filesystem in /dev/vda9 Aug 5 22:49:03.015543 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 5 22:49:03.025272 bash[1473]: Updated "/home/core/.ssh/authorized_keys" Aug 5 22:49:03.015794 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 5 22:49:03.026536 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 5 22:49:03.044050 systemd[1]: Starting sshkeys.service... Aug 5 22:49:03.067306 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 5 22:49:03.075228 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Aug 5 22:49:03.123192 containerd[1444]: time="2024-08-05T22:49:03.123100279Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Aug 5 22:49:03.163677 containerd[1444]: time="2024-08-05T22:49:03.163608603Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 5 22:49:03.163677 containerd[1444]: time="2024-08-05T22:49:03.163669687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:49:03.165450 containerd[1444]: time="2024-08-05T22:49:03.165258687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:49:03.165450 containerd[1444]: time="2024-08-05T22:49:03.165303181Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:49:03.165579 containerd[1444]: time="2024-08-05T22:49:03.165537350Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:49:03.165579 containerd[1444]: time="2024-08-05T22:49:03.165557428Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 5 22:49:03.165669 containerd[1444]: time="2024-08-05T22:49:03.165639762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 5 22:49:03.165725 containerd[1444]: time="2024-08-05T22:49:03.165704443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:49:03.165787 containerd[1444]: time="2024-08-05T22:49:03.165725954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 5 22:49:03.165832 containerd[1444]: time="2024-08-05T22:49:03.165803649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:49:03.166082 containerd[1444]: time="2024-08-05T22:49:03.166039121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 5 22:49:03.166142 containerd[1444]: time="2024-08-05T22:49:03.166090608Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 5 22:49:03.166142 containerd[1444]: time="2024-08-05T22:49:03.166106648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:49:03.166252 containerd[1444]: time="2024-08-05T22:49:03.166226442Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:49:03.166296 containerd[1444]: time="2024-08-05T22:49:03.166250948Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 5 22:49:03.166325 containerd[1444]: time="2024-08-05T22:49:03.166308576Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 5 22:49:03.166844 containerd[1444]: time="2024-08-05T22:49:03.166556010Z" level=info msg="metadata content store policy set" policy=shared
Aug 5 22:49:03.179263 containerd[1444]: time="2024-08-05T22:49:03.179196678Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 5 22:49:03.180548 containerd[1444]: time="2024-08-05T22:49:03.180331126Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 5 22:49:03.180548 containerd[1444]: time="2024-08-05T22:49:03.180384977Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 5 22:49:03.180548 containerd[1444]: time="2024-08-05T22:49:03.180444037Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 5 22:49:03.181066 containerd[1444]: time="2024-08-05T22:49:03.180708944Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 5 22:49:03.181066 containerd[1444]: time="2024-08-05T22:49:03.180735584Z" level=info msg="NRI interface is disabled by configuration."
Aug 5 22:49:03.181066 containerd[1444]: time="2024-08-05T22:49:03.180758026Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 5 22:49:03.181844 containerd[1444]: time="2024-08-05T22:49:03.181683392Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 5 22:49:03.181844 containerd[1444]: time="2024-08-05T22:49:03.181711304Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 5 22:49:03.181844 containerd[1444]: time="2024-08-05T22:49:03.181729098Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 5 22:49:03.181844 containerd[1444]: time="2024-08-05T22:49:03.181772399Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 5 22:49:03.181844 containerd[1444]: time="2024-08-05T22:49:03.181794630Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 5 22:49:03.182202 containerd[1444]: time="2024-08-05T22:49:03.181819718Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 5 22:49:03.182202 containerd[1444]: time="2024-08-05T22:49:03.181948930Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 5 22:49:03.182202 containerd[1444]: time="2024-08-05T22:49:03.181967224Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 5 22:49:03.182202 containerd[1444]: time="2024-08-05T22:49:03.181987602Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 5 22:49:03.182829 containerd[1444]: time="2024-08-05T22:49:03.182404023Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 5 22:49:03.182829 containerd[1444]: time="2024-08-05T22:49:03.182428830Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 5 22:49:03.182829 containerd[1444]: time="2024-08-05T22:49:03.182444439Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 5 22:49:03.183341 containerd[1444]: time="2024-08-05T22:49:03.182978831Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 5 22:49:03.184268 containerd[1444]: time="2024-08-05T22:49:03.184013141Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 5 22:49:03.184268 containerd[1444]: time="2024-08-05T22:49:03.184228054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.184686 containerd[1444]: time="2024-08-05T22:49:03.184249404Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 5 22:49:03.184686 containerd[1444]: time="2024-08-05T22:49:03.184595994Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 5 22:49:03.184887 containerd[1444]: time="2024-08-05T22:49:03.184796070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.184887 containerd[1444]: time="2024-08-05T22:49:03.184825665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.185490 containerd[1444]: time="2024-08-05T22:49:03.185255982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.185490 containerd[1444]: time="2024-08-05T22:49:03.185283985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.185490 containerd[1444]: time="2024-08-05T22:49:03.185322146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.185490 containerd[1444]: time="2024-08-05T22:49:03.185345169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.185490 containerd[1444]: time="2024-08-05T22:49:03.185361981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.186187 containerd[1444]: time="2024-08-05T22:49:03.185737525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.186187 containerd[1444]: time="2024-08-05T22:49:03.185772401Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 5 22:49:03.187085 containerd[1444]: time="2024-08-05T22:49:03.186452396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.187481 containerd[1444]: time="2024-08-05T22:49:03.187164021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.187481 containerd[1444]: time="2024-08-05T22:49:03.187191182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.187481 containerd[1444]: time="2024-08-05T22:49:03.187229263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.187481 containerd[1444]: time="2024-08-05T22:49:03.187251455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.187481 containerd[1444]: time="2024-08-05T22:49:03.187272715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.187481 containerd[1444]: time="2024-08-05T22:49:03.187289406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.187481 containerd[1444]: time="2024-08-05T22:49:03.187324542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 5 22:49:03.188297 containerd[1444]: time="2024-08-05T22:49:03.188089967Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 5 22:49:03.188916 containerd[1444]: time="2024-08-05T22:49:03.188518090Z" level=info msg="Connect containerd service"
Aug 5 22:49:03.188916 containerd[1444]: time="2024-08-05T22:49:03.188841777Z" level=info msg="using legacy CRI server"
Aug 5 22:49:03.188916 containerd[1444]: time="2024-08-05T22:49:03.188856204Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 5 22:49:03.189248 containerd[1444]: time="2024-08-05T22:49:03.189228012Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 5 22:49:03.192106 containerd[1444]: time="2024-08-05T22:49:03.191759049Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 5 22:49:03.192106 containerd[1444]: time="2024-08-05T22:49:03.191834140Z" level=info msg="Start subscribing containerd event"
Aug 5 22:49:03.192106 containerd[1444]: time="2024-08-05T22:49:03.191890265Z" level=info msg="Start recovering state"
Aug 5 22:49:03.192106 containerd[1444]: time="2024-08-05T22:49:03.191963532Z" level=info msg="Start event monitor"
Aug 5 22:49:03.192106 containerd[1444]: time="2024-08-05T22:49:03.191977729Z" level=info msg="Start snapshots syncer"
Aug 5 22:49:03.192106 containerd[1444]: time="2024-08-05T22:49:03.191987988Z" level=info msg="Start cni network conf syncer for default"
Aug 5 22:49:03.192106 containerd[1444]: time="2024-08-05T22:49:03.191997055Z" level=info msg="Start streaming server"
Aug 5 22:49:03.193361 containerd[1444]: time="2024-08-05T22:49:03.192799620Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 5 22:49:03.193361 containerd[1444]: time="2024-08-05T22:49:03.192836479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 5 22:49:03.193361 containerd[1444]: time="2024-08-05T22:49:03.193206213Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 5 22:49:03.193361 containerd[1444]: time="2024-08-05T22:49:03.193228224Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 5 22:49:03.194022 containerd[1444]: time="2024-08-05T22:49:03.193911776Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 5 22:49:03.196772 containerd[1444]: time="2024-08-05T22:49:03.194437693Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 5 22:49:03.197946 containerd[1444]: time="2024-08-05T22:49:03.197924973Z" level=info msg="containerd successfully booted in 0.078424s"
Aug 5 22:49:03.198009 systemd[1]: Started containerd.service - containerd container runtime.
Aug 5 22:49:03.322828 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 5 22:49:03.346727 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 5 22:49:03.359439 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 5 22:49:03.371252 systemd[1]: issuegen.service: Deactivated successfully. Aug 5 22:49:03.371480 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 22:49:03.382465 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 5 22:49:03.396122 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 5 22:49:03.407509 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 22:49:03.419649 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 5 22:49:03.423739 systemd[1]: Reached target getty.target - Login Prompts. Aug 5 22:49:03.457258 systemd-networkd[1359]: eth0: Gained IPv6LL Aug 5 22:49:03.457980 systemd-timesyncd[1361]: Network configuration changed, trying to establish connection. Aug 5 22:49:03.460293 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 5 22:49:03.462641 systemd[1]: Reached target network-online.target - Network is Online. Aug 5 22:49:03.472433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:49:03.476598 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 5 22:49:03.534222 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 5 22:49:03.577860 tar[1433]: linux-amd64/LICENSE Aug 5 22:49:03.577860 tar[1433]: linux-amd64/README.md Aug 5 22:49:03.588908 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 22:49:05.488808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:49:05.501817 (kubelet)[1529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:49:06.259820 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 22:49:06.272260 systemd[1]: Started sshd@0-172.24.4.9:22-172.24.4.1:43852.service - OpenSSH per-connection server daemon (172.24.4.1:43852). 
Aug 5 22:49:07.025697 kubelet[1529]: E0805 22:49:07.025613 1529 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:49:07.028729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:49:07.028929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:49:07.029469 systemd[1]: kubelet.service: Consumed 2.237s CPU time.
Aug 5 22:49:07.605917 sshd[1538]: Accepted publickey for core from 172.24.4.1 port 43852 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:49:07.610327 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:49:07.634779 systemd-logind[1427]: New session 1 of user core.
Aug 5 22:49:07.638673 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 5 22:49:07.654817 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 5 22:49:07.682504 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 5 22:49:07.697342 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 5 22:49:07.719511 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:49:08.132907 systemd[1545]: Queued start job for default target default.target.
Aug 5 22:49:08.140596 systemd[1545]: Created slice app.slice - User Application Slice.
Aug 5 22:49:08.140635 systemd[1545]: Reached target paths.target - Paths.
Aug 5 22:49:08.140659 systemd[1545]: Reached target timers.target - Timers.
Aug 5 22:49:08.142033 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 5 22:49:08.154215 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 5 22:49:08.154383 systemd[1545]: Reached target sockets.target - Sockets.
Aug 5 22:49:08.154408 systemd[1545]: Reached target basic.target - Basic System.
Aug 5 22:49:08.154469 systemd[1545]: Reached target default.target - Main User Target.
Aug 5 22:49:08.154507 systemd[1545]: Startup finished in 421ms.
Aug 5 22:49:08.154778 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 5 22:49:08.164333 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 5 22:49:08.594863 systemd[1]: Started sshd@1-172.24.4.9:22-172.24.4.1:35732.service - OpenSSH per-connection server daemon (172.24.4.1:35732).
Aug 5 22:49:08.699684 login[1507]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Aug 5 22:49:08.704213 login[1508]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Aug 5 22:49:08.710205 systemd-logind[1427]: New session 2 of user core.
Aug 5 22:49:08.721493 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 5 22:49:08.728555 systemd-logind[1427]: New session 3 of user core.
Aug 5 22:49:08.735577 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 5 22:49:09.568422 coreos-metadata[1417]: Aug 05 22:49:09.568 WARN failed to locate config-drive, using the metadata service API instead
Aug 5 22:49:09.588560 coreos-metadata[1417]: Aug 05 22:49:09.588 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Aug 5 22:49:09.852758 sshd[1560]: Accepted publickey for core from 172.24.4.1 port 35732 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:49:09.856009 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:49:09.867514 systemd-logind[1427]: New session 4 of user core.
Aug 5 22:49:09.880587 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 5 22:49:09.921434 coreos-metadata[1417]: Aug 05 22:49:09.921 INFO Fetch successful
Aug 5 22:49:09.921864 coreos-metadata[1417]: Aug 05 22:49:09.921 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Aug 5 22:49:09.937630 coreos-metadata[1417]: Aug 05 22:49:09.937 INFO Fetch successful
Aug 5 22:49:09.937978 coreos-metadata[1417]: Aug 05 22:49:09.937 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Aug 5 22:49:09.953459 coreos-metadata[1417]: Aug 05 22:49:09.953 INFO Fetch successful
Aug 5 22:49:09.953459 coreos-metadata[1417]: Aug 05 22:49:09.953 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Aug 5 22:49:09.971661 coreos-metadata[1417]: Aug 05 22:49:09.971 INFO Fetch successful
Aug 5 22:49:09.971661 coreos-metadata[1417]: Aug 05 22:49:09.971 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Aug 5 22:49:09.987148 coreos-metadata[1417]: Aug 05 22:49:09.986 INFO Fetch successful
Aug 5 22:49:09.987148 coreos-metadata[1417]: Aug 05 22:49:09.986 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Aug 5 22:49:10.000384 coreos-metadata[1417]: Aug 05 22:49:09.999 INFO Fetch successful
Aug 5 22:49:10.047297 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 5 22:49:10.048925 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 22:49:10.184632 coreos-metadata[1487]: Aug 05 22:49:10.184 WARN failed to locate config-drive, using the metadata service API instead
Aug 5 22:49:10.226908 coreos-metadata[1487]: Aug 05 22:49:10.226 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Aug 5 22:49:10.245216 coreos-metadata[1487]: Aug 05 22:49:10.244 INFO Fetch successful
Aug 5 22:49:10.245216 coreos-metadata[1487]: Aug 05 22:49:10.245 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Aug 5 22:49:10.262276 coreos-metadata[1487]: Aug 05 22:49:10.262 INFO Fetch successful
Aug 5 22:49:10.268171 unknown[1487]: wrote ssh authorized keys file for user: core
Aug 5 22:49:10.311549 update-ssh-keys[1587]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 22:49:10.314575 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 5 22:49:10.319002 systemd[1]: Finished sshkeys.service.
Aug 5 22:49:10.321602 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 5 22:49:10.321942 systemd[1]: Startup finished in 1.166s (kernel) + 17.437s (initrd) + 12.136s (userspace) = 30.740s.
Aug 5 22:49:10.508529 sshd[1560]: pam_unix(sshd:session): session closed for user core
Aug 5 22:49:10.518890 systemd[1]: sshd@1-172.24.4.9:22-172.24.4.1:35732.service: Deactivated successfully.
Aug 5 22:49:10.521861 systemd[1]: session-4.scope: Deactivated successfully.
Aug 5 22:49:10.525715 systemd-logind[1427]: Session 4 logged out. Waiting for processes to exit.
Aug 5 22:49:10.531629 systemd[1]: Started sshd@2-172.24.4.9:22-172.24.4.1:35740.service - OpenSSH per-connection server daemon (172.24.4.1:35740).
Aug 5 22:49:10.534619 systemd-logind[1427]: Removed session 4.
Aug 5 22:49:11.922466 sshd[1594]: Accepted publickey for core from 172.24.4.1 port 35740 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:49:11.924567 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:49:11.932468 systemd-logind[1427]: New session 5 of user core.
Aug 5 22:49:11.940854 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 5 22:49:12.719836 sshd[1594]: pam_unix(sshd:session): session closed for user core
Aug 5 22:49:12.727004 systemd-logind[1427]: Session 5 logged out. Waiting for processes to exit.
Aug 5 22:49:12.728851 systemd[1]: sshd@2-172.24.4.9:22-172.24.4.1:35740.service: Deactivated successfully.
Aug 5 22:49:12.733010 systemd[1]: session-5.scope: Deactivated successfully.
Aug 5 22:49:12.735490 systemd-logind[1427]: Removed session 5.
Aug 5 22:49:17.127251 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:49:17.135640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:49:17.305241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:49:17.307715 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:49:17.808884 kubelet[1608]: E0805 22:49:17.808795 1608 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:49:17.816368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:49:17.816507 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:49:22.738588 systemd[1]: Started sshd@3-172.24.4.9:22-172.24.4.1:60514.service - OpenSSH per-connection server daemon (172.24.4.1:60514).
Aug 5 22:49:23.965327 sshd[1617]: Accepted publickey for core from 172.24.4.1 port 60514 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:49:23.967956 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:49:23.979177 systemd-logind[1427]: New session 6 of user core.
Aug 5 22:49:23.986370 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 5 22:49:24.791036 sshd[1617]: pam_unix(sshd:session): session closed for user core
Aug 5 22:49:24.803027 systemd[1]: sshd@3-172.24.4.9:22-172.24.4.1:60514.service: Deactivated successfully.
Aug 5 22:49:24.807027 systemd[1]: session-6.scope: Deactivated successfully.
Aug 5 22:49:24.811536 systemd-logind[1427]: Session 6 logged out. Waiting for processes to exit.
Aug 5 22:49:24.819637 systemd[1]: Started sshd@4-172.24.4.9:22-172.24.4.1:39950.service - OpenSSH per-connection server daemon (172.24.4.1:39950).
Aug 5 22:49:24.822678 systemd-logind[1427]: Removed session 6.
Aug 5 22:49:26.050124 sshd[1624]: Accepted publickey for core from 172.24.4.1 port 39950 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:49:26.052730 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:49:26.063172 systemd-logind[1427]: New session 7 of user core.
Aug 5 22:49:26.066365 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 5 22:49:26.728824 sshd[1624]: pam_unix(sshd:session): session closed for user core
Aug 5 22:49:26.740345 systemd[1]: sshd@4-172.24.4.9:22-172.24.4.1:39950.service: Deactivated successfully.
Aug 5 22:49:26.743456 systemd[1]: session-7.scope: Deactivated successfully.
Aug 5 22:49:26.747474 systemd-logind[1427]: Session 7 logged out. Waiting for processes to exit.
Aug 5 22:49:26.752680 systemd[1]: Started sshd@5-172.24.4.9:22-172.24.4.1:39952.service - OpenSSH per-connection server daemon (172.24.4.1:39952).
Aug 5 22:49:26.755747 systemd-logind[1427]: Removed session 7.
Aug 5 22:49:27.877185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 22:49:27.888461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:49:28.242306 sshd[1631]: Accepted publickey for core from 172.24.4.1 port 39952 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:49:28.247220 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:49:28.260258 systemd-logind[1427]: New session 8 of user core.
Aug 5 22:49:28.265764 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 5 22:49:28.303387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:49:28.315995 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:49:28.395842 kubelet[1642]: E0805 22:49:28.395731 1642 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:49:28.401921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:49:28.402373 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:49:29.027397 sshd[1631]: pam_unix(sshd:session): session closed for user core
Aug 5 22:49:29.038441 systemd[1]: sshd@5-172.24.4.9:22-172.24.4.1:39952.service: Deactivated successfully.
Aug 5 22:49:29.042021 systemd[1]: session-8.scope: Deactivated successfully.
Aug 5 22:49:29.045531 systemd-logind[1427]: Session 8 logged out. Waiting for processes to exit.
Aug 5 22:49:29.051801 systemd[1]: Started sshd@6-172.24.4.9:22-172.24.4.1:39968.service - OpenSSH per-connection server daemon (172.24.4.1:39968).
Aug 5 22:49:29.054804 systemd-logind[1427]: Removed session 8.
Aug 5 22:49:30.610482 sshd[1655]: Accepted publickey for core from 172.24.4.1 port 39968 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:49:30.613672 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:49:30.626212 systemd-logind[1427]: New session 9 of user core.
Aug 5 22:49:30.636408 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 5 22:49:31.236237 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 5 22:49:31.236853 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:49:31.260869 sudo[1658]: pam_unix(sudo:session): session closed for user root
Aug 5 22:49:31.495552 sshd[1655]: pam_unix(sshd:session): session closed for user core
Aug 5 22:49:31.504970 systemd[1]: sshd@6-172.24.4.9:22-172.24.4.1:39968.service: Deactivated successfully.
Aug 5 22:49:31.508305 systemd[1]: session-9.scope: Deactivated successfully.
Aug 5 22:49:31.509850 systemd-logind[1427]: Session 9 logged out. Waiting for processes to exit.
Aug 5 22:49:31.522635 systemd[1]: Started sshd@7-172.24.4.9:22-172.24.4.1:39984.service - OpenSSH per-connection server daemon (172.24.4.1:39984).
Aug 5 22:49:31.524865 systemd-logind[1427]: Removed session 9.
Aug 5 22:49:33.088219 sshd[1663]: Accepted publickey for core from 172.24.4.1 port 39984 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:49:33.091265 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:49:33.107138 systemd-logind[1427]: New session 10 of user core.
Aug 5 22:49:33.118492 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 5 22:49:33.543729 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 5 22:49:33.545337 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:49:33.553711 sudo[1667]: pam_unix(sudo:session): session closed for user root
Aug 5 22:49:33.565268 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 5 22:49:33.565851 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:49:33.593693 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 5 22:49:33.599873 auditctl[1670]: No rules
Aug 5 22:49:33.600734 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 5 22:49:33.601232 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 5 22:49:33.611456 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:49:34.363692 systemd-resolved[1360]: Clock change detected. Flushing caches.
Aug 5 22:49:34.365204 systemd-timesyncd[1361]: Contacted time server 162.159.200.1:123 (2.flatcar.pool.ntp.org).
Aug 5 22:49:34.365343 systemd-timesyncd[1361]: Initial clock synchronization to Mon 2024-08-05 22:49:34.363598 UTC.
Aug 5 22:49:34.392498 augenrules[1688]: No rules
Aug 5 22:49:34.395615 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:49:34.397953 sudo[1666]: pam_unix(sudo:session): session closed for user root
Aug 5 22:49:34.580667 sshd[1663]: pam_unix(sshd:session): session closed for user core
Aug 5 22:49:34.593013 systemd[1]: sshd@7-172.24.4.9:22-172.24.4.1:39984.service: Deactivated successfully.
Aug 5 22:49:34.596234 systemd[1]: session-10.scope: Deactivated successfully.
Aug 5 22:49:34.600796 systemd-logind[1427]: Session 10 logged out. Waiting for processes to exit.
Aug 5 22:49:34.608075 systemd[1]: Started sshd@8-172.24.4.9:22-172.24.4.1:51974.service - OpenSSH per-connection server daemon (172.24.4.1:51974).
Aug 5 22:49:34.611450 systemd-logind[1427]: Removed session 10.
Aug 5 22:49:36.134732 sshd[1696]: Accepted publickey for core from 172.24.4.1 port 51974 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:49:36.137534 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:49:36.147125 systemd-logind[1427]: New session 11 of user core.
Aug 5 22:49:36.156804 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 5 22:49:36.733172 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 5 22:49:36.733907 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:49:37.034758 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 5 22:49:37.051201 (dockerd)[1709]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 5 22:49:37.675444 dockerd[1709]: time="2024-08-05T22:49:37.675341871Z" level=info msg="Starting up"
Aug 5 22:49:37.746136 dockerd[1709]: time="2024-08-05T22:49:37.745884394Z" level=info msg="Loading containers: start."
Aug 5 22:49:37.890111 kernel: Initializing XFRM netlink socket
Aug 5 22:49:38.045654 systemd-networkd[1359]: docker0: Link UP
Aug 5 22:49:38.069314 dockerd[1709]: time="2024-08-05T22:49:38.069277087Z" level=info msg="Loading containers: done."
Aug 5 22:49:38.199442 dockerd[1709]: time="2024-08-05T22:49:38.199364826Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 5 22:49:38.199750 dockerd[1709]: time="2024-08-05T22:49:38.199595880Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Aug 5 22:49:38.199750 dockerd[1709]: time="2024-08-05T22:49:38.199709373Z" level=info msg="Daemon has completed initialization"
Aug 5 22:49:38.251944 dockerd[1709]: time="2024-08-05T22:49:38.251845824Z" level=info msg="API listen on /run/docker.sock"
Aug 5 22:49:38.253026 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 5 22:49:39.344220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 5 22:49:39.355924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:49:39.661797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:49:39.664762 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:49:39.969086 kubelet[1842]: E0805 22:49:39.968889 1842 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:49:39.973920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:49:39.974668 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:49:40.995356 containerd[1444]: time="2024-08-05T22:49:40.995067586Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\""
Aug 5 22:49:41.778031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3889908100.mount: Deactivated successfully.
Aug 5 22:49:43.946218 containerd[1444]: time="2024-08-05T22:49:43.945886416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:43.948547 containerd[1444]: time="2024-08-05T22:49:43.947714445Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.7: active requests=0, bytes read=35232404"
Aug 5 22:49:43.950191 containerd[1444]: time="2024-08-05T22:49:43.950128522Z" level=info msg="ImageCreate event name:\"sha256:a2e0d7fa8464a06b07519d78f53fef101bb1bcf716a85f2ac8b397f1a0025bea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:43.957043 containerd[1444]: time="2024-08-05T22:49:43.956966998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:43.958709 containerd[1444]: time="2024-08-05T22:49:43.958161638Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.7\" with image id \"sha256:a2e0d7fa8464a06b07519d78f53fef101bb1bcf716a85f2ac8b397f1a0025bea\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\", size \"35229196\" in 2.96305029s"
Aug 5 22:49:43.958709 containerd[1444]: time="2024-08-05T22:49:43.958204939Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\" returns image reference \"sha256:a2e0d7fa8464a06b07519d78f53fef101bb1bcf716a85f2ac8b397f1a0025bea\""
Aug 5 22:49:43.993883 containerd[1444]: time="2024-08-05T22:49:43.993846445Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\""
Aug 5 22:49:47.268705 containerd[1444]: time="2024-08-05T22:49:47.268361848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:47.271525 containerd[1444]: time="2024-08-05T22:49:47.271170686Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.7: active requests=0, bytes read=32204832"
Aug 5 22:49:47.273053 containerd[1444]: time="2024-08-05T22:49:47.272959581Z" level=info msg="ImageCreate event name:\"sha256:32fe966e5c2b2a05d6b6a56a63a60e09d4c227ec1742d68f921c0b72e23537f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:47.280850 containerd[1444]: time="2024-08-05T22:49:47.280716770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:47.284517 containerd[1444]: time="2024-08-05T22:49:47.283769645Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.7\" with image id \"sha256:32fe966e5c2b2a05d6b6a56a63a60e09d4c227ec1742d68f921c0b72e23537f8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\", size \"33754770\" in 3.289738674s"
Aug 5 22:49:47.284517 containerd[1444]: time="2024-08-05T22:49:47.283845718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\" returns image reference \"sha256:32fe966e5c2b2a05d6b6a56a63a60e09d4c227ec1742d68f921c0b72e23537f8\""
Aug 5 22:49:47.335537 containerd[1444]: time="2024-08-05T22:49:47.335312684Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\""
Aug 5 22:49:48.310443 update_engine[1428]: I0805 22:49:48.310320 1428 update_attempter.cc:509] Updating boot flags...
Aug 5 22:49:48.436605 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1935)
Aug 5 22:49:48.514318 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1937)
Aug 5 22:49:48.567505 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1937)
Aug 5 22:49:49.474506 containerd[1444]: time="2024-08-05T22:49:49.474378131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:49.477059 containerd[1444]: time="2024-08-05T22:49:49.476955375Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.7: active requests=0, bytes read=17320811"
Aug 5 22:49:49.478331 containerd[1444]: time="2024-08-05T22:49:49.478215047Z" level=info msg="ImageCreate event name:\"sha256:9cffb486021b39220589cbd71b6537e6f9cafdede1eba315b4b0dc83e2f4fc8e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:49.486248 containerd[1444]: time="2024-08-05T22:49:49.486176669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:49.496505 containerd[1444]: time="2024-08-05T22:49:49.495507018Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.7\" with image id \"sha256:9cffb486021b39220589cbd71b6537e6f9cafdede1eba315b4b0dc83e2f4fc8e\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\", size \"18870767\" in 2.160079889s"
Aug 5 22:49:49.496505 containerd[1444]: time="2024-08-05T22:49:49.495604100Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\" returns image reference \"sha256:9cffb486021b39220589cbd71b6537e6f9cafdede1eba315b4b0dc83e2f4fc8e\""
Aug 5 22:49:49.551664 containerd[1444]: time="2024-08-05T22:49:49.551588399Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\""
Aug 5 22:49:50.094439 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Aug 5 22:49:50.108656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:49:50.247624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:49:50.252221 (kubelet)[1956]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:49:50.390546 kubelet[1956]: E0805 22:49:50.388376 1956 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:49:50.393576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:49:50.393725 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:49:51.391375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1892762114.mount: Deactivated successfully.
Aug 5 22:49:52.370671 containerd[1444]: time="2024-08-05T22:49:52.370551493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:52.372656 containerd[1444]: time="2024-08-05T22:49:52.372489989Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.7: active requests=0, bytes read=28600096"
Aug 5 22:49:52.374605 containerd[1444]: time="2024-08-05T22:49:52.374493416Z" level=info msg="ImageCreate event name:\"sha256:cc8c46cf9d741d1e8a357e5899f298d2f4ac4d890a2d248026b57e130e91cd07\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:52.379906 containerd[1444]: time="2024-08-05T22:49:52.379744385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:52.382103 containerd[1444]: time="2024-08-05T22:49:52.381832591Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.7\" with image id \"sha256:cc8c46cf9d741d1e8a357e5899f298d2f4ac4d890a2d248026b57e130e91cd07\", repo tag \"registry.k8s.io/kube-proxy:v1.29.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\", size \"28599107\" in 2.830164042s"
Aug 5 22:49:52.382103 containerd[1444]: time="2024-08-05T22:49:52.381903825Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\" returns image reference \"sha256:cc8c46cf9d741d1e8a357e5899f298d2f4ac4d890a2d248026b57e130e91cd07\""
Aug 5 22:49:52.435786 containerd[1444]: time="2024-08-05T22:49:52.435724996Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Aug 5 22:49:53.138528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3893294335.mount: Deactivated successfully.
Aug 5 22:49:55.126250 containerd[1444]: time="2024-08-05T22:49:55.126155899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:55.131636 containerd[1444]: time="2024-08-05T22:49:55.131437915Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Aug 5 22:49:55.134334 containerd[1444]: time="2024-08-05T22:49:55.134223960Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:55.143453 containerd[1444]: time="2024-08-05T22:49:55.143268654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:55.147453 containerd[1444]: time="2024-08-05T22:49:55.146855661Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.710734892s"
Aug 5 22:49:55.147453 containerd[1444]: time="2024-08-05T22:49:55.146949196Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Aug 5 22:49:55.200575 containerd[1444]: time="2024-08-05T22:49:55.200432975Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 22:49:55.809967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3511291793.mount: Deactivated successfully.
Aug 5 22:49:55.821581 containerd[1444]: time="2024-08-05T22:49:55.821415177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:55.823616 containerd[1444]: time="2024-08-05T22:49:55.823392285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Aug 5 22:49:55.824905 containerd[1444]: time="2024-08-05T22:49:55.824760311Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:55.833539 containerd[1444]: time="2024-08-05T22:49:55.833247699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:49:55.835693 containerd[1444]: time="2024-08-05T22:49:55.835365832Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 634.788606ms"
Aug 5 22:49:55.835693 containerd[1444]: time="2024-08-05T22:49:55.835489614Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Aug 5 22:49:55.875956 containerd[1444]: time="2024-08-05T22:49:55.875827067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Aug 5 22:49:56.840821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3800969622.mount: Deactivated successfully.
Aug 5 22:50:00.572795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Aug 5 22:50:00.584936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:50:01.690801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:50:01.705053 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:50:01.946004 kubelet[2087]: E0805 22:50:01.945698 2087 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:50:01.948564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:50:01.948732 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:50:02.739056 containerd[1444]: time="2024-08-05T22:50:02.738836900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:50:02.753822 containerd[1444]: time="2024-08-05T22:50:02.753674287Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Aug 5 22:50:02.756928 containerd[1444]: time="2024-08-05T22:50:02.756775163Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:50:02.767228 containerd[1444]: time="2024-08-05T22:50:02.767038161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:50:02.771026 containerd[1444]: time="2024-08-05T22:50:02.770713504Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 6.894803822s"
Aug 5 22:50:02.771026 containerd[1444]: time="2024-08-05T22:50:02.770798073Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Aug 5 22:50:07.668558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:50:07.681057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:50:07.722924 systemd[1]: Reloading requested from client PID 2163 ('systemctl') (unit session-11.scope)...
Aug 5 22:50:07.722941 systemd[1]: Reloading...
Aug 5 22:50:07.845516 zram_generator::config[2197]: No configuration found.
Aug 5 22:50:08.178888 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:50:08.264682 systemd[1]: Reloading finished in 541 ms.
Aug 5 22:50:08.319439 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 5 22:50:08.319540 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 5 22:50:08.320021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:50:08.334212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:50:08.438772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:50:08.442027 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:50:08.957962 kubelet[2266]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:50:08.958569 kubelet[2266]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:50:08.958569 kubelet[2266]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:50:08.958569 kubelet[2266]: I0805 22:50:08.958411 2266 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:50:09.891200 kubelet[2266]: I0805 22:50:09.891087 2266 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Aug 5 22:50:09.891200 kubelet[2266]: I0805 22:50:09.891143 2266 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:50:09.891614 kubelet[2266]: I0805 22:50:09.891502 2266 server.go:919] "Client rotation is on, will bootstrap in background"
Aug 5 22:50:09.928506 kubelet[2266]: I0805 22:50:09.927191 2266 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:50:09.928506 kubelet[2266]: E0805 22:50:09.928084 2266 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:09.941531 kubelet[2266]: I0805 22:50:09.941471 2266 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:50:09.943131 kubelet[2266]: I0805 22:50:09.943066 2266 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:50:09.944491 kubelet[2266]: I0805 22:50:09.944413 2266 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:50:09.945153 kubelet[2266]: I0805 22:50:09.945107 2266 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:50:09.945153 kubelet[2266]: I0805 22:50:09.945131 2266 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:50:09.945289 kubelet[2266]: I0805 22:50:09.945278 2266 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:50:09.945437 kubelet[2266]: I0805 22:50:09.945405 2266 kubelet.go:396] "Attempting to sync node with API server"
Aug 5 22:50:09.945437 kubelet[2266]: I0805 22:50:09.945423 2266 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:50:09.946023 kubelet[2266]: I0805 22:50:09.945449 2266 kubelet.go:312] "Adding apiserver pod source"
Aug 5 22:50:09.946023 kubelet[2266]: I0805 22:50:09.945481 2266 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:50:09.948174 kubelet[2266]: W0805 22:50:09.947989 2266 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:09.948174 kubelet[2266]: E0805 22:50:09.948062 2266 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:09.948174 kubelet[2266]: W0805 22:50:09.948118 2266 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-1-0-4-e6fc6d4d35.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:09.948174 kubelet[2266]: E0805 22:50:09.948150 2266 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-1-0-4-e6fc6d4d35.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:09.948758 kubelet[2266]: I0805 22:50:09.948510 2266 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Aug 5 22:50:09.959589 kubelet[2266]: I0805 22:50:09.958759 2266 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 5 22:50:09.959589 kubelet[2266]: W0805 22:50:09.959120 2266 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 22:50:09.960693 kubelet[2266]: I0805 22:50:09.960430 2266 server.go:1256] "Started kubelet"
Aug 5 22:50:09.961527 kubelet[2266]: I0805 22:50:09.961098 2266 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:50:09.963257 kubelet[2266]: I0805 22:50:09.962895 2266 server.go:461] "Adding debug handlers to kubelet server"
Aug 5 22:50:09.971829 kubelet[2266]: I0805 22:50:09.971730 2266 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 5 22:50:09.972991 kubelet[2266]: I0805 22:50:09.972499 2266 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:50:09.972991 kubelet[2266]: I0805 22:50:09.972822 2266 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:50:09.977325 kubelet[2266]: E0805 22:50:09.977251 2266 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.9:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.9:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4012-1-0-4-e6fc6d4d35.novalocal.17e8f6cb2cd84c39 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012-1-0-4-e6fc6d4d35.novalocal,UID:ci-4012-1-0-4-e6fc6d4d35.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012-1-0-4-e6fc6d4d35.novalocal,},FirstTimestamp:2024-08-05 22:50:09.960381497 +0000 UTC m=+1.513350294,LastTimestamp:2024-08-05 22:50:09.960381497 +0000 UTC m=+1.513350294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012-1-0-4-e6fc6d4d35.novalocal,}"
Aug 5 22:50:09.987479 kubelet[2266]: I0805 22:50:09.982160 2266 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:50:09.990382 kubelet[2266]: E0805 22:50:09.989881 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-1-0-4-e6fc6d4d35.novalocal?timeout=10s\": dial tcp 172.24.4.9:6443: connect: connection refused" interval="200ms"
Aug 5 22:50:09.990382 kubelet[2266]: I0805 22:50:09.989964 2266 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 22:50:09.990382 kubelet[2266]: W0805 22:50:09.990322 2266 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:09.990382 kubelet[2266]: E0805 22:50:09.990364 2266 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:09.992174 kubelet[2266]: I0805 22:50:09.991740 2266 factory.go:221] Registration of the systemd container factory successfully
Aug 5 22:50:09.992174 kubelet[2266]: I0805 22:50:09.991814 2266 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 5 22:50:09.997954 kubelet[2266]: I0805 22:50:09.997927 2266 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 22:50:09.998442 kubelet[2266]: I0805 22:50:09.998429 2266 factory.go:221] Registration of the containerd container factory successfully
Aug 5 22:50:10.011511 kubelet[2266]: I0805 22:50:10.011358 2266 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:50:10.014654 kubelet[2266]: E0805 22:50:10.013734 2266 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 22:50:10.017088 kubelet[2266]: I0805 22:50:10.016386 2266 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:50:10.017088 kubelet[2266]: I0805 22:50:10.016436 2266 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:50:10.017088 kubelet[2266]: I0805 22:50:10.016484 2266 kubelet.go:2329] "Starting kubelet main sync loop"
Aug 5 22:50:10.017088 kubelet[2266]: E0805 22:50:10.016538 2266 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:50:10.019050 kubelet[2266]: W0805 22:50:10.019008 2266 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:10.019181 kubelet[2266]: E0805 22:50:10.019169 2266 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:10.030233 kubelet[2266]: I0805 22:50:10.030206 2266 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:50:10.030233 kubelet[2266]: I0805 22:50:10.030230 2266 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:50:10.030370 kubelet[2266]: I0805 22:50:10.030299 2266 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:50:10.035205 kubelet[2266]: I0805 22:50:10.035175 2266 policy_none.go:49] "None policy: Start"
Aug 5 22:50:10.035693 kubelet[2266]: I0805 22:50:10.035671 2266 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 5 22:50:10.035792 kubelet[2266]: I0805 22:50:10.035726 2266 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:50:10.051155 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 5 22:50:10.061879 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 5 22:50:10.065066 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 5 22:50:10.076327 kubelet[2266]: I0805 22:50:10.076299 2266 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:50:10.076327 kubelet[2266]: I0805 22:50:10.076593 2266 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:50:10.078871 kubelet[2266]: E0805 22:50:10.078800 2266 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" not found"
Aug 5 22:50:10.085027 kubelet[2266]: I0805 22:50:10.085012 2266 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.085614 kubelet[2266]: E0805 22:50:10.085592 2266 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.9:6443/api/v1/nodes\": dial tcp 172.24.4.9:6443: connect: connection refused" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.117470 kubelet[2266]: I0805 22:50:10.117418 2266 topology_manager.go:215] "Topology Admit Handler" podUID="2e6cc1e422e679acabeb5ec9196f3bff" podNamespace="kube-system" podName="kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.123292 kubelet[2266]: I0805 22:50:10.122860 2266 topology_manager.go:215] "Topology Admit Handler" podUID="075464afd0df46ba5b66ede948ece515" podNamespace="kube-system" podName="kube-scheduler-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.126110 kubelet[2266]: I0805 22:50:10.125809 2266 topology_manager.go:215] "Topology Admit Handler" podUID="5081bd23fc08a0fa642a406845a5b731" podNamespace="kube-system" podName="kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.141823 systemd[1]: Created slice kubepods-burstable-pod2e6cc1e422e679acabeb5ec9196f3bff.slice - libcontainer container kubepods-burstable-pod2e6cc1e422e679acabeb5ec9196f3bff.slice.
Aug 5 22:50:10.170300 systemd[1]: Created slice kubepods-burstable-pod075464afd0df46ba5b66ede948ece515.slice - libcontainer container kubepods-burstable-pod075464afd0df46ba5b66ede948ece515.slice.
Aug 5 22:50:10.190373 systemd[1]: Created slice kubepods-burstable-pod5081bd23fc08a0fa642a406845a5b731.slice - libcontainer container kubepods-burstable-pod5081bd23fc08a0fa642a406845a5b731.slice.
Aug 5 22:50:10.191632 kubelet[2266]: E0805 22:50:10.191194 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-1-0-4-e6fc6d4d35.novalocal?timeout=10s\": dial tcp 172.24.4.9:6443: connect: connection refused" interval="400ms"
Aug 5 22:50:10.288688 kubelet[2266]: I0805 22:50:10.288635 2266 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.289420 kubelet[2266]: E0805 22:50:10.289353 2266 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.9:6443/api/v1/nodes\": dial tcp 172.24.4.9:6443: connect: connection refused" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.291668 kubelet[2266]: I0805 22:50:10.291593 2266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5081bd23fc08a0fa642a406845a5b731-ca-certs\") pod \"kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"5081bd23fc08a0fa642a406845a5b731\") " pod="kube-system/kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.291785 kubelet[2266]: I0805 22:50:10.291694 2266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5081bd23fc08a0fa642a406845a5b731-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"5081bd23fc08a0fa642a406845a5b731\") " pod="kube-system/kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.291785 kubelet[2266]: I0805 22:50:10.291756 2266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e6cc1e422e679acabeb5ec9196f3bff-ca-certs\") pod \"kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"2e6cc1e422e679acabeb5ec9196f3bff\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.291909 kubelet[2266]: I0805 22:50:10.291818 2266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/075464afd0df46ba5b66ede948ece515-kubeconfig\") pod \"kube-scheduler-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"075464afd0df46ba5b66ede948ece515\") " pod="kube-system/kube-scheduler-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.291909 kubelet[2266]: I0805 22:50:10.291874 2266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5081bd23fc08a0fa642a406845a5b731-k8s-certs\") pod \"kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"5081bd23fc08a0fa642a406845a5b731\") " pod="kube-system/kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.292018 kubelet[2266]: I0805 22:50:10.291931 2266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e6cc1e422e679acabeb5ec9196f3bff-flexvolume-dir\") pod \"kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"2e6cc1e422e679acabeb5ec9196f3bff\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.292018 kubelet[2266]: I0805 22:50:10.292005 2266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e6cc1e422e679acabeb5ec9196f3bff-k8s-certs\") pod \"kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"2e6cc1e422e679acabeb5ec9196f3bff\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.292121 kubelet[2266]: I0805 22:50:10.292069 2266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e6cc1e422e679acabeb5ec9196f3bff-kubeconfig\") pod \"kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"2e6cc1e422e679acabeb5ec9196f3bff\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.292185 kubelet[2266]: I0805 22:50:10.292133 2266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e6cc1e422e679acabeb5ec9196f3bff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"2e6cc1e422e679acabeb5ec9196f3bff\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.466635 containerd[1444]: time="2024-08-05T22:50:10.465826084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal,Uid:2e6cc1e422e679acabeb5ec9196f3bff,Namespace:kube-system,Attempt:0,}"
Aug 5 22:50:10.547988 containerd[1444]: time="2024-08-05T22:50:10.547283704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal,Uid:5081bd23fc08a0fa642a406845a5b731,Namespace:kube-system,Attempt:0,}"
Aug 5 22:50:10.551203 containerd[1444]: time="2024-08-05T22:50:10.551102308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012-1-0-4-e6fc6d4d35.novalocal,Uid:075464afd0df46ba5b66ede948ece515,Namespace:kube-system,Attempt:0,}"
Aug 5 22:50:10.592268 kubelet[2266]: E0805 22:50:10.592200 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-1-0-4-e6fc6d4d35.novalocal?timeout=10s\": dial tcp 172.24.4.9:6443: connect: connection refused" interval="800ms"
Aug 5 22:50:10.694911 kubelet[2266]: I0805 22:50:10.693902 2266 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:10.694911 kubelet[2266]: E0805 22:50:10.694432 2266 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.9:6443/api/v1/nodes\": dial tcp 172.24.4.9:6443: connect: connection refused" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:11.244775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3295467388.mount: Deactivated successfully.
Aug 5 22:50:11.257967 containerd[1444]: time="2024-08-05T22:50:11.257857479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:50:11.261172 containerd[1444]: time="2024-08-05T22:50:11.261067195Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 22:50:11.263842 containerd[1444]: time="2024-08-05T22:50:11.263773315Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:50:11.269689 containerd[1444]: time="2024-08-05T22:50:11.269206575Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 22:50:11.269689 containerd[1444]: time="2024-08-05T22:50:11.269347168Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Aug 5 22:50:11.269689 containerd[1444]: time="2024-08-05T22:50:11.269552656Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:50:11.280196 containerd[1444]: time="2024-08-05T22:50:11.280103750Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:50:11.284299 containerd[1444]: time="2024-08-05T22:50:11.283089085Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 735.536206ms"
Aug 5 22:50:11.291545 containerd[1444]: time="2024-08-05T22:50:11.289238142Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 823.196942ms"
Aug 5 22:50:11.296265 containerd[1444]: time="2024-08-05T22:50:11.296176171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:50:11.302569 containerd[1444]: time="2024-08-05T22:50:11.302449961Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 751.14887ms"
Aug 5 22:50:11.341265 kubelet[2266]: W0805 22:50:11.341154 2266 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:11.341965 kubelet[2266]: E0805 22:50:11.341277 2266 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:11.361789 kubelet[2266]: W0805 22:50:11.361699 2266 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:11.361789 kubelet[2266]: E0805 22:50:11.361786 2266 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:11.394075 kubelet[2266]: E0805 22:50:11.393962 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-1-0-4-e6fc6d4d35.novalocal?timeout=10s\": dial tcp 172.24.4.9:6443: connect: connection refused" interval="1.6s"
Aug 5 22:50:11.481431 kubelet[2266]: W0805 22:50:11.481280 2266 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-1-0-4-e6fc6d4d35.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:11.481815 kubelet[2266]: E0805 22:50:11.481503 2266 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-1-0-4-e6fc6d4d35.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:11.498960 kubelet[2266]: I0805 22:50:11.498556 2266 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:11.499421 kubelet[2266]: E0805 22:50:11.499232 2266 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.9:6443/api/v1/nodes\": dial tcp 172.24.4.9:6443: connect: connection refused" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:11.500414 kubelet[2266]: W0805 22:50:11.500237 2266 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:11.500414 kubelet[2266]: E0805 22:50:11.500343 2266 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:11.730416 containerd[1444]: time="2024-08-05T22:50:11.727767982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:50:11.730416 containerd[1444]: time="2024-08-05T22:50:11.728508775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:11.730416 containerd[1444]: time="2024-08-05T22:50:11.728534283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:50:11.730416 containerd[1444]: time="2024-08-05T22:50:11.728546957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:11.731555 containerd[1444]: time="2024-08-05T22:50:11.730824361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:50:11.731555 containerd[1444]: time="2024-08-05T22:50:11.730905684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:11.731770 containerd[1444]: time="2024-08-05T22:50:11.730932364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:50:11.731853 containerd[1444]: time="2024-08-05T22:50:11.731818020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:11.738254 containerd[1444]: time="2024-08-05T22:50:11.738024473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:50:11.738254 containerd[1444]: time="2024-08-05T22:50:11.738090678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:11.738254 containerd[1444]: time="2024-08-05T22:50:11.738118450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:50:11.738254 containerd[1444]: time="2024-08-05T22:50:11.738138828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:11.766698 systemd[1]: Started cri-containerd-4bc2cf0c8ee09adb9ce907334d8d0ef35bfe5df469b8cc721de51c0310f6dddd.scope - libcontainer container 4bc2cf0c8ee09adb9ce907334d8d0ef35bfe5df469b8cc721de51c0310f6dddd.
Aug 5 22:50:11.771198 systemd[1]: Started cri-containerd-111a2364c67aa1b48dc7f8a54a91c64d3a19b3e3eb09eb4957abad03039c8e63.scope - libcontainer container 111a2364c67aa1b48dc7f8a54a91c64d3a19b3e3eb09eb4957abad03039c8e63.
Aug 5 22:50:11.777019 systemd[1]: Started cri-containerd-aac9950c5fb47e06fd63e75d77d52984d1cdaf41c634f5cd488a432df403c47d.scope - libcontainer container aac9950c5fb47e06fd63e75d77d52984d1cdaf41c634f5cd488a432df403c47d.
Aug 5 22:50:11.858644 containerd[1444]: time="2024-08-05T22:50:11.858599149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal,Uid:2e6cc1e422e679acabeb5ec9196f3bff,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bc2cf0c8ee09adb9ce907334d8d0ef35bfe5df469b8cc721de51c0310f6dddd\""
Aug 5 22:50:11.864666 containerd[1444]: time="2024-08-05T22:50:11.864628780Z" level=info msg="CreateContainer within sandbox \"4bc2cf0c8ee09adb9ce907334d8d0ef35bfe5df469b8cc721de51c0310f6dddd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 5 22:50:11.866280 containerd[1444]: time="2024-08-05T22:50:11.866227147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012-1-0-4-e6fc6d4d35.novalocal,Uid:075464afd0df46ba5b66ede948ece515,Namespace:kube-system,Attempt:0,} returns sandbox id \"111a2364c67aa1b48dc7f8a54a91c64d3a19b3e3eb09eb4957abad03039c8e63\""
Aug 5 22:50:11.869174 containerd[1444]: time="2024-08-05T22:50:11.869144815Z" level=info msg="CreateContainer within sandbox \"111a2364c67aa1b48dc7f8a54a91c64d3a19b3e3eb09eb4957abad03039c8e63\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 5 22:50:11.872118 containerd[1444]: time="2024-08-05T22:50:11.872071328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal,Uid:5081bd23fc08a0fa642a406845a5b731,Namespace:kube-system,Attempt:0,} returns sandbox id \"aac9950c5fb47e06fd63e75d77d52984d1cdaf41c634f5cd488a432df403c47d\""
Aug 5 22:50:11.881368 containerd[1444]: time="2024-08-05T22:50:11.881312420Z" level=info msg="CreateContainer within sandbox \"aac9950c5fb47e06fd63e75d77d52984d1cdaf41c634f5cd488a432df403c47d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 5 22:50:11.921018 containerd[1444]: time="2024-08-05T22:50:11.920957528Z" level=info msg="CreateContainer within sandbox \"111a2364c67aa1b48dc7f8a54a91c64d3a19b3e3eb09eb4957abad03039c8e63\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6e44b3f83ca18ae97938c7129ff3b27b283aef96fc71e2788e43d396eab62065\""
Aug 5 22:50:11.923484 containerd[1444]: time="2024-08-05T22:50:11.922798300Z" level=info msg="StartContainer for \"6e44b3f83ca18ae97938c7129ff3b27b283aef96fc71e2788e43d396eab62065\""
Aug 5 22:50:11.937843 containerd[1444]: time="2024-08-05T22:50:11.937640497Z" level=info msg="CreateContainer within sandbox \"4bc2cf0c8ee09adb9ce907334d8d0ef35bfe5df469b8cc721de51c0310f6dddd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f4abafe83637d6a06e7f531947cca3a6f4a52a8a7d6ceb1bdb30479696c50b84\""
Aug 5 22:50:11.939274 containerd[1444]: time="2024-08-05T22:50:11.939223844Z" level=info msg="StartContainer for \"f4abafe83637d6a06e7f531947cca3a6f4a52a8a7d6ceb1bdb30479696c50b84\""
Aug 5 22:50:11.941089 containerd[1444]: time="2024-08-05T22:50:11.941033308Z" level=info msg="CreateContainer within sandbox \"aac9950c5fb47e06fd63e75d77d52984d1cdaf41c634f5cd488a432df403c47d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3b1694f76980802af0166db9a1189dd43d9d73acd02d051ca8e467582d81c180\""
Aug 5 22:50:11.942905 containerd[1444]: time="2024-08-05T22:50:11.942691146Z" level=info msg="StartContainer for \"3b1694f76980802af0166db9a1189dd43d9d73acd02d051ca8e467582d81c180\""
Aug 5 22:50:11.957402 systemd[1]: Started cri-containerd-6e44b3f83ca18ae97938c7129ff3b27b283aef96fc71e2788e43d396eab62065.scope - libcontainer container 6e44b3f83ca18ae97938c7129ff3b27b283aef96fc71e2788e43d396eab62065.
Aug 5 22:50:11.960086 kubelet[2266]: E0805 22:50:11.960040 2266 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.9:6443: connect: connection refused
Aug 5 22:50:12.005650 systemd[1]: Started cri-containerd-3b1694f76980802af0166db9a1189dd43d9d73acd02d051ca8e467582d81c180.scope - libcontainer container 3b1694f76980802af0166db9a1189dd43d9d73acd02d051ca8e467582d81c180.
Aug 5 22:50:12.016691 systemd[1]: Started cri-containerd-f4abafe83637d6a06e7f531947cca3a6f4a52a8a7d6ceb1bdb30479696c50b84.scope - libcontainer container f4abafe83637d6a06e7f531947cca3a6f4a52a8a7d6ceb1bdb30479696c50b84.
Aug 5 22:50:12.047154 containerd[1444]: time="2024-08-05T22:50:12.047043355Z" level=info msg="StartContainer for \"6e44b3f83ca18ae97938c7129ff3b27b283aef96fc71e2788e43d396eab62065\" returns successfully"
Aug 5 22:50:12.110403 containerd[1444]: time="2024-08-05T22:50:12.109815139Z" level=info msg="StartContainer for \"3b1694f76980802af0166db9a1189dd43d9d73acd02d051ca8e467582d81c180\" returns successfully"
Aug 5 22:50:12.110403 containerd[1444]: time="2024-08-05T22:50:12.109918674Z" level=info msg="StartContainer for \"f4abafe83637d6a06e7f531947cca3a6f4a52a8a7d6ceb1bdb30479696c50b84\" returns successfully"
Aug 5 22:50:13.101506 kubelet[2266]: I0805 22:50:13.101115 2266 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:14.457031 kubelet[2266]: E0805 22:50:14.456978 2266 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" not found" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:14.499540 kubelet[2266]: I0805 22:50:14.499238 2266 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:14.951624 kubelet[2266]: I0805 22:50:14.951074 2266 apiserver.go:52] "Watching apiserver"
Aug 5 22:50:14.999114 kubelet[2266]: I0805 22:50:14.999030 2266 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 5 22:50:15.088682 kubelet[2266]: E0805 22:50:15.088426 2266 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:15.091616 kubelet[2266]: E0805 22:50:15.091583 2266 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:17.640696 systemd[1]: Reloading requested from client PID 2538 ('systemctl') (unit session-11.scope)...
Aug 5 22:50:17.641276 systemd[1]: Reloading...
Aug 5 22:50:17.762525 zram_generator::config[2578]: No configuration found.
Aug 5 22:50:17.919205 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:50:18.028238 systemd[1]: Reloading finished in 386 ms.
Aug 5 22:50:18.070751 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:50:18.071763 kubelet[2266]: I0805 22:50:18.071242 2266 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:50:18.080952 systemd[1]: kubelet.service: Deactivated successfully.
Aug 5 22:50:18.081166 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:50:18.081220 systemd[1]: kubelet.service: Consumed 1.744s CPU time, 109.4M memory peak, 0B memory swap peak.
Aug 5 22:50:18.095957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:50:18.600803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:50:18.612013 (kubelet)[2639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:50:18.764478 kubelet[2639]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:50:18.764478 kubelet[2639]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:50:18.764478 kubelet[2639]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:50:18.764478 kubelet[2639]: I0805 22:50:18.763927 2639 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:50:18.771433 kubelet[2639]: I0805 22:50:18.771403 2639 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Aug 5 22:50:18.772156 kubelet[2639]: I0805 22:50:18.772135 2639 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:50:18.772738 kubelet[2639]: I0805 22:50:18.772715 2639 server.go:919] "Client rotation is on, will bootstrap in background"
Aug 5 22:50:18.774841 kubelet[2639]: I0805 22:50:18.774493 2639 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 5 22:50:18.779728 kubelet[2639]: I0805 22:50:18.779191 2639 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:50:18.787149 kubelet[2639]: I0805 22:50:18.786768 2639 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:50:18.787352 kubelet[2639]: I0805 22:50:18.787317 2639 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:50:18.787588 kubelet[2639]: I0805 22:50:18.787562 2639 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:50:18.787701 kubelet[2639]: I0805 22:50:18.787591 2639 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:50:18.787701 kubelet[2639]: I0805 22:50:18.787636 2639 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:50:18.787701 kubelet[2639]: I0805 22:50:18.787670 2639 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:50:18.787802 kubelet[2639]: I0805 22:50:18.787781 2639 kubelet.go:396] "Attempting to sync node with API server"
Aug 5 22:50:18.787832 kubelet[2639]: I0805 22:50:18.787803 2639 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:50:18.788862 kubelet[2639]: I0805 22:50:18.788372 2639 kubelet.go:312] "Adding apiserver pod source"
Aug 5 22:50:18.788862 kubelet[2639]: I0805 22:50:18.788398 2639 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:50:18.792204 kubelet[2639]: I0805 22:50:18.790575 2639 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Aug 5 22:50:18.792204 kubelet[2639]: I0805 22:50:18.790816 2639 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 5 22:50:18.792204 kubelet[2639]: I0805 22:50:18.791324 2639 server.go:1256] "Started kubelet"
Aug 5 22:50:18.796081 kubelet[2639]: I0805 22:50:18.795531 2639 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:50:18.798815 kubelet[2639]: I0805 22:50:18.798612 2639 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:50:18.802448 kubelet[2639]: I0805 22:50:18.800554 2639 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 5 22:50:18.802448 kubelet[2639]: I0805 22:50:18.800894 2639 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:50:18.804024 kubelet[2639]: I0805 22:50:18.803275 2639 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:50:18.804121 kubelet[2639]: I0805 22:50:18.804101 2639 server.go:461] "Adding debug handlers to kubelet server"
Aug 5 22:50:18.811860 kubelet[2639]: I0805 22:50:18.811826 2639 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 22:50:18.812114 kubelet[2639]: I0805 22:50:18.812095 2639 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 22:50:18.820762 kubelet[2639]: I0805 22:50:18.820739 2639 factory.go:221] Registration of the systemd container factory successfully
Aug 5 22:50:18.820989 kubelet[2639]: I0805 22:50:18.820968 2639 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 5 22:50:18.842716 kubelet[2639]: I0805 22:50:18.842685 2639 factory.go:221] Registration of the containerd container factory successfully
Aug 5 22:50:18.844955 kubelet[2639]: I0805 22:50:18.844921 2639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:50:18.848405 kubelet[2639]: I0805 22:50:18.848366 2639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:50:18.848405 kubelet[2639]: I0805 22:50:18.848401 2639 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:50:18.848549 kubelet[2639]: I0805 22:50:18.848424 2639 kubelet.go:2329] "Starting kubelet main sync loop"
Aug 5 22:50:18.848549 kubelet[2639]: E0805 22:50:18.848500 2639 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:50:18.904730 kubelet[2639]: I0805 22:50:18.903354 2639 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:50:18.904730 kubelet[2639]: I0805 22:50:18.904511 2639 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:50:18.904730 kubelet[2639]: I0805 22:50:18.904532 2639 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:50:18.905066 kubelet[2639]: I0805 22:50:18.904961 2639 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 5 22:50:18.905066 kubelet[2639]: I0805 22:50:18.904992 2639 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 5 22:50:18.905066 kubelet[2639]: I0805 22:50:18.905000 2639 policy_none.go:49] "None policy: Start"
Aug 5 22:50:18.907200 kubelet[2639]: I0805 22:50:18.907175 2639 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 5 22:50:18.907266 kubelet[2639]: I0805 22:50:18.907227 2639 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:50:18.907778 kubelet[2639]: I0805 22:50:18.907754 2639 state_mem.go:75] "Updated machine memory state"
Aug 5 22:50:18.916397 kubelet[2639]: I0805 22:50:18.916317 2639 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:18.927317 kubelet[2639]: I0805 22:50:18.927274 2639 kubelet_node_status.go:112] "Node was previously registered" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:18.927734 kubelet[2639]: I0805 22:50:18.927706 2639 kubelet_node_status.go:76] "Successfully
registered node" node="ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:18.932647 kubelet[2639]: I0805 22:50:18.931976 2639 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:50:18.933429 kubelet[2639]: I0805 22:50:18.932762 2639 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:50:18.949500 kubelet[2639]: I0805 22:50:18.949442 2639 topology_manager.go:215] "Topology Admit Handler" podUID="5081bd23fc08a0fa642a406845a5b731" podNamespace="kube-system" podName="kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:18.949644 kubelet[2639]: I0805 22:50:18.949556 2639 topology_manager.go:215] "Topology Admit Handler" podUID="2e6cc1e422e679acabeb5ec9196f3bff" podNamespace="kube-system" podName="kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:18.949644 kubelet[2639]: I0805 22:50:18.949597 2639 topology_manager.go:215] "Topology Admit Handler" podUID="075464afd0df46ba5b66ede948ece515" podNamespace="kube-system" podName="kube-scheduler-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:18.956812 kubelet[2639]: W0805 22:50:18.956740 2639 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 5 22:50:18.958924 kubelet[2639]: W0805 22:50:18.958610 2639 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 5 22:50:18.963685 kubelet[2639]: W0805 22:50:18.963605 2639 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 5 22:50:19.015585 kubelet[2639]: I0805 22:50:19.015553 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5081bd23fc08a0fa642a406845a5b731-k8s-certs\") pod \"kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"5081bd23fc08a0fa642a406845a5b731\") " pod="kube-system/kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:19.016039 kubelet[2639]: I0805 22:50:19.015749 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5081bd23fc08a0fa642a406845a5b731-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"5081bd23fc08a0fa642a406845a5b731\") " pod="kube-system/kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:19.016039 kubelet[2639]: I0805 22:50:19.015784 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e6cc1e422e679acabeb5ec9196f3bff-flexvolume-dir\") pod \"kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"2e6cc1e422e679acabeb5ec9196f3bff\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:19.016039 kubelet[2639]: I0805 22:50:19.015816 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e6cc1e422e679acabeb5ec9196f3bff-kubeconfig\") pod \"kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"2e6cc1e422e679acabeb5ec9196f3bff\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:19.016039 kubelet[2639]: I0805 22:50:19.015845 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/075464afd0df46ba5b66ede948ece515-kubeconfig\") pod \"kube-scheduler-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"075464afd0df46ba5b66ede948ece515\") " pod="kube-system/kube-scheduler-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:19.016162 kubelet[2639]: I0805 22:50:19.015868 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5081bd23fc08a0fa642a406845a5b731-ca-certs\") pod \"kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"5081bd23fc08a0fa642a406845a5b731\") " pod="kube-system/kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:19.016162 kubelet[2639]: I0805 22:50:19.015890 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e6cc1e422e679acabeb5ec9196f3bff-ca-certs\") pod \"kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"2e6cc1e422e679acabeb5ec9196f3bff\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:19.016162 kubelet[2639]: I0805 22:50:19.015930 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e6cc1e422e679acabeb5ec9196f3bff-k8s-certs\") pod \"kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"2e6cc1e422e679acabeb5ec9196f3bff\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:19.016162 kubelet[2639]: I0805 22:50:19.015959 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e6cc1e422e679acabeb5ec9196f3bff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal\" (UID: \"2e6cc1e422e679acabeb5ec9196f3bff\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:19.802943 kubelet[2639]: I0805 22:50:19.802451 2639 apiserver.go:52] "Watching apiserver"
Aug 5 22:50:19.812128
kubelet[2639]: I0805 22:50:19.812068 2639 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 5 22:50:19.891084 kubelet[2639]: W0805 22:50:19.890864 2639 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 5 22:50:19.891084 kubelet[2639]: E0805 22:50:19.890980 2639 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal"
Aug 5 22:50:19.920004 kubelet[2639]: I0805 22:50:19.919954 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012-1-0-4-e6fc6d4d35.novalocal" podStartSLOduration=1.919892334 podStartE2EDuration="1.919892334s" podCreationTimestamp="2024-08-05 22:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:50:19.906218685 +0000 UTC m=+1.215681628" watchObservedRunningTime="2024-08-05 22:50:19.919892334 +0000 UTC m=+1.229355267"
Aug 5 22:50:19.928983 kubelet[2639]: I0805 22:50:19.928736 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012-1-0-4-e6fc6d4d35.novalocal" podStartSLOduration=1.928694772 podStartE2EDuration="1.928694772s" podCreationTimestamp="2024-08-05 22:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:50:19.920154266 +0000 UTC m=+1.229617209" watchObservedRunningTime="2024-08-05 22:50:19.928694772 +0000 UTC m=+1.238157705"
Aug 5 22:50:19.937700 kubelet[2639]: I0805 22:50:19.937617 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012-1-0-4-e6fc6d4d35.novalocal" podStartSLOduration=1.937532024 podStartE2EDuration="1.937532024s" podCreationTimestamp="2024-08-05 22:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:50:19.929384978 +0000 UTC m=+1.238847951" watchObservedRunningTime="2024-08-05 22:50:19.937532024 +0000 UTC m=+1.246994967"
Aug 5 22:50:24.668545 sudo[1699]: pam_unix(sudo:session): session closed for user root
Aug 5 22:50:24.846213 sshd[1696]: pam_unix(sshd:session): session closed for user core
Aug 5 22:50:24.853600 systemd[1]: sshd@8-172.24.4.9:22-172.24.4.1:51974.service: Deactivated successfully.
Aug 5 22:50:24.859035 systemd[1]: session-11.scope: Deactivated successfully.
Aug 5 22:50:24.859752 systemd[1]: session-11.scope: Consumed 7.878s CPU time, 135.3M memory peak, 0B memory swap peak.
Aug 5 22:50:24.861432 systemd-logind[1427]: Session 11 logged out. Waiting for processes to exit.
Aug 5 22:50:24.864817 systemd-logind[1427]: Removed session 11.
Aug 5 22:50:31.683078 kubelet[2639]: I0805 22:50:31.682857 2639 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 5 22:50:31.685240 kubelet[2639]: I0805 22:50:31.684293 2639 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 5 22:50:31.685326 containerd[1444]: time="2024-08-05T22:50:31.684050541Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 5 22:50:32.376969 kubelet[2639]: I0805 22:50:32.376912 2639 topology_manager.go:215] "Topology Admit Handler" podUID="4c75599e-4f87-499a-bb27-5aedbc68a966" podNamespace="kube-system" podName="kube-proxy-x9hw8"
Aug 5 22:50:32.401206 systemd[1]: Created slice kubepods-besteffort-pod4c75599e_4f87_499a_bb27_5aedbc68a966.slice - libcontainer container kubepods-besteffort-pod4c75599e_4f87_499a_bb27_5aedbc68a966.slice.
Aug 5 22:50:32.410058 kubelet[2639]: I0805 22:50:32.409863 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c75599e-4f87-499a-bb27-5aedbc68a966-kube-proxy\") pod \"kube-proxy-x9hw8\" (UID: \"4c75599e-4f87-499a-bb27-5aedbc68a966\") " pod="kube-system/kube-proxy-x9hw8"
Aug 5 22:50:32.410058 kubelet[2639]: I0805 22:50:32.409922 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c75599e-4f87-499a-bb27-5aedbc68a966-xtables-lock\") pod \"kube-proxy-x9hw8\" (UID: \"4c75599e-4f87-499a-bb27-5aedbc68a966\") " pod="kube-system/kube-proxy-x9hw8"
Aug 5 22:50:32.410058 kubelet[2639]: I0805 22:50:32.409951 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjjwx\" (UniqueName: \"kubernetes.io/projected/4c75599e-4f87-499a-bb27-5aedbc68a966-kube-api-access-pjjwx\") pod \"kube-proxy-x9hw8\" (UID: \"4c75599e-4f87-499a-bb27-5aedbc68a966\") " pod="kube-system/kube-proxy-x9hw8"
Aug 5 22:50:32.410058 kubelet[2639]: I0805 22:50:32.409978 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c75599e-4f87-499a-bb27-5aedbc68a966-lib-modules\") pod \"kube-proxy-x9hw8\" (UID: \"4c75599e-4f87-499a-bb27-5aedbc68a966\") " pod="kube-system/kube-proxy-x9hw8"
Aug 5 22:50:32.715414 containerd[1444]: time="2024-08-05T22:50:32.715246849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x9hw8,Uid:4c75599e-4f87-499a-bb27-5aedbc68a966,Namespace:kube-system,Attempt:0,}"
Aug 5 22:50:32.783825 kubelet[2639]: I0805 22:50:32.782772 2639 topology_manager.go:215] "Topology Admit Handler" podUID="fad5aee8-86eb-4722-ae44-cabe7d22794f" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-sfngw"
Aug 5 22:50:32.791477 containerd[1444]: time="2024-08-05T22:50:32.785796703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:50:32.791477 containerd[1444]: time="2024-08-05T22:50:32.785865913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:32.791477 containerd[1444]: time="2024-08-05T22:50:32.785894356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:50:32.791477 containerd[1444]: time="2024-08-05T22:50:32.785913452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:32.808019 systemd[1]: Created slice kubepods-besteffort-podfad5aee8_86eb_4722_ae44_cabe7d22794f.slice - libcontainer container kubepods-besteffort-podfad5aee8_86eb_4722_ae44_cabe7d22794f.slice.
Aug 5 22:50:32.814015 kubelet[2639]: I0805 22:50:32.813994 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m79sx\" (UniqueName: \"kubernetes.io/projected/fad5aee8-86eb-4722-ae44-cabe7d22794f-kube-api-access-m79sx\") pod \"tigera-operator-76c4974c85-sfngw\" (UID: \"fad5aee8-86eb-4722-ae44-cabe7d22794f\") " pod="tigera-operator/tigera-operator-76c4974c85-sfngw"
Aug 5 22:50:32.814494 kubelet[2639]: I0805 22:50:32.814165 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fad5aee8-86eb-4722-ae44-cabe7d22794f-var-lib-calico\") pod \"tigera-operator-76c4974c85-sfngw\" (UID: \"fad5aee8-86eb-4722-ae44-cabe7d22794f\") " pod="tigera-operator/tigera-operator-76c4974c85-sfngw"
Aug 5 22:50:32.824673 systemd[1]: Started cri-containerd-7a660409f0acc219250d1f110ae2af2c89b9ff5db88e1e44e479aac23447fa75.scope - libcontainer container 7a660409f0acc219250d1f110ae2af2c89b9ff5db88e1e44e479aac23447fa75.
Aug 5 22:50:32.852583 containerd[1444]: time="2024-08-05T22:50:32.852539722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x9hw8,Uid:4c75599e-4f87-499a-bb27-5aedbc68a966,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a660409f0acc219250d1f110ae2af2c89b9ff5db88e1e44e479aac23447fa75\""
Aug 5 22:50:32.856129 containerd[1444]: time="2024-08-05T22:50:32.855561884Z" level=info msg="CreateContainer within sandbox \"7a660409f0acc219250d1f110ae2af2c89b9ff5db88e1e44e479aac23447fa75\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 5 22:50:32.895366 containerd[1444]: time="2024-08-05T22:50:32.895286653Z" level=info msg="CreateContainer within sandbox \"7a660409f0acc219250d1f110ae2af2c89b9ff5db88e1e44e479aac23447fa75\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"60dd77fd86db5eec0099cbfb4e8b0c298f213f04179790fb2e3721dc90046098\""
Aug 5 22:50:32.898253 containerd[1444]: time="2024-08-05T22:50:32.896651996Z" level=info msg="StartContainer for \"60dd77fd86db5eec0099cbfb4e8b0c298f213f04179790fb2e3721dc90046098\""
Aug 5 22:50:32.935935 systemd[1]: Started cri-containerd-60dd77fd86db5eec0099cbfb4e8b0c298f213f04179790fb2e3721dc90046098.scope - libcontainer container 60dd77fd86db5eec0099cbfb4e8b0c298f213f04179790fb2e3721dc90046098.
Aug 5 22:50:32.978824 containerd[1444]: time="2024-08-05T22:50:32.978714988Z" level=info msg="StartContainer for \"60dd77fd86db5eec0099cbfb4e8b0c298f213f04179790fb2e3721dc90046098\" returns successfully"
Aug 5 22:50:33.123064 containerd[1444]: time="2024-08-05T22:50:33.122988323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-sfngw,Uid:fad5aee8-86eb-4722-ae44-cabe7d22794f,Namespace:tigera-operator,Attempt:0,}"
Aug 5 22:50:33.175745 containerd[1444]: time="2024-08-05T22:50:33.175509674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:50:33.175745 containerd[1444]: time="2024-08-05T22:50:33.175607398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:33.175745 containerd[1444]: time="2024-08-05T22:50:33.175634228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:50:33.175745 containerd[1444]: time="2024-08-05T22:50:33.175652823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:33.200867 systemd[1]: Started cri-containerd-a62ad06756b0b4502c2c8586003437d050b6f187e73757e425e3feb479217842.scope - libcontainer container a62ad06756b0b4502c2c8586003437d050b6f187e73757e425e3feb479217842.
Aug 5 22:50:33.251224 containerd[1444]: time="2024-08-05T22:50:33.250832310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-sfngw,Uid:fad5aee8-86eb-4722-ae44-cabe7d22794f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a62ad06756b0b4502c2c8586003437d050b6f187e73757e425e3feb479217842\""
Aug 5 22:50:33.264303 containerd[1444]: time="2024-08-05T22:50:33.264060727Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Aug 5 22:50:33.955108 kubelet[2639]: I0805 22:50:33.954824 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-x9hw8" podStartSLOduration=1.95473818 podStartE2EDuration="1.95473818s" podCreationTimestamp="2024-08-05 22:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:50:33.952217248 +0000 UTC m=+15.261680292" watchObservedRunningTime="2024-08-05 22:50:33.95473818 +0000 UTC m=+15.264201153"
Aug 5 22:50:35.129487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount244515015.mount: Deactivated successfully.
Aug 5 22:50:35.866857 containerd[1444]: time="2024-08-05T22:50:35.866784279Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:50:35.868501 containerd[1444]: time="2024-08-05T22:50:35.868371127Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076060"
Aug 5 22:50:35.870069 containerd[1444]: time="2024-08-05T22:50:35.870008720Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:50:35.874132 containerd[1444]: time="2024-08-05T22:50:35.874015910Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:50:35.875049 containerd[1444]: time="2024-08-05T22:50:35.874899027Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.610795801s"
Aug 5 22:50:35.875049 containerd[1444]: time="2024-08-05T22:50:35.874938972Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Aug 5 22:50:35.884521 containerd[1444]: time="2024-08-05T22:50:35.884440513Z" level=info msg="CreateContainer within sandbox \"a62ad06756b0b4502c2c8586003437d050b6f187e73757e425e3feb479217842\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 5 22:50:35.907836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740457868.mount: Deactivated successfully.
Aug 5 22:50:35.920278 containerd[1444]: time="2024-08-05T22:50:35.920196194Z" level=info msg="CreateContainer within sandbox \"a62ad06756b0b4502c2c8586003437d050b6f187e73757e425e3feb479217842\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7164592b7c0a9e680d4958dfea2890ec9c6590436a60eebd231c632b7896a8ca\""
Aug 5 22:50:35.921563 containerd[1444]: time="2024-08-05T22:50:35.920672388Z" level=info msg="StartContainer for \"7164592b7c0a9e680d4958dfea2890ec9c6590436a60eebd231c632b7896a8ca\""
Aug 5 22:50:35.968631 systemd[1]: Started cri-containerd-7164592b7c0a9e680d4958dfea2890ec9c6590436a60eebd231c632b7896a8ca.scope - libcontainer container 7164592b7c0a9e680d4958dfea2890ec9c6590436a60eebd231c632b7896a8ca.
Aug 5 22:50:36.004400 containerd[1444]: time="2024-08-05T22:50:36.004356332Z" level=info msg="StartContainer for \"7164592b7c0a9e680d4958dfea2890ec9c6590436a60eebd231c632b7896a8ca\" returns successfully"
Aug 5 22:50:39.620383 kubelet[2639]: I0805 22:50:39.620286 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-sfngw" podStartSLOduration=4.9911204510000005 podStartE2EDuration="7.620168757s" podCreationTimestamp="2024-08-05 22:50:32 +0000 UTC" firstStartedPulling="2024-08-05 22:50:33.252977145 +0000 UTC m=+14.562440078" lastFinishedPulling="2024-08-05 22:50:35.882025441 +0000 UTC m=+17.191488384" observedRunningTime="2024-08-05 22:50:36.989526884 +0000 UTC m=+18.298989917" watchObservedRunningTime="2024-08-05 22:50:39.620168757 +0000 UTC m=+20.929631720"
Aug 5 22:50:39.621803 kubelet[2639]: I0805 22:50:39.621557 2639 topology_manager.go:215] "Topology Admit Handler" podUID="905289e8-61f6-4179-9d83-eec9641096a7" podNamespace="calico-system" podName="calico-typha-76497d9f57-ds8g6"
Aug 5 22:50:39.652954 systemd[1]: Created slice kubepods-besteffort-pod905289e8_61f6_4179_9d83_eec9641096a7.slice - libcontainer container kubepods-besteffort-pod905289e8_61f6_4179_9d83_eec9641096a7.slice.
Aug 5 22:50:39.763991 kubelet[2639]: I0805 22:50:39.763930 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/905289e8-61f6-4179-9d83-eec9641096a7-typha-certs\") pod \"calico-typha-76497d9f57-ds8g6\" (UID: \"905289e8-61f6-4179-9d83-eec9641096a7\") " pod="calico-system/calico-typha-76497d9f57-ds8g6"
Aug 5 22:50:39.763991 kubelet[2639]: I0805 22:50:39.763994 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9hzf\" (UniqueName: \"kubernetes.io/projected/905289e8-61f6-4179-9d83-eec9641096a7-kube-api-access-s9hzf\") pod \"calico-typha-76497d9f57-ds8g6\" (UID: \"905289e8-61f6-4179-9d83-eec9641096a7\") " pod="calico-system/calico-typha-76497d9f57-ds8g6"
Aug 5 22:50:39.764183 kubelet[2639]: I0805 22:50:39.764026 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/905289e8-61f6-4179-9d83-eec9641096a7-tigera-ca-bundle\") pod \"calico-typha-76497d9f57-ds8g6\" (UID: \"905289e8-61f6-4179-9d83-eec9641096a7\") " pod="calico-system/calico-typha-76497d9f57-ds8g6"
Aug 5 22:50:39.793056 kubelet[2639]: I0805 22:50:39.792340 2639 topology_manager.go:215] "Topology Admit Handler" podUID="57b6c657-f75f-4a10-bb2e-4d7fc5301c06" podNamespace="calico-system" podName="calico-node-5rhj8"
Aug 5 22:50:39.802566 systemd[1]: Created slice kubepods-besteffort-pod57b6c657_f75f_4a10_bb2e_4d7fc5301c06.slice - libcontainer container kubepods-besteffort-pod57b6c657_f75f_4a10_bb2e_4d7fc5301c06.slice.
Aug 5 22:50:39.965988 kubelet[2639]: I0805 22:50:39.965852 2639 topology_manager.go:215] "Topology Admit Handler" podUID="a3054ae9-283f-4e4a-bf3e-fdb7b75b0214" podNamespace="calico-system" podName="csi-node-driver-7trhv"
Aug 5 22:50:39.969162 kubelet[2639]: E0805 22:50:39.968868 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trhv" podUID="a3054ae9-283f-4e4a-bf3e-fdb7b75b0214"
Aug 5 22:50:39.990005 kubelet[2639]: I0805 22:50:39.989952 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-cni-net-dir\") pod \"calico-node-5rhj8\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") " pod="calico-system/calico-node-5rhj8"
Aug 5 22:50:39.990880 kubelet[2639]: I0805 22:50:39.990240 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-cni-log-dir\") pod \"calico-node-5rhj8\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") " pod="calico-system/calico-node-5rhj8"
Aug 5 22:50:39.990880 kubelet[2639]: I0805 22:50:39.990276 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-tigera-ca-bundle\") pod \"calico-node-5rhj8\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") " pod="calico-system/calico-node-5rhj8"
Aug 5 22:50:39.990880 kubelet[2639]: I0805 22:50:39.990302 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-node-certs\") pod \"calico-node-5rhj8\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") " pod="calico-system/calico-node-5rhj8"
Aug 5 22:50:39.990880 kubelet[2639]: I0805 22:50:39.990327 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-lib-modules\") pod \"calico-node-5rhj8\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") " pod="calico-system/calico-node-5rhj8"
Aug 5 22:50:39.990880 kubelet[2639]: I0805 22:50:39.990375 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-xtables-lock\") pod \"calico-node-5rhj8\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") " pod="calico-system/calico-node-5rhj8"
Aug 5 22:50:39.991051 kubelet[2639]: I0805 22:50:39.990398 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-policysync\") pod \"calico-node-5rhj8\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") " pod="calico-system/calico-node-5rhj8"
Aug 5 22:50:39.991051 kubelet[2639]: I0805 22:50:39.990422 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-cni-bin-dir\") pod \"calico-node-5rhj8\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") " pod="calico-system/calico-node-5rhj8"
Aug 5 22:50:39.991051 kubelet[2639]: I0805 22:50:39.990446 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-flexvol-driver-host\") pod \"calico-node-5rhj8\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") " pod="calico-system/calico-node-5rhj8"
Aug 5 22:50:39.991051 kubelet[2639]: I0805 22:50:39.990497 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-var-run-calico\") pod \"calico-node-5rhj8\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") " pod="calico-system/calico-node-5rhj8"
Aug 5 22:50:39.991051 kubelet[2639]: I0805 22:50:39.990525 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-var-lib-calico\") pod \"calico-node-5rhj8\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") " pod="calico-system/calico-node-5rhj8"
Aug 5 22:50:39.991179 kubelet[2639]: I0805 22:50:39.990552 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2jjb\" (UniqueName: \"kubernetes.io/projected/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-kube-api-access-w2jjb\") pod \"calico-node-5rhj8\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") " pod="calico-system/calico-node-5rhj8"
Aug 5 22:50:40.000904 containerd[1444]: time="2024-08-05T22:50:40.000818702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76497d9f57-ds8g6,Uid:905289e8-61f6-4179-9d83-eec9641096a7,Namespace:calico-system,Attempt:0,}"
Aug 5 22:50:40.091514 kubelet[2639]: I0805 22:50:40.090971 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a3054ae9-283f-4e4a-bf3e-fdb7b75b0214-varrun\") pod \"csi-node-driver-7trhv\" (UID: \"a3054ae9-283f-4e4a-bf3e-fdb7b75b0214\") " pod="calico-system/csi-node-driver-7trhv"
Aug 5 22:50:40.091514 kubelet[2639]: I0805 22:50:40.091027 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3054ae9-283f-4e4a-bf3e-fdb7b75b0214-kubelet-dir\") pod \"csi-node-driver-7trhv\" (UID: \"a3054ae9-283f-4e4a-bf3e-fdb7b75b0214\") " pod="calico-system/csi-node-driver-7trhv"
Aug 5 22:50:40.091514 kubelet[2639]: I0805 22:50:40.091069 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a3054ae9-283f-4e4a-bf3e-fdb7b75b0214-registration-dir\") pod \"csi-node-driver-7trhv\" (UID: \"a3054ae9-283f-4e4a-bf3e-fdb7b75b0214\") " pod="calico-system/csi-node-driver-7trhv"
Aug 5 22:50:40.091514 kubelet[2639]: I0805 22:50:40.091298 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmqsz\" (UniqueName: \"kubernetes.io/projected/a3054ae9-283f-4e4a-bf3e-fdb7b75b0214-kube-api-access-pmqsz\") pod \"csi-node-driver-7trhv\" (UID: \"a3054ae9-283f-4e4a-bf3e-fdb7b75b0214\") " pod="calico-system/csi-node-driver-7trhv"
Aug 5 22:50:40.091514 kubelet[2639]: I0805 22:50:40.091329 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a3054ae9-283f-4e4a-bf3e-fdb7b75b0214-socket-dir\") pod \"csi-node-driver-7trhv\" (UID: \"a3054ae9-283f-4e4a-bf3e-fdb7b75b0214\") " pod="calico-system/csi-node-driver-7trhv"
Aug 5 22:50:40.113746 kubelet[2639]: E0805 22:50:40.113702 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.113968 kubelet[2639]: W0805 22:50:40.113837 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.113968 kubelet[2639]: E0805 22:50:40.113868 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.185728 kubelet[2639]: E0805 22:50:40.185685 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.185728 kubelet[2639]: W0805 22:50:40.185721 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.185902 kubelet[2639]: E0805 22:50:40.185757 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.192424 kubelet[2639]: E0805 22:50:40.192241 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.192424 kubelet[2639]: W0805 22:50:40.192265 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.192424 kubelet[2639]: E0805 22:50:40.192288 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.193025 kubelet[2639]: E0805 22:50:40.192876 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.193025 kubelet[2639]: W0805 22:50:40.192889 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.193025 kubelet[2639]: E0805 22:50:40.192913 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.193412 kubelet[2639]: E0805 22:50:40.193359 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.193412 kubelet[2639]: W0805 22:50:40.193372 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.193603 kubelet[2639]: E0805 22:50:40.193479 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.194565 kubelet[2639]: E0805 22:50:40.194524 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.194565 kubelet[2639]: W0805 22:50:40.194556 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.194704 kubelet[2639]: E0805 22:50:40.194589 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.194861 kubelet[2639]: E0805 22:50:40.194841 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.194861 kubelet[2639]: W0805 22:50:40.194861 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.195096 kubelet[2639]: E0805 22:50:40.194967 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.195096 kubelet[2639]: E0805 22:50:40.195060 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.195096 kubelet[2639]: W0805 22:50:40.195069 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.195186 containerd[1444]: time="2024-08-05T22:50:40.192840658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:50:40.195186 containerd[1444]: time="2024-08-05T22:50:40.194717268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:40.195186 containerd[1444]: time="2024-08-05T22:50:40.194741554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:50:40.195186 containerd[1444]: time="2024-08-05T22:50:40.194759227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:40.195541 kubelet[2639]: E0805 22:50:40.195250 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.195541 kubelet[2639]: E0805 22:50:40.195377 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.195541 kubelet[2639]: W0805 22:50:40.195388 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.195639 kubelet[2639]: E0805 22:50:40.195602 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.196070 kubelet[2639]: E0805 22:50:40.196046 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.196070 kubelet[2639]: W0805 22:50:40.196063 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.196222 kubelet[2639]: E0805 22:50:40.196106 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.196443 kubelet[2639]: E0805 22:50:40.196424 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.196443 kubelet[2639]: W0805 22:50:40.196441 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.196631 kubelet[2639]: E0805 22:50:40.196565 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.198509 kubelet[2639]: E0805 22:50:40.197667 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.198509 kubelet[2639]: W0805 22:50:40.197686 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.198509 kubelet[2639]: E0805 22:50:40.197926 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.198509 kubelet[2639]: W0805 22:50:40.197957 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.198509 kubelet[2639]: E0805 22:50:40.198091 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.198509 kubelet[2639]: W0805 22:50:40.198099 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.198509 kubelet[2639]: E0805 22:50:40.198244 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.198509 kubelet[2639]: W0805 22:50:40.198252 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.198509 kubelet[2639]: E0805 22:50:40.198431 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.198509 kubelet[2639]: W0805 22:50:40.198439 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.198830 kubelet[2639]: E0805 22:50:40.198502 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.198830 kubelet[2639]: E0805 22:50:40.198540 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.198830 kubelet[2639]: E0805 22:50:40.198683 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.198830 kubelet[2639]: W0805 22:50:40.198692 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.198830 kubelet[2639]: E0805 22:50:40.198729 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.198964 kubelet[2639]: E0805 22:50:40.198902 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.198964 kubelet[2639]: W0805 22:50:40.198914 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.198964 kubelet[2639]: E0805 22:50:40.198926 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.198964 kubelet[2639]: E0805 22:50:40.198944 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.199490 kubelet[2639]: E0805 22:50:40.199115 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.199490 kubelet[2639]: E0805 22:50:40.199139 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.199490 kubelet[2639]: W0805 22:50:40.199150 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.199490 kubelet[2639]: E0805 22:50:40.199154 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.199490 kubelet[2639]: E0805 22:50:40.199164 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.199490 kubelet[2639]: E0805 22:50:40.199382 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.199490 kubelet[2639]: W0805 22:50:40.199393 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.199490 kubelet[2639]: E0805 22:50:40.199418 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.200795 kubelet[2639]: E0805 22:50:40.200754 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.200795 kubelet[2639]: W0805 22:50:40.200774 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.200795 kubelet[2639]: E0805 22:50:40.200796 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.200972 kubelet[2639]: E0805 22:50:40.200952 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.200972 kubelet[2639]: W0805 22:50:40.200968 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.201066 kubelet[2639]: E0805 22:50:40.200981 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.201142 kubelet[2639]: E0805 22:50:40.201122 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.201142 kubelet[2639]: W0805 22:50:40.201137 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.201227 kubelet[2639]: E0805 22:50:40.201214 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.201838 kubelet[2639]: E0805 22:50:40.201798 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.202527 kubelet[2639]: W0805 22:50:40.202410 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.202823 kubelet[2639]: E0805 22:50:40.202667 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:40.202823 kubelet[2639]: E0805 22:50:40.202746 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:40.202823 kubelet[2639]: W0805 22:50:40.202756 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:40.202823 kubelet[2639]: E0805 22:50:40.202784 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 5 22:50:40.204144 kubelet[2639]: E0805 22:50:40.204107 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:40.204144 kubelet[2639]: W0805 22:50:40.204125 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:40.204144 kubelet[2639]: E0805 22:50:40.204140 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:50:40.219646 systemd[1]: Started cri-containerd-1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec.scope - libcontainer container 1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec. Aug 5 22:50:40.227856 kubelet[2639]: E0805 22:50:40.227798 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:40.227856 kubelet[2639]: W0805 22:50:40.227821 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:40.227856 kubelet[2639]: E0805 22:50:40.227843 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:50:40.277496 containerd[1444]: time="2024-08-05T22:50:40.277423202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76497d9f57-ds8g6,Uid:905289e8-61f6-4179-9d83-eec9641096a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\"" Aug 5 22:50:40.293837 kubelet[2639]: E0805 22:50:40.293798 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:40.293837 kubelet[2639]: W0805 22:50:40.293824 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:40.293837 kubelet[2639]: E0805 22:50:40.293849 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:50:40.386542 containerd[1444]: time="2024-08-05T22:50:40.386362437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 22:50:40.407165 containerd[1444]: time="2024-08-05T22:50:40.407082475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5rhj8,Uid:57b6c657-f75f-4a10-bb2e-4d7fc5301c06,Namespace:calico-system,Attempt:0,}" Aug 5 22:50:40.632486 containerd[1444]: time="2024-08-05T22:50:40.632309775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:50:40.632670 containerd[1444]: time="2024-08-05T22:50:40.632533164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:50:40.632670 containerd[1444]: time="2024-08-05T22:50:40.632622702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:50:40.632733 containerd[1444]: time="2024-08-05T22:50:40.632679879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:50:40.656701 systemd[1]: Started cri-containerd-cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086.scope - libcontainer container cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086. Aug 5 22:50:40.706720 containerd[1444]: time="2024-08-05T22:50:40.706674458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5rhj8,Uid:57b6c657-f75f-4a10-bb2e-4d7fc5301c06,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\"" Aug 5 22:50:41.850693 kubelet[2639]: E0805 22:50:41.850616 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trhv" podUID="a3054ae9-283f-4e4a-bf3e-fdb7b75b0214" Aug 5 22:50:43.850065 kubelet[2639]: E0805 22:50:43.849989 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trhv" podUID="a3054ae9-283f-4e4a-bf3e-fdb7b75b0214" Aug 5 22:50:44.659470 containerd[1444]: time="2024-08-05T22:50:44.659386227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:50:44.661772 containerd[1444]: time="2024-08-05T22:50:44.661722039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Aug 5 22:50:44.663415 
containerd[1444]: time="2024-08-05T22:50:44.663384679Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:50:44.672162 containerd[1444]: time="2024-08-05T22:50:44.672103075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:50:44.673046 containerd[1444]: time="2024-08-05T22:50:44.673006020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 4.286564734s" Aug 5 22:50:44.673046 containerd[1444]: time="2024-08-05T22:50:44.673039562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Aug 5 22:50:44.674506 containerd[1444]: time="2024-08-05T22:50:44.674286652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 22:50:44.717875 containerd[1444]: time="2024-08-05T22:50:44.717819937Z" level=info msg="CreateContainer within sandbox \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:50:44.758225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694732908.mount: Deactivated successfully. 
Aug 5 22:50:44.771560 containerd[1444]: time="2024-08-05T22:50:44.771479299Z" level=info msg="CreateContainer within sandbox \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\""
Aug 5 22:50:44.772048 containerd[1444]: time="2024-08-05T22:50:44.772009083Z" level=info msg="StartContainer for \"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\""
Aug 5 22:50:44.811614 systemd[1]: Started cri-containerd-fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c.scope - libcontainer container fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c.
Aug 5 22:50:44.862616 containerd[1444]: time="2024-08-05T22:50:44.862451445Z" level=info msg="StartContainer for \"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\" returns successfully"
Aug 5 22:50:45.031602 containerd[1444]: time="2024-08-05T22:50:45.031222392Z" level=info msg="StopContainer for \"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\" with timeout 300 (s)"
Aug 5 22:50:45.033024 containerd[1444]: time="2024-08-05T22:50:45.032754296Z" level=info msg="Stop container \"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\" with signal terminated"
Aug 5 22:50:45.061827 systemd[1]: cri-containerd-fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c.scope: Deactivated successfully.
Aug 5 22:50:45.527911 containerd[1444]: time="2024-08-05T22:50:45.527828614Z" level=info msg="shim disconnected" id=fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c namespace=k8s.io
Aug 5 22:50:45.527911 containerd[1444]: time="2024-08-05T22:50:45.527886152Z" level=warning msg="cleaning up after shim disconnected" id=fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c namespace=k8s.io
Aug 5 22:50:45.527911 containerd[1444]: time="2024-08-05T22:50:45.527897042Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:50:45.567759 containerd[1444]: time="2024-08-05T22:50:45.567514729Z" level=info msg="StopContainer for \"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\" returns successfully"
Aug 5 22:50:45.569222 containerd[1444]: time="2024-08-05T22:50:45.568047428Z" level=info msg="StopPodSandbox for \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\""
Aug 5 22:50:45.569222 containerd[1444]: time="2024-08-05T22:50:45.568076703Z" level=info msg="Container to stop \"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 5 22:50:45.577932 systemd[1]: cri-containerd-1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec.scope: Deactivated successfully.
Aug 5 22:50:45.624806 containerd[1444]: time="2024-08-05T22:50:45.624166684Z" level=info msg="shim disconnected" id=1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec namespace=k8s.io
Aug 5 22:50:45.624806 containerd[1444]: time="2024-08-05T22:50:45.624242636Z" level=warning msg="cleaning up after shim disconnected" id=1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec namespace=k8s.io
Aug 5 22:50:45.624806 containerd[1444]: time="2024-08-05T22:50:45.624255190Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:50:45.644819 containerd[1444]: time="2024-08-05T22:50:45.644559954Z" level=info msg="TearDown network for sandbox \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\" successfully"
Aug 5 22:50:45.644819 containerd[1444]: time="2024-08-05T22:50:45.644593857Z" level=info msg="StopPodSandbox for \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\" returns successfully"
Aug 5 22:50:45.675824 kubelet[2639]: I0805 22:50:45.674677 2639 topology_manager.go:215] "Topology Admit Handler" podUID="95162d48-5fa5-4491-b758-4a83d68ac3a3" podNamespace="calico-system" podName="calico-typha-69d94dd9db-k56px"
Aug 5 22:50:45.675824 kubelet[2639]: E0805 22:50:45.674741 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="905289e8-61f6-4179-9d83-eec9641096a7" containerName="calico-typha"
Aug 5 22:50:45.675824 kubelet[2639]: I0805 22:50:45.674767 2639 memory_manager.go:354] "RemoveStaleState removing state" podUID="905289e8-61f6-4179-9d83-eec9641096a7" containerName="calico-typha"
Aug 5 22:50:45.683962 systemd[1]: Created slice kubepods-besteffort-pod95162d48_5fa5_4491_b758_4a83d68ac3a3.slice - libcontainer container kubepods-besteffort-pod95162d48_5fa5_4491_b758_4a83d68ac3a3.slice.
Aug 5 22:50:45.691352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c-rootfs.mount: Deactivated successfully.
Aug 5 22:50:45.691500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec-rootfs.mount: Deactivated successfully.
Aug 5 22:50:45.691579 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec-shm.mount: Deactivated successfully.
Aug 5 22:50:45.737661 kubelet[2639]: E0805 22:50:45.737530 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.738101 kubelet[2639]: W0805 22:50:45.737916 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.738101 kubelet[2639]: E0805 22:50:45.737942 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.738717 kubelet[2639]: E0805 22:50:45.738601 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.738717 kubelet[2639]: W0805 22:50:45.738612 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.738717 kubelet[2639]: E0805 22:50:45.738625 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.738884 kubelet[2639]: E0805 22:50:45.738873 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.739036 kubelet[2639]: W0805 22:50:45.738934 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.739036 kubelet[2639]: E0805 22:50:45.738953 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.740439 kubelet[2639]: E0805 22:50:45.740314 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.740439 kubelet[2639]: W0805 22:50:45.740325 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.740439 kubelet[2639]: E0805 22:50:45.740338 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.740674 kubelet[2639]: E0805 22:50:45.740622 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.740674 kubelet[2639]: W0805 22:50:45.740633 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.740674 kubelet[2639]: E0805 22:50:45.740645 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.741062 kubelet[2639]: E0805 22:50:45.740962 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.741062 kubelet[2639]: W0805 22:50:45.740973 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.741062 kubelet[2639]: E0805 22:50:45.740985 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.741394 kubelet[2639]: E0805 22:50:45.741317 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.741394 kubelet[2639]: W0805 22:50:45.741326 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.741597 kubelet[2639]: E0805 22:50:45.741338 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.742067 kubelet[2639]: E0805 22:50:45.742056 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.742248 kubelet[2639]: W0805 22:50:45.742140 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.742248 kubelet[2639]: E0805 22:50:45.742159 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.742557 kubelet[2639]: E0805 22:50:45.742437 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.742557 kubelet[2639]: W0805 22:50:45.742448 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.742862 kubelet[2639]: E0805 22:50:45.742683 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.743228 kubelet[2639]: E0805 22:50:45.743102 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.743228 kubelet[2639]: W0805 22:50:45.743113 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.743228 kubelet[2639]: E0805 22:50:45.743124 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.743796 kubelet[2639]: E0805 22:50:45.743683 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.743796 kubelet[2639]: W0805 22:50:45.743694 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.743796 kubelet[2639]: E0805 22:50:45.743706 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.744871 kubelet[2639]: E0805 22:50:45.743955 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.744871 kubelet[2639]: W0805 22:50:45.743969 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.744871 kubelet[2639]: E0805 22:50:45.743981 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.839923 kubelet[2639]: E0805 22:50:45.838610 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.839923 kubelet[2639]: W0805 22:50:45.838628 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.839923 kubelet[2639]: E0805 22:50:45.838649 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.839923 kubelet[2639]: I0805 22:50:45.838705 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/905289e8-61f6-4179-9d83-eec9641096a7-tigera-ca-bundle\") pod \"905289e8-61f6-4179-9d83-eec9641096a7\" (UID: \"905289e8-61f6-4179-9d83-eec9641096a7\") "
Aug 5 22:50:45.839923 kubelet[2639]: E0805 22:50:45.838908 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.839923 kubelet[2639]: W0805 22:50:45.838931 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.839923 kubelet[2639]: E0805 22:50:45.838945 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.839923 kubelet[2639]: I0805 22:50:45.838971 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9hzf\" (UniqueName: \"kubernetes.io/projected/905289e8-61f6-4179-9d83-eec9641096a7-kube-api-access-s9hzf\") pod \"905289e8-61f6-4179-9d83-eec9641096a7\" (UID: \"905289e8-61f6-4179-9d83-eec9641096a7\") "
Aug 5 22:50:45.839923 kubelet[2639]: E0805 22:50:45.839133 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.840565 kubelet[2639]: W0805 22:50:45.839143 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.840565 kubelet[2639]: E0805 22:50:45.839154 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.840565 kubelet[2639]: I0805 22:50:45.839177 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/905289e8-61f6-4179-9d83-eec9641096a7-typha-certs\") pod \"905289e8-61f6-4179-9d83-eec9641096a7\" (UID: \"905289e8-61f6-4179-9d83-eec9641096a7\") "
Aug 5 22:50:45.840565 kubelet[2639]: E0805 22:50:45.839394 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.840565 kubelet[2639]: W0805 22:50:45.839403 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.840565 kubelet[2639]: E0805 22:50:45.839416 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.840565 kubelet[2639]: I0805 22:50:45.839442 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95162d48-5fa5-4491-b758-4a83d68ac3a3-tigera-ca-bundle\") pod \"calico-typha-69d94dd9db-k56px\" (UID: \"95162d48-5fa5-4491-b758-4a83d68ac3a3\") " pod="calico-system/calico-typha-69d94dd9db-k56px"
Aug 5 22:50:45.840565 kubelet[2639]: E0805 22:50:45.839638 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.841131 kubelet[2639]: W0805 22:50:45.839648 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.841131 kubelet[2639]: E0805 22:50:45.839659 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.841131 kubelet[2639]: I0805 22:50:45.839679 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/95162d48-5fa5-4491-b758-4a83d68ac3a3-typha-certs\") pod \"calico-typha-69d94dd9db-k56px\" (UID: \"95162d48-5fa5-4491-b758-4a83d68ac3a3\") " pod="calico-system/calico-typha-69d94dd9db-k56px"
Aug 5 22:50:45.841131 kubelet[2639]: E0805 22:50:45.839850 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.841131 kubelet[2639]: W0805 22:50:45.839860 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.841131 kubelet[2639]: E0805 22:50:45.839872 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.841131 kubelet[2639]: I0805 22:50:45.839892 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h86wz\" (UniqueName: \"kubernetes.io/projected/95162d48-5fa5-4491-b758-4a83d68ac3a3-kube-api-access-h86wz\") pod \"calico-typha-69d94dd9db-k56px\" (UID: \"95162d48-5fa5-4491-b758-4a83d68ac3a3\") " pod="calico-system/calico-typha-69d94dd9db-k56px"
Aug 5 22:50:45.841131 kubelet[2639]: E0805 22:50:45.840106 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.847910 kubelet[2639]: W0805 22:50:45.840116 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.847910 kubelet[2639]: E0805 22:50:45.840128 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.847910 kubelet[2639]: E0805 22:50:45.840282 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.847910 kubelet[2639]: W0805 22:50:45.840291 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.847910 kubelet[2639]: E0805 22:50:45.840303 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.847910 kubelet[2639]: E0805 22:50:45.840491 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.847910 kubelet[2639]: W0805 22:50:45.840500 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.847910 kubelet[2639]: E0805 22:50:45.840512 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.849562 kubelet[2639]: E0805 22:50:45.848884 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.849562 kubelet[2639]: W0805 22:50:45.848904 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.849562 kubelet[2639]: E0805 22:50:45.848926 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.851312 kubelet[2639]: E0805 22:50:45.850799 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trhv" podUID="a3054ae9-283f-4e4a-bf3e-fdb7b75b0214"
Aug 5 22:50:45.851312 kubelet[2639]: I0805 22:50:45.851168 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905289e8-61f6-4179-9d83-eec9641096a7-kube-api-access-s9hzf" (OuterVolumeSpecName: "kube-api-access-s9hzf") pod "905289e8-61f6-4179-9d83-eec9641096a7" (UID: "905289e8-61f6-4179-9d83-eec9641096a7"). InnerVolumeSpecName "kube-api-access-s9hzf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 5 22:50:45.854156 systemd[1]: var-lib-kubelet-pods-905289e8\x2d61f6\x2d4179\x2d9d83\x2deec9641096a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds9hzf.mount: Deactivated successfully.
Aug 5 22:50:45.856177 kubelet[2639]: E0805 22:50:45.856155 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.856300 kubelet[2639]: W0805 22:50:45.856285 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.856401 kubelet[2639]: E0805 22:50:45.856389 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.856851 kubelet[2639]: E0805 22:50:45.856773 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.856851 kubelet[2639]: W0805 22:50:45.856785 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.856851 kubelet[2639]: E0805 22:50:45.856799 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:50:45.857520 kubelet[2639]: E0805 22:50:45.857412 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:50:45.857520 kubelet[2639]: W0805 22:50:45.857425 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:50:45.857520 kubelet[2639]: E0805 22:50:45.857439 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 5 22:50:45.857819 kubelet[2639]: E0805 22:50:45.857791 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.857966 kubelet[2639]: W0805 22:50:45.857802 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.857966 kubelet[2639]: E0805 22:50:45.857906 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:50:45.858609 kubelet[2639]: E0805 22:50:45.858415 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.858609 kubelet[2639]: W0805 22:50:45.858428 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.858609 kubelet[2639]: E0805 22:50:45.858442 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:50:45.859155 kubelet[2639]: I0805 22:50:45.859115 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/905289e8-61f6-4179-9d83-eec9641096a7-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "905289e8-61f6-4179-9d83-eec9641096a7" (UID: "905289e8-61f6-4179-9d83-eec9641096a7"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 22:50:45.861499 kubelet[2639]: I0805 22:50:45.861397 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905289e8-61f6-4179-9d83-eec9641096a7-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "905289e8-61f6-4179-9d83-eec9641096a7" (UID: "905289e8-61f6-4179-9d83-eec9641096a7"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 5 22:50:45.864226 systemd[1]: var-lib-kubelet-pods-905289e8\x2d61f6\x2d4179\x2d9d83\x2deec9641096a7-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Aug 5 22:50:45.868930 systemd[1]: var-lib-kubelet-pods-905289e8\x2d61f6\x2d4179\x2d9d83\x2deec9641096a7-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Aug 5 22:50:45.941515 kubelet[2639]: E0805 22:50:45.940506 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.941515 kubelet[2639]: W0805 22:50:45.941509 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.941707 kubelet[2639]: E0805 22:50:45.941534 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:50:45.941853 kubelet[2639]: E0805 22:50:45.941838 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.941853 kubelet[2639]: W0805 22:50:45.941851 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.941939 kubelet[2639]: E0805 22:50:45.941870 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:50:45.942104 kubelet[2639]: E0805 22:50:45.942089 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.942104 kubelet[2639]: W0805 22:50:45.942101 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.942250 kubelet[2639]: E0805 22:50:45.942118 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:50:45.942250 kubelet[2639]: I0805 22:50:45.942174 2639 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/905289e8-61f6-4179-9d83-eec9641096a7-typha-certs\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\"" Aug 5 22:50:45.942250 kubelet[2639]: I0805 22:50:45.942191 2639 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/905289e8-61f6-4179-9d83-eec9641096a7-tigera-ca-bundle\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\"" Aug 5 22:50:45.942250 kubelet[2639]: I0805 22:50:45.942206 2639 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-s9hzf\" (UniqueName: \"kubernetes.io/projected/905289e8-61f6-4179-9d83-eec9641096a7-kube-api-access-s9hzf\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\"" Aug 5 22:50:45.942505 kubelet[2639]: E0805 22:50:45.942384 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.942505 kubelet[2639]: W0805 22:50:45.942393 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.942505 kubelet[2639]: E0805 22:50:45.942404 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:50:45.942729 kubelet[2639]: E0805 22:50:45.942605 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.942729 kubelet[2639]: W0805 22:50:45.942613 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.942729 kubelet[2639]: E0805 22:50:45.942630 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:50:45.943425 kubelet[2639]: E0805 22:50:45.942798 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.943425 kubelet[2639]: W0805 22:50:45.942806 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.943425 kubelet[2639]: E0805 22:50:45.942884 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:50:45.943425 kubelet[2639]: E0805 22:50:45.943025 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.943425 kubelet[2639]: W0805 22:50:45.943048 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.943425 kubelet[2639]: E0805 22:50:45.943168 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:50:45.943425 kubelet[2639]: E0805 22:50:45.943363 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.943425 kubelet[2639]: W0805 22:50:45.943373 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.943726 kubelet[2639]: E0805 22:50:45.943511 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:50:45.945477 kubelet[2639]: E0805 22:50:45.944594 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.945477 kubelet[2639]: W0805 22:50:45.944608 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.945477 kubelet[2639]: E0805 22:50:45.944627 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:50:45.945477 kubelet[2639]: E0805 22:50:45.945084 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.945477 kubelet[2639]: W0805 22:50:45.945093 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.945477 kubelet[2639]: E0805 22:50:45.945107 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:50:45.945477 kubelet[2639]: E0805 22:50:45.945328 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.945477 kubelet[2639]: W0805 22:50:45.945336 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.945477 kubelet[2639]: E0805 22:50:45.945347 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:50:45.945477 kubelet[2639]: E0805 22:50:45.945573 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.945899 kubelet[2639]: W0805 22:50:45.945581 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.945899 kubelet[2639]: E0805 22:50:45.945752 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.945899 kubelet[2639]: W0805 22:50:45.945760 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.945899 kubelet[2639]: E0805 22:50:45.945793 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:50:45.946000 kubelet[2639]: E0805 22:50:45.945936 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.946000 kubelet[2639]: W0805 22:50:45.945966 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.946000 kubelet[2639]: E0805 22:50:45.945979 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:50:45.946478 kubelet[2639]: E0805 22:50:45.946100 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:50:45.946478 kubelet[2639]: E0805 22:50:45.946149 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.946478 kubelet[2639]: W0805 22:50:45.946157 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.946478 kubelet[2639]: E0805 22:50:45.946169 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:50:45.947695 kubelet[2639]: E0805 22:50:45.947679 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.947856 kubelet[2639]: W0805 22:50:45.947802 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.948367 kubelet[2639]: E0805 22:50:45.948352 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:50:45.955962 kubelet[2639]: E0805 22:50:45.955940 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.956132 kubelet[2639]: W0805 22:50:45.956079 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.956132 kubelet[2639]: E0805 22:50:45.956104 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:50:45.969311 kubelet[2639]: E0805 22:50:45.969279 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:50:45.969311 kubelet[2639]: W0805 22:50:45.969304 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:50:45.969515 kubelet[2639]: E0805 22:50:45.969331 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:50:45.992116 containerd[1444]: time="2024-08-05T22:50:45.992065098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69d94dd9db-k56px,Uid:95162d48-5fa5-4491-b758-4a83d68ac3a3,Namespace:calico-system,Attempt:0,}" Aug 5 22:50:46.041228 kubelet[2639]: I0805 22:50:46.041156 2639 scope.go:117] "RemoveContainer" containerID="fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c" Aug 5 22:50:46.044119 containerd[1444]: time="2024-08-05T22:50:46.042113752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:50:46.044119 containerd[1444]: time="2024-08-05T22:50:46.042206927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:50:46.044119 containerd[1444]: time="2024-08-05T22:50:46.042235020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:50:46.044119 containerd[1444]: time="2024-08-05T22:50:46.042254847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:50:46.052769 containerd[1444]: time="2024-08-05T22:50:46.051204587Z" level=info msg="RemoveContainer for \"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\"" Aug 5 22:50:46.055966 systemd[1]: Removed slice kubepods-besteffort-pod905289e8_61f6_4179_9d83_eec9641096a7.slice - libcontainer container kubepods-besteffort-pod905289e8_61f6_4179_9d83_eec9641096a7.slice. Aug 5 22:50:46.064838 containerd[1444]: time="2024-08-05T22:50:46.064344629Z" level=info msg="RemoveContainer for \"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\" returns successfully" Aug 5 22:50:46.066195 kubelet[2639]: I0805 22:50:46.065272 2639 scope.go:117] "RemoveContainer" containerID="fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c" Aug 5 22:50:46.066360 containerd[1444]: time="2024-08-05T22:50:46.066288416Z" level=error msg="ContainerStatus for \"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\": not found" Aug 5 22:50:46.066919 kubelet[2639]: E0805 22:50:46.066650 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\": not found" containerID="fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c" Aug 5 22:50:46.066919 kubelet[2639]: I0805 22:50:46.066723 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c"} err="failed to get container status \"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"fd1f088b15eb3c70c4b96c9a130c3a46c5a29d406a956c085224bccb4e4e015c\": not found" Aug 5 22:50:46.086686 systemd[1]: Started cri-containerd-f356892f82252a59a3dfdfa673b1c29dfeaaac6af56f0637998e69e02f4df03d.scope - libcontainer container f356892f82252a59a3dfdfa673b1c29dfeaaac6af56f0637998e69e02f4df03d. Aug 5 22:50:46.154252 containerd[1444]: time="2024-08-05T22:50:46.154167822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69d94dd9db-k56px,Uid:95162d48-5fa5-4491-b758-4a83d68ac3a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"f356892f82252a59a3dfdfa673b1c29dfeaaac6af56f0637998e69e02f4df03d\"" Aug 5 22:50:46.165237 containerd[1444]: time="2024-08-05T22:50:46.165114669Z" level=info msg="CreateContainer within sandbox \"f356892f82252a59a3dfdfa673b1c29dfeaaac6af56f0637998e69e02f4df03d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:50:46.186152 containerd[1444]: time="2024-08-05T22:50:46.186070855Z" level=info msg="CreateContainer within sandbox \"f356892f82252a59a3dfdfa673b1c29dfeaaac6af56f0637998e69e02f4df03d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"edc6663f847b0b6ed9f36d850a6c8ec904280e10c7b09e2e0e1d3a9bcd255f3c\"" Aug 5 22:50:46.187842 containerd[1444]: time="2024-08-05T22:50:46.187801461Z" level=info msg="StartContainer for \"edc6663f847b0b6ed9f36d850a6c8ec904280e10c7b09e2e0e1d3a9bcd255f3c\"" Aug 5 22:50:46.220629 systemd[1]: Started cri-containerd-edc6663f847b0b6ed9f36d850a6c8ec904280e10c7b09e2e0e1d3a9bcd255f3c.scope - libcontainer container edc6663f847b0b6ed9f36d850a6c8ec904280e10c7b09e2e0e1d3a9bcd255f3c. 
Aug 5 22:50:46.295430 containerd[1444]: time="2024-08-05T22:50:46.294677808Z" level=info msg="StartContainer for \"edc6663f847b0b6ed9f36d850a6c8ec904280e10c7b09e2e0e1d3a9bcd255f3c\" returns successfully" Aug 5 22:50:46.595517 containerd[1444]: time="2024-08-05T22:50:46.595337323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:50:46.597385 containerd[1444]: time="2024-08-05T22:50:46.597324982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Aug 5 22:50:46.599138 containerd[1444]: time="2024-08-05T22:50:46.598575829Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:50:46.604409 containerd[1444]: time="2024-08-05T22:50:46.604359600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:50:46.605138 containerd[1444]: time="2024-08-05T22:50:46.605099709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.930711636s" Aug 5 22:50:46.605256 containerd[1444]: time="2024-08-05T22:50:46.605218852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Aug 5 22:50:46.612976 containerd[1444]: 
time="2024-08-05T22:50:46.612931742Z" level=info msg="CreateContainer within sandbox \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:50:46.644026 containerd[1444]: time="2024-08-05T22:50:46.643976404Z" level=info msg="CreateContainer within sandbox \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"35945faeef38e100938a0e355ccfcc4e251cf2bc0fec60f2a824324dba4634a1\"" Aug 5 22:50:46.645212 containerd[1444]: time="2024-08-05T22:50:46.644938037Z" level=info msg="StartContainer for \"35945faeef38e100938a0e355ccfcc4e251cf2bc0fec60f2a824324dba4634a1\"" Aug 5 22:50:46.688627 systemd[1]: Started cri-containerd-35945faeef38e100938a0e355ccfcc4e251cf2bc0fec60f2a824324dba4634a1.scope - libcontainer container 35945faeef38e100938a0e355ccfcc4e251cf2bc0fec60f2a824324dba4634a1. Aug 5 22:50:46.738487 containerd[1444]: time="2024-08-05T22:50:46.738298738Z" level=info msg="StartContainer for \"35945faeef38e100938a0e355ccfcc4e251cf2bc0fec60f2a824324dba4634a1\" returns successfully" Aug 5 22:50:46.757286 systemd[1]: cri-containerd-35945faeef38e100938a0e355ccfcc4e251cf2bc0fec60f2a824324dba4634a1.scope: Deactivated successfully. Aug 5 22:50:46.790335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35945faeef38e100938a0e355ccfcc4e251cf2bc0fec60f2a824324dba4634a1-rootfs.mount: Deactivated successfully. 
Aug 5 22:50:46.873711 kubelet[2639]: I0805 22:50:46.873044 2639 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="905289e8-61f6-4179-9d83-eec9641096a7" path="/var/lib/kubelet/pods/905289e8-61f6-4179-9d83-eec9641096a7/volumes" Aug 5 22:50:46.891584 containerd[1444]: time="2024-08-05T22:50:46.891421666Z" level=info msg="shim disconnected" id=35945faeef38e100938a0e355ccfcc4e251cf2bc0fec60f2a824324dba4634a1 namespace=k8s.io Aug 5 22:50:46.891584 containerd[1444]: time="2024-08-05T22:50:46.891568602Z" level=warning msg="cleaning up after shim disconnected" id=35945faeef38e100938a0e355ccfcc4e251cf2bc0fec60f2a824324dba4634a1 namespace=k8s.io Aug 5 22:50:46.891863 containerd[1444]: time="2024-08-05T22:50:46.891592937Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:50:47.055231 containerd[1444]: time="2024-08-05T22:50:47.055142935Z" level=info msg="StopPodSandbox for \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\"" Aug 5 22:50:47.058927 containerd[1444]: time="2024-08-05T22:50:47.055229678Z" level=info msg="Container to stop \"35945faeef38e100938a0e355ccfcc4e251cf2bc0fec60f2a824324dba4634a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:50:47.064787 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086-shm.mount: Deactivated successfully. Aug 5 22:50:47.088171 systemd[1]: cri-containerd-cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086.scope: Deactivated successfully. Aug 5 22:50:47.130334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086-rootfs.mount: Deactivated successfully. 
Aug 5 22:50:47.137663 kubelet[2639]: I0805 22:50:47.136908 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-69d94dd9db-k56px" podStartSLOduration=7.136850782 podStartE2EDuration="7.136850782s" podCreationTimestamp="2024-08-05 22:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:50:47.103139889 +0000 UTC m=+28.412602882" watchObservedRunningTime="2024-08-05 22:50:47.136850782 +0000 UTC m=+28.446313715"
Aug 5 22:50:47.141591 containerd[1444]: time="2024-08-05T22:50:47.140307266Z" level=info msg="shim disconnected" id=cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086 namespace=k8s.io
Aug 5 22:50:47.141591 containerd[1444]: time="2024-08-05T22:50:47.141582178Z" level=warning msg="cleaning up after shim disconnected" id=cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086 namespace=k8s.io
Aug 5 22:50:47.141591 containerd[1444]: time="2024-08-05T22:50:47.141597116Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:50:47.157259 containerd[1444]: time="2024-08-05T22:50:47.157182785Z" level=info msg="TearDown network for sandbox \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\" successfully"
Aug 5 22:50:47.157259 containerd[1444]: time="2024-08-05T22:50:47.157229302Z" level=info msg="StopPodSandbox for \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\" returns successfully"
Aug 5 22:50:47.252342 kubelet[2639]: I0805 22:50:47.252260 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-flexvol-driver-host\") pod \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") "
Aug 5 22:50:47.252669 kubelet[2639]: I0805 22:50:47.252356 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-cni-net-dir\") pod \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") "
Aug 5 22:50:47.252669 kubelet[2639]: I0805 22:50:47.252406 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-lib-modules\") pod \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") "
Aug 5 22:50:47.252669 kubelet[2639]: I0805 22:50:47.252451 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-cni-bin-dir\") pod \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") "
Aug 5 22:50:47.252669 kubelet[2639]: I0805 22:50:47.252509 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "57b6c657-f75f-4a10-bb2e-4d7fc5301c06" (UID: "57b6c657-f75f-4a10-bb2e-4d7fc5301c06"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 5 22:50:47.252669 kubelet[2639]: I0805 22:50:47.252601 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "57b6c657-f75f-4a10-bb2e-4d7fc5301c06" (UID: "57b6c657-f75f-4a10-bb2e-4d7fc5301c06"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 5 22:50:47.252815 kubelet[2639]: I0805 22:50:47.252618 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-tigera-ca-bundle\") pod \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") "
Aug 5 22:50:47.252815 kubelet[2639]: I0805 22:50:47.252668 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-policysync\") pod \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") "
Aug 5 22:50:47.252815 kubelet[2639]: I0805 22:50:47.252715 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-var-run-calico\") pod \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") "
Aug 5 22:50:47.252815 kubelet[2639]: I0805 22:50:47.252760 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-var-lib-calico\") pod \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") "
Aug 5 22:50:47.252815 kubelet[2639]: I0805 22:50:47.252811 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-node-certs\") pod \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") "
Aug 5 22:50:47.252939 kubelet[2639]: I0805 22:50:47.252865 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2jjb\" (UniqueName: \"kubernetes.io/projected/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-kube-api-access-w2jjb\") pod \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") "
Aug 5 22:50:47.252939 kubelet[2639]: I0805 22:50:47.252920 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-cni-log-dir\") pod \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") "
Aug 5 22:50:47.252994 kubelet[2639]: I0805 22:50:47.252966 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-xtables-lock\") pod \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\" (UID: \"57b6c657-f75f-4a10-bb2e-4d7fc5301c06\") "
Aug 5 22:50:47.253375 kubelet[2639]: I0805 22:50:47.253035 2639 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-cni-net-dir\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\""
Aug 5 22:50:47.253375 kubelet[2639]: I0805 22:50:47.253040 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "57b6c657-f75f-4a10-bb2e-4d7fc5301c06" (UID: "57b6c657-f75f-4a10-bb2e-4d7fc5301c06"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 5 22:50:47.253375 kubelet[2639]: I0805 22:50:47.253078 2639 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-flexvol-driver-host\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\""
Aug 5 22:50:47.253375 kubelet[2639]: I0805 22:50:47.253090 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "57b6c657-f75f-4a10-bb2e-4d7fc5301c06" (UID: "57b6c657-f75f-4a10-bb2e-4d7fc5301c06"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 5 22:50:47.253375 kubelet[2639]: I0805 22:50:47.253133 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "57b6c657-f75f-4a10-bb2e-4d7fc5301c06" (UID: "57b6c657-f75f-4a10-bb2e-4d7fc5301c06"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 5 22:50:47.253541 kubelet[2639]: I0805 22:50:47.253144 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "57b6c657-f75f-4a10-bb2e-4d7fc5301c06" (UID: "57b6c657-f75f-4a10-bb2e-4d7fc5301c06"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 5 22:50:47.253541 kubelet[2639]: I0805 22:50:47.253203 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "57b6c657-f75f-4a10-bb2e-4d7fc5301c06" (UID: "57b6c657-f75f-4a10-bb2e-4d7fc5301c06"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 5 22:50:47.254087 kubelet[2639]: I0805 22:50:47.253823 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "57b6c657-f75f-4a10-bb2e-4d7fc5301c06" (UID: "57b6c657-f75f-4a10-bb2e-4d7fc5301c06"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 5 22:50:47.254087 kubelet[2639]: I0805 22:50:47.253861 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-policysync" (OuterVolumeSpecName: "policysync") pod "57b6c657-f75f-4a10-bb2e-4d7fc5301c06" (UID: "57b6c657-f75f-4a10-bb2e-4d7fc5301c06"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 5 22:50:47.255801 kubelet[2639]: I0805 22:50:47.255635 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "57b6c657-f75f-4a10-bb2e-4d7fc5301c06" (UID: "57b6c657-f75f-4a10-bb2e-4d7fc5301c06"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 5 22:50:47.258111 kubelet[2639]: I0805 22:50:47.258041 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-kube-api-access-w2jjb" (OuterVolumeSpecName: "kube-api-access-w2jjb") pod "57b6c657-f75f-4a10-bb2e-4d7fc5301c06" (UID: "57b6c657-f75f-4a10-bb2e-4d7fc5301c06"). InnerVolumeSpecName "kube-api-access-w2jjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 5 22:50:47.259143 systemd[1]: var-lib-kubelet-pods-57b6c657\x2df75f\x2d4a10\x2dbb2e\x2d4d7fc5301c06-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw2jjb.mount: Deactivated successfully.
Aug 5 22:50:47.261682 kubelet[2639]: I0805 22:50:47.261610 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-node-certs" (OuterVolumeSpecName: "node-certs") pod "57b6c657-f75f-4a10-bb2e-4d7fc5301c06" (UID: "57b6c657-f75f-4a10-bb2e-4d7fc5301c06"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 5 22:50:47.354329 kubelet[2639]: I0805 22:50:47.354245 2639 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-tigera-ca-bundle\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\""
Aug 5 22:50:47.354621 kubelet[2639]: I0805 22:50:47.354399 2639 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-policysync\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\""
Aug 5 22:50:47.354621 kubelet[2639]: I0805 22:50:47.354443 2639 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-var-run-calico\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\""
Aug 5 22:50:47.354621 kubelet[2639]: I0805 22:50:47.354517 2639 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-var-lib-calico\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\""
Aug 5 22:50:47.354621 kubelet[2639]: I0805 22:50:47.354551 2639 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-node-certs\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\""
Aug 5 22:50:47.354621 kubelet[2639]: I0805 22:50:47.354596 2639 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w2jjb\" (UniqueName: \"kubernetes.io/projected/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-kube-api-access-w2jjb\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\""
Aug 5 22:50:47.354621 kubelet[2639]: I0805 22:50:47.354626 2639 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-cni-log-dir\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\""
Aug 5 22:50:47.355185 kubelet[2639]: I0805 22:50:47.354657 2639 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-xtables-lock\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\""
Aug 5 22:50:47.355185 kubelet[2639]: I0805 22:50:47.354690 2639 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-lib-modules\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\""
Aug 5 22:50:47.355185 kubelet[2639]: I0805 22:50:47.354719 2639 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/57b6c657-f75f-4a10-bb2e-4d7fc5301c06-cni-bin-dir\") on node \"ci-4012-1-0-4-e6fc6d4d35.novalocal\" DevicePath \"\""
Aug 5 22:50:47.694503 systemd[1]: var-lib-kubelet-pods-57b6c657\x2df75f\x2d4a10\x2dbb2e\x2d4d7fc5301c06-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
Aug 5 22:50:47.849997 kubelet[2639]: E0805 22:50:47.849886 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trhv" podUID="a3054ae9-283f-4e4a-bf3e-fdb7b75b0214"
Aug 5 22:50:48.062600 kubelet[2639]: I0805 22:50:48.062349 2639 scope.go:117] "RemoveContainer" containerID="35945faeef38e100938a0e355ccfcc4e251cf2bc0fec60f2a824324dba4634a1"
Aug 5 22:50:48.072202 containerd[1444]: time="2024-08-05T22:50:48.071959698Z" level=info msg="RemoveContainer for \"35945faeef38e100938a0e355ccfcc4e251cf2bc0fec60f2a824324dba4634a1\""
Aug 5 22:50:48.075339 systemd[1]: Removed slice kubepods-besteffort-pod57b6c657_f75f_4a10_bb2e_4d7fc5301c06.slice - libcontainer container kubepods-besteffort-pod57b6c657_f75f_4a10_bb2e_4d7fc5301c06.slice.
Aug 5 22:50:48.085986 containerd[1444]: time="2024-08-05T22:50:48.085827183Z" level=info msg="RemoveContainer for \"35945faeef38e100938a0e355ccfcc4e251cf2bc0fec60f2a824324dba4634a1\" returns successfully"
Aug 5 22:50:48.160176 kubelet[2639]: I0805 22:50:48.158750 2639 topology_manager.go:215] "Topology Admit Handler" podUID="ea96b095-4ad9-4ac9-97e6-93b68ed677bf" podNamespace="calico-system" podName="calico-node-t8w54"
Aug 5 22:50:48.160176 kubelet[2639]: E0805 22:50:48.158814 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="57b6c657-f75f-4a10-bb2e-4d7fc5301c06" containerName="flexvol-driver"
Aug 5 22:50:48.160176 kubelet[2639]: I0805 22:50:48.158842 2639 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b6c657-f75f-4a10-bb2e-4d7fc5301c06" containerName="flexvol-driver"
Aug 5 22:50:48.173169 systemd[1]: Created slice kubepods-besteffort-podea96b095_4ad9_4ac9_97e6_93b68ed677bf.slice - libcontainer container kubepods-besteffort-podea96b095_4ad9_4ac9_97e6_93b68ed677bf.slice.
Aug 5 22:50:48.260204 kubelet[2639]: I0805 22:50:48.260147 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ea96b095-4ad9-4ac9-97e6-93b68ed677bf-cni-bin-dir\") pod \"calico-node-t8w54\" (UID: \"ea96b095-4ad9-4ac9-97e6-93b68ed677bf\") " pod="calico-system/calico-node-t8w54"
Aug 5 22:50:48.260339 kubelet[2639]: I0805 22:50:48.260218 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brh8t\" (UniqueName: \"kubernetes.io/projected/ea96b095-4ad9-4ac9-97e6-93b68ed677bf-kube-api-access-brh8t\") pod \"calico-node-t8w54\" (UID: \"ea96b095-4ad9-4ac9-97e6-93b68ed677bf\") " pod="calico-system/calico-node-t8w54"
Aug 5 22:50:48.260339 kubelet[2639]: I0805 22:50:48.260251 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea96b095-4ad9-4ac9-97e6-93b68ed677bf-tigera-ca-bundle\") pod \"calico-node-t8w54\" (UID: \"ea96b095-4ad9-4ac9-97e6-93b68ed677bf\") " pod="calico-system/calico-node-t8w54"
Aug 5 22:50:48.260339 kubelet[2639]: I0805 22:50:48.260284 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ea96b095-4ad9-4ac9-97e6-93b68ed677bf-node-certs\") pod \"calico-node-t8w54\" (UID: \"ea96b095-4ad9-4ac9-97e6-93b68ed677bf\") " pod="calico-system/calico-node-t8w54"
Aug 5 22:50:48.260339 kubelet[2639]: I0805 22:50:48.260316 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ea96b095-4ad9-4ac9-97e6-93b68ed677bf-var-run-calico\") pod \"calico-node-t8w54\" (UID: \"ea96b095-4ad9-4ac9-97e6-93b68ed677bf\") " pod="calico-system/calico-node-t8w54"
Aug 5 22:50:48.260472 kubelet[2639]: I0805 22:50:48.260345 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ea96b095-4ad9-4ac9-97e6-93b68ed677bf-flexvol-driver-host\") pod \"calico-node-t8w54\" (UID: \"ea96b095-4ad9-4ac9-97e6-93b68ed677bf\") " pod="calico-system/calico-node-t8w54"
Aug 5 22:50:48.260472 kubelet[2639]: I0805 22:50:48.260383 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea96b095-4ad9-4ac9-97e6-93b68ed677bf-xtables-lock\") pod \"calico-node-t8w54\" (UID: \"ea96b095-4ad9-4ac9-97e6-93b68ed677bf\") " pod="calico-system/calico-node-t8w54"
Aug 5 22:50:48.260472 kubelet[2639]: I0805 22:50:48.260419 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ea96b095-4ad9-4ac9-97e6-93b68ed677bf-policysync\") pod \"calico-node-t8w54\" (UID: \"ea96b095-4ad9-4ac9-97e6-93b68ed677bf\") " pod="calico-system/calico-node-t8w54"
Aug 5 22:50:48.260472 kubelet[2639]: I0805 22:50:48.260445 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ea96b095-4ad9-4ac9-97e6-93b68ed677bf-var-lib-calico\") pod \"calico-node-t8w54\" (UID: \"ea96b095-4ad9-4ac9-97e6-93b68ed677bf\") " pod="calico-system/calico-node-t8w54"
Aug 5 22:50:48.260617 kubelet[2639]: I0805 22:50:48.260486 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ea96b095-4ad9-4ac9-97e6-93b68ed677bf-cni-net-dir\") pod \"calico-node-t8w54\" (UID: \"ea96b095-4ad9-4ac9-97e6-93b68ed677bf\") " pod="calico-system/calico-node-t8w54"
Aug 5 22:50:48.260617 kubelet[2639]: I0805 22:50:48.260518 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ea96b095-4ad9-4ac9-97e6-93b68ed677bf-cni-log-dir\") pod \"calico-node-t8w54\" (UID: \"ea96b095-4ad9-4ac9-97e6-93b68ed677bf\") " pod="calico-system/calico-node-t8w54"
Aug 5 22:50:48.260617 kubelet[2639]: I0805 22:50:48.260543 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea96b095-4ad9-4ac9-97e6-93b68ed677bf-lib-modules\") pod \"calico-node-t8w54\" (UID: \"ea96b095-4ad9-4ac9-97e6-93b68ed677bf\") " pod="calico-system/calico-node-t8w54"
Aug 5 22:50:48.478217 containerd[1444]: time="2024-08-05T22:50:48.478117738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t8w54,Uid:ea96b095-4ad9-4ac9-97e6-93b68ed677bf,Namespace:calico-system,Attempt:0,}"
Aug 5 22:50:48.513903 containerd[1444]: time="2024-08-05T22:50:48.513480157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:50:48.514608 containerd[1444]: time="2024-08-05T22:50:48.513879345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:48.515108 containerd[1444]: time="2024-08-05T22:50:48.514529416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:50:48.515298 containerd[1444]: time="2024-08-05T22:50:48.514955845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:50:48.545640 systemd[1]: Started cri-containerd-c5e27998679cc488b4141d341d26896abed4b377a32016573413094a8e9a5e3c.scope - libcontainer container c5e27998679cc488b4141d341d26896abed4b377a32016573413094a8e9a5e3c.
Aug 5 22:50:48.599593 containerd[1444]: time="2024-08-05T22:50:48.599130396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t8w54,Uid:ea96b095-4ad9-4ac9-97e6-93b68ed677bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"c5e27998679cc488b4141d341d26896abed4b377a32016573413094a8e9a5e3c\""
Aug 5 22:50:48.606009 containerd[1444]: time="2024-08-05T22:50:48.604688303Z" level=info msg="CreateContainer within sandbox \"c5e27998679cc488b4141d341d26896abed4b377a32016573413094a8e9a5e3c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 5 22:50:48.637006 containerd[1444]: time="2024-08-05T22:50:48.636948773Z" level=info msg="CreateContainer within sandbox \"c5e27998679cc488b4141d341d26896abed4b377a32016573413094a8e9a5e3c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"964cee7871f44f7d3ac4e9ea9daf24a9228a1b8fc664de4590b41d4600aca3a6\""
Aug 5 22:50:48.640483 containerd[1444]: time="2024-08-05T22:50:48.638138555Z" level=info msg="StartContainer for \"964cee7871f44f7d3ac4e9ea9daf24a9228a1b8fc664de4590b41d4600aca3a6\""
Aug 5 22:50:48.707961 systemd[1]: Started cri-containerd-964cee7871f44f7d3ac4e9ea9daf24a9228a1b8fc664de4590b41d4600aca3a6.scope - libcontainer container 964cee7871f44f7d3ac4e9ea9daf24a9228a1b8fc664de4590b41d4600aca3a6.
Aug 5 22:50:48.777739 containerd[1444]: time="2024-08-05T22:50:48.777605064Z" level=info msg="StartContainer for \"964cee7871f44f7d3ac4e9ea9daf24a9228a1b8fc664de4590b41d4600aca3a6\" returns successfully"
Aug 5 22:50:48.793341 systemd[1]: cri-containerd-964cee7871f44f7d3ac4e9ea9daf24a9228a1b8fc664de4590b41d4600aca3a6.scope: Deactivated successfully.
Aug 5 22:50:48.829202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-964cee7871f44f7d3ac4e9ea9daf24a9228a1b8fc664de4590b41d4600aca3a6-rootfs.mount: Deactivated successfully.
Aug 5 22:50:48.853945 kubelet[2639]: I0805 22:50:48.853170 2639 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="57b6c657-f75f-4a10-bb2e-4d7fc5301c06" path="/var/lib/kubelet/pods/57b6c657-f75f-4a10-bb2e-4d7fc5301c06/volumes"
Aug 5 22:50:48.854537 containerd[1444]: time="2024-08-05T22:50:48.854255149Z" level=info msg="shim disconnected" id=964cee7871f44f7d3ac4e9ea9daf24a9228a1b8fc664de4590b41d4600aca3a6 namespace=k8s.io
Aug 5 22:50:48.854537 containerd[1444]: time="2024-08-05T22:50:48.854307717Z" level=warning msg="cleaning up after shim disconnected" id=964cee7871f44f7d3ac4e9ea9daf24a9228a1b8fc664de4590b41d4600aca3a6 namespace=k8s.io
Aug 5 22:50:48.854537 containerd[1444]: time="2024-08-05T22:50:48.854318548Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:50:49.067210 containerd[1444]: time="2024-08-05T22:50:49.066971681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Aug 5 22:50:49.850023 kubelet[2639]: E0805 22:50:49.849951 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trhv" podUID="a3054ae9-283f-4e4a-bf3e-fdb7b75b0214"
Aug 5 22:50:51.849452 kubelet[2639]: E0805 22:50:51.849317 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trhv" podUID="a3054ae9-283f-4e4a-bf3e-fdb7b75b0214"
Aug 5 22:50:53.849447 kubelet[2639]: E0805 22:50:53.849276 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trhv" podUID="a3054ae9-283f-4e4a-bf3e-fdb7b75b0214"
Aug 5 22:50:55.029030 containerd[1444]: time="2024-08-05T22:50:55.028833967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:50:55.031535 containerd[1444]: time="2024-08-05T22:50:55.031096662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850"
Aug 5 22:50:55.037299 containerd[1444]: time="2024-08-05T22:50:55.033141798Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:50:55.043865 containerd[1444]: time="2024-08-05T22:50:55.043697539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:50:55.046694 containerd[1444]: time="2024-08-05T22:50:55.046627554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.979597122s"
Aug 5 22:50:55.046945 containerd[1444]: time="2024-08-05T22:50:55.046903633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\""
Aug 5 22:50:55.053378 containerd[1444]: time="2024-08-05T22:50:55.053278711Z" level=info msg="CreateContainer within sandbox \"c5e27998679cc488b4141d341d26896abed4b377a32016573413094a8e9a5e3c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 5 22:50:55.164095 containerd[1444]: time="2024-08-05T22:50:55.163997138Z" level=info msg="CreateContainer within sandbox \"c5e27998679cc488b4141d341d26896abed4b377a32016573413094a8e9a5e3c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"54806dfd49f4749f1d079c39c0594384aafc6755c8b286e7b9cb23cf02c89ab8\""
Aug 5 22:50:55.166591 containerd[1444]: time="2024-08-05T22:50:55.166525711Z" level=info msg="StartContainer for \"54806dfd49f4749f1d079c39c0594384aafc6755c8b286e7b9cb23cf02c89ab8\""
Aug 5 22:50:55.286687 systemd[1]: run-containerd-runc-k8s.io-54806dfd49f4749f1d079c39c0594384aafc6755c8b286e7b9cb23cf02c89ab8-runc.WOpEKi.mount: Deactivated successfully.
Aug 5 22:50:55.297625 systemd[1]: Started cri-containerd-54806dfd49f4749f1d079c39c0594384aafc6755c8b286e7b9cb23cf02c89ab8.scope - libcontainer container 54806dfd49f4749f1d079c39c0594384aafc6755c8b286e7b9cb23cf02c89ab8.
Aug 5 22:50:55.331418 containerd[1444]: time="2024-08-05T22:50:55.331369207Z" level=info msg="StartContainer for \"54806dfd49f4749f1d079c39c0594384aafc6755c8b286e7b9cb23cf02c89ab8\" returns successfully"
Aug 5 22:50:55.849261 kubelet[2639]: E0805 22:50:55.849104 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trhv" podUID="a3054ae9-283f-4e4a-bf3e-fdb7b75b0214"
Aug 5 22:50:57.635586 systemd[1]: cri-containerd-54806dfd49f4749f1d079c39c0594384aafc6755c8b286e7b9cb23cf02c89ab8.scope: Deactivated successfully.
Aug 5 22:50:57.666931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54806dfd49f4749f1d079c39c0594384aafc6755c8b286e7b9cb23cf02c89ab8-rootfs.mount: Deactivated successfully.
Aug 5 22:50:57.681491 containerd[1444]: time="2024-08-05T22:50:57.681165392Z" level=info msg="shim disconnected" id=54806dfd49f4749f1d079c39c0594384aafc6755c8b286e7b9cb23cf02c89ab8 namespace=k8s.io
Aug 5 22:50:57.681491 containerd[1444]: time="2024-08-05T22:50:57.681244220Z" level=warning msg="cleaning up after shim disconnected" id=54806dfd49f4749f1d079c39c0594384aafc6755c8b286e7b9cb23cf02c89ab8 namespace=k8s.io
Aug 5 22:50:57.681491 containerd[1444]: time="2024-08-05T22:50:57.681254519Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:50:57.727305 kubelet[2639]: I0805 22:50:57.726720 2639 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Aug 5 22:50:57.756499 kubelet[2639]: I0805 22:50:57.754950 2639 topology_manager.go:215] "Topology Admit Handler" podUID="5365d583-874a-419c-9670-661f7a11e6f5" podNamespace="calico-system" podName="calico-kube-controllers-8b48f45f5-9rkmv"
Aug 5 22:50:57.762409 kubelet[2639]: I0805 22:50:57.762189 2639 topology_manager.go:215] "Topology Admit Handler" podUID="03e56f3f-28a0-40b8-af65-62461e1e06ab" podNamespace="kube-system" podName="coredns-76f75df574-4sl6k"
Aug 5 22:50:57.763852 kubelet[2639]: I0805 22:50:57.763775 2639 topology_manager.go:215] "Topology Admit Handler" podUID="2bed6644-fbd4-4e3b-8155-30ecff2fffc6" podNamespace="kube-system" podName="coredns-76f75df574-c78g6"
Aug 5 22:50:57.764867 systemd[1]: Created slice kubepods-besteffort-pod5365d583_874a_419c_9670_661f7a11e6f5.slice - libcontainer container kubepods-besteffort-pod5365d583_874a_419c_9670_661f7a11e6f5.slice.
Aug 5 22:50:57.776738 systemd[1]: Created slice kubepods-burstable-pod2bed6644_fbd4_4e3b_8155_30ecff2fffc6.slice - libcontainer container kubepods-burstable-pod2bed6644_fbd4_4e3b_8155_30ecff2fffc6.slice.
Aug 5 22:50:57.785793 systemd[1]: Created slice kubepods-burstable-pod03e56f3f_28a0_40b8_af65_62461e1e06ab.slice - libcontainer container kubepods-burstable-pod03e56f3f_28a0_40b8_af65_62461e1e06ab.slice.
Aug 5 22:50:57.836653 kubelet[2639]: I0805 22:50:57.836625 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9lqx\" (UniqueName: \"kubernetes.io/projected/5365d583-874a-419c-9670-661f7a11e6f5-kube-api-access-r9lqx\") pod \"calico-kube-controllers-8b48f45f5-9rkmv\" (UID: \"5365d583-874a-419c-9670-661f7a11e6f5\") " pod="calico-system/calico-kube-controllers-8b48f45f5-9rkmv"
Aug 5 22:50:57.836936 kubelet[2639]: I0805 22:50:57.836923 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5365d583-874a-419c-9670-661f7a11e6f5-tigera-ca-bundle\") pod \"calico-kube-controllers-8b48f45f5-9rkmv\" (UID: \"5365d583-874a-419c-9670-661f7a11e6f5\") " pod="calico-system/calico-kube-controllers-8b48f45f5-9rkmv"
Aug 5 22:50:57.854299 systemd[1]: Created slice kubepods-besteffort-poda3054ae9_283f_4e4a_bf3e_fdb7b75b0214.slice - libcontainer container kubepods-besteffort-poda3054ae9_283f_4e4a_bf3e_fdb7b75b0214.slice.
Aug 5 22:50:57.857180 containerd[1444]: time="2024-08-05T22:50:57.856905561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7trhv,Uid:a3054ae9-283f-4e4a-bf3e-fdb7b75b0214,Namespace:calico-system,Attempt:0,}"
Aug 5 22:50:57.938553 kubelet[2639]: I0805 22:50:57.937812 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bed6644-fbd4-4e3b-8155-30ecff2fffc6-config-volume\") pod \"coredns-76f75df574-c78g6\" (UID: \"2bed6644-fbd4-4e3b-8155-30ecff2fffc6\") " pod="kube-system/coredns-76f75df574-c78g6"
Aug 5 22:50:57.938553 kubelet[2639]: I0805 22:50:57.937974 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gdc8\" (UniqueName: \"kubernetes.io/projected/03e56f3f-28a0-40b8-af65-62461e1e06ab-kube-api-access-4gdc8\") pod \"coredns-76f75df574-4sl6k\" (UID: \"03e56f3f-28a0-40b8-af65-62461e1e06ab\") " pod="kube-system/coredns-76f75df574-4sl6k"
Aug 5 22:50:57.938553 kubelet[2639]: I0805 22:50:57.938085 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03e56f3f-28a0-40b8-af65-62461e1e06ab-config-volume\") pod \"coredns-76f75df574-4sl6k\" (UID: \"03e56f3f-28a0-40b8-af65-62461e1e06ab\") " pod="kube-system/coredns-76f75df574-4sl6k"
Aug 5 22:50:57.938553 kubelet[2639]: I0805 22:50:57.938191 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd2hm\" (UniqueName: \"kubernetes.io/projected/2bed6644-fbd4-4e3b-8155-30ecff2fffc6-kube-api-access-vd2hm\") pod \"coredns-76f75df574-c78g6\" (UID: \"2bed6644-fbd4-4e3b-8155-30ecff2fffc6\") " pod="kube-system/coredns-76f75df574-c78g6"
Aug 5 22:50:58.075422 containerd[1444]: time="2024-08-05T22:50:58.073814924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b48f45f5-9rkmv,Uid:5365d583-874a-419c-9670-661f7a11e6f5,Namespace:calico-system,Attempt:0,}"
Aug 5 22:50:58.081291 containerd[1444]: time="2024-08-05T22:50:58.081252624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c78g6,Uid:2bed6644-fbd4-4e3b-8155-30ecff2fffc6,Namespace:kube-system,Attempt:0,}"
Aug 5 22:50:58.091027 containerd[1444]: time="2024-08-05T22:50:58.090976374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4sl6k,Uid:03e56f3f-28a0-40b8-af65-62461e1e06ab,Namespace:kube-system,Attempt:0,}"
Aug 5 22:50:58.175168 containerd[1444]: time="2024-08-05T22:50:58.174197036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Aug 5 22:50:58.297327 containerd[1444]: time="2024-08-05T22:50:58.297175875Z" level=error msg="Failed to destroy network for sandbox \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:50:58.307796 containerd[1444]: time="2024-08-05T22:50:58.307733918Z" level=error msg="encountered an error cleaning up failed sandbox \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:50:58.307912 containerd[1444]: time="2024-08-05T22:50:58.307830600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7trhv,Uid:a3054ae9-283f-4e4a-bf3e-fdb7b75b0214,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:50:58.308357 kubelet[2639]: E0805 22:50:58.308335 2639 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:50:58.308516 kubelet[2639]: E0805 22:50:58.308502 2639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7trhv"
Aug 5 22:50:58.308607 kubelet[2639]: E0805 22:50:58.308598 2639 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7trhv"
Aug 5 22:50:58.308736 kubelet[2639]: E0805 22:50:58.308724 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7trhv_calico-system(a3054ae9-283f-4e4a-bf3e-fdb7b75b0214)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7trhv_calico-system(a3054ae9-283f-4e4a-bf3e-fdb7b75b0214)\\\": rpc error: code = Unknown desc = failed to setup
network for sandbox \\\"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7trhv" podUID="a3054ae9-283f-4e4a-bf3e-fdb7b75b0214" Aug 5 22:50:58.340355 containerd[1444]: time="2024-08-05T22:50:58.340290337Z" level=error msg="Failed to destroy network for sandbox \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:50:58.340751 containerd[1444]: time="2024-08-05T22:50:58.340713260Z" level=error msg="encountered an error cleaning up failed sandbox \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:50:58.340799 containerd[1444]: time="2024-08-05T22:50:58.340771439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4sl6k,Uid:03e56f3f-28a0-40b8-af65-62461e1e06ab,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:50:58.341030 kubelet[2639]: E0805 22:50:58.341005 2639 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:50:58.341155 kubelet[2639]: E0805 22:50:58.341143 2639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4sl6k" Aug 5 22:50:58.341256 kubelet[2639]: E0805 22:50:58.341245 2639 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4sl6k" Aug 5 22:50:58.341380 kubelet[2639]: E0805 22:50:58.341369 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-4sl6k_kube-system(03e56f3f-28a0-40b8-af65-62461e1e06ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4sl6k_kube-system(03e56f3f-28a0-40b8-af65-62461e1e06ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4sl6k" 
podUID="03e56f3f-28a0-40b8-af65-62461e1e06ab" Aug 5 22:50:58.359792 containerd[1444]: time="2024-08-05T22:50:58.359722196Z" level=error msg="Failed to destroy network for sandbox \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:50:58.360116 containerd[1444]: time="2024-08-05T22:50:58.360064929Z" level=error msg="encountered an error cleaning up failed sandbox \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:50:58.361549 containerd[1444]: time="2024-08-05T22:50:58.360134109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c78g6,Uid:2bed6644-fbd4-4e3b-8155-30ecff2fffc6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:50:58.361600 kubelet[2639]: E0805 22:50:58.360404 2639 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:50:58.361600 kubelet[2639]: E0805 22:50:58.360535 2639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-c78g6" Aug 5 22:50:58.361600 kubelet[2639]: E0805 22:50:58.360566 2639 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-c78g6" Aug 5 22:50:58.362425 kubelet[2639]: E0805 22:50:58.360651 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-c78g6_kube-system(2bed6644-fbd4-4e3b-8155-30ecff2fffc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-c78g6_kube-system(2bed6644-fbd4-4e3b-8155-30ecff2fffc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-c78g6" podUID="2bed6644-fbd4-4e3b-8155-30ecff2fffc6" Aug 5 22:50:58.366171 containerd[1444]: time="2024-08-05T22:50:58.366130256Z" level=error msg="Failed to destroy network for sandbox \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Aug 5 22:50:58.366736 containerd[1444]: time="2024-08-05T22:50:58.366662555Z" level=error msg="encountered an error cleaning up failed sandbox \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:50:58.366882 containerd[1444]: time="2024-08-05T22:50:58.366805774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b48f45f5-9rkmv,Uid:5365d583-874a-419c-9670-661f7a11e6f5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:50:58.367482 kubelet[2639]: E0805 22:50:58.367133 2639 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:50:58.367482 kubelet[2639]: E0805 22:50:58.367178 2639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-8b48f45f5-9rkmv" Aug 5 22:50:58.367482 kubelet[2639]: E0805 22:50:58.367202 2639 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b48f45f5-9rkmv" Aug 5 22:50:58.368442 kubelet[2639]: E0805 22:50:58.367258 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8b48f45f5-9rkmv_calico-system(5365d583-874a-419c-9670-661f7a11e6f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8b48f45f5-9rkmv_calico-system(5365d583-874a-419c-9670-661f7a11e6f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8b48f45f5-9rkmv" podUID="5365d583-874a-419c-9670-661f7a11e6f5" Aug 5 22:50:58.676800 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106-shm.mount: Deactivated successfully. 
Aug 5 22:50:59.165872 kubelet[2639]: I0805 22:50:59.165768 2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e"
Aug 5 22:50:59.178246 containerd[1444]: time="2024-08-05T22:50:59.175738123Z" level=info msg="StopPodSandbox for \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\""
Aug 5 22:50:59.178246 containerd[1444]: time="2024-08-05T22:50:59.177808215Z" level=info msg="Ensure that sandbox 19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e in task-service has been cleanup successfully"
Aug 5 22:50:59.179947 kubelet[2639]: I0805 22:50:59.176220 2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4"
Aug 5 22:50:59.188516 containerd[1444]: time="2024-08-05T22:50:59.187766274Z" level=info msg="StopPodSandbox for \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\""
Aug 5 22:50:59.188516 containerd[1444]: time="2024-08-05T22:50:59.188254791Z" level=info msg="Ensure that sandbox e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4 in task-service has been cleanup successfully"
Aug 5 22:50:59.198535 kubelet[2639]: I0805 22:50:59.197530 2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106"
Aug 5 22:50:59.201359 containerd[1444]: time="2024-08-05T22:50:59.201177390Z" level=info msg="StopPodSandbox for \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\""
Aug 5 22:50:59.203500 containerd[1444]: time="2024-08-05T22:50:59.203369551Z" level=info msg="Ensure that sandbox d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106 in task-service has been cleanup successfully"
Aug 5 22:50:59.213557 kubelet[2639]: I0805 22:50:59.211741 2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9"
Aug 5 22:50:59.219536 containerd[1444]: time="2024-08-05T22:50:59.219410871Z" level=info msg="StopPodSandbox for \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\""
Aug 5 22:50:59.226498 containerd[1444]: time="2024-08-05T22:50:59.226397025Z" level=info msg="Ensure that sandbox a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9 in task-service has been cleanup successfully"
Aug 5 22:50:59.291849 containerd[1444]: time="2024-08-05T22:50:59.291646577Z" level=error msg="StopPodSandbox for \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\" failed" error="failed to destroy network for sandbox \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:50:59.292149 kubelet[2639]: E0805 22:50:59.292125 2639 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9"
Aug 5 22:50:59.292285 kubelet[2639]: E0805 22:50:59.292273 2639 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9"}
Aug 5 22:50:59.292410 kubelet[2639]: E0805 22:50:59.292399 2639 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2bed6644-fbd4-4e3b-8155-30ecff2fffc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:50:59.293215 kubelet[2639]: E0805 22:50:59.293152 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2bed6644-fbd4-4e3b-8155-30ecff2fffc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-c78g6" podUID="2bed6644-fbd4-4e3b-8155-30ecff2fffc6"
Aug 5 22:50:59.299358 containerd[1444]: time="2024-08-05T22:50:59.299282731Z" level=error msg="StopPodSandbox for \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\" failed" error="failed to destroy network for sandbox \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:50:59.299841 kubelet[2639]: E0805 22:50:59.299556 2639 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e"
Aug 5 22:50:59.299841 kubelet[2639]: E0805 22:50:59.299602 2639 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e"}
Aug 5 22:50:59.299841 kubelet[2639]: E0805 22:50:59.299650 2639 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5365d583-874a-419c-9670-661f7a11e6f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:50:59.299841 kubelet[2639]: E0805 22:50:59.299694 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5365d583-874a-419c-9670-661f7a11e6f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8b48f45f5-9rkmv" podUID="5365d583-874a-419c-9670-661f7a11e6f5"
Aug 5 22:50:59.301027 kubelet[2639]: E0805 22:50:59.300745 2639 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106"
Aug 5 22:50:59.301027 kubelet[2639]: E0805 22:50:59.300782 2639 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106"}
Aug 5 22:50:59.301027 kubelet[2639]: E0805 22:50:59.300821 2639 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a3054ae9-283f-4e4a-bf3e-fdb7b75b0214\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:50:59.301027 kubelet[2639]: E0805 22:50:59.300853 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a3054ae9-283f-4e4a-bf3e-fdb7b75b0214\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7trhv" podUID="a3054ae9-283f-4e4a-bf3e-fdb7b75b0214"
Aug 5 22:50:59.301231 containerd[1444]: time="2024-08-05T22:50:59.300499843Z" level=error msg="StopPodSandbox for \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\" failed" error="failed to destroy network for sandbox \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:50:59.306498 containerd[1444]: time="2024-08-05T22:50:59.306200285Z" level=error msg="StopPodSandbox for \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\" failed" error="failed to destroy network for sandbox \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:50:59.306555 kubelet[2639]: E0805 22:50:59.306364 2639 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4"
Aug 5 22:50:59.306555 kubelet[2639]: E0805 22:50:59.306386 2639 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4"}
Aug 5 22:50:59.306555 kubelet[2639]: E0805 22:50:59.306420 2639 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03e56f3f-28a0-40b8-af65-62461e1e06ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:50:59.306555 kubelet[2639]: E0805 22:50:59.306446 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03e56f3f-28a0-40b8-af65-62461e1e06ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4sl6k" podUID="03e56f3f-28a0-40b8-af65-62461e1e06ab"
Aug 5 22:51:06.327195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1874539004.mount: Deactivated successfully.
Aug 5 22:51:06.475314 containerd[1444]: time="2024-08-05T22:51:06.475179765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:51:06.479053 containerd[1444]: time="2024-08-05T22:51:06.478965717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750"
Aug 5 22:51:06.493261 containerd[1444]: time="2024-08-05T22:51:06.493127900Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:51:06.500357 containerd[1444]: time="2024-08-05T22:51:06.500197279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:51:06.501966 containerd[1444]: time="2024-08-05T22:51:06.501011046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 8.326748847s"
Aug 5 22:51:06.501966 containerd[1444]: time="2024-08-05T22:51:06.501060999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\""
Aug 5 22:51:06.580089 containerd[1444]: time="2024-08-05T22:51:06.579865108Z" level=info msg="CreateContainer within sandbox \"c5e27998679cc488b4141d341d26896abed4b377a32016573413094a8e9a5e3c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Aug 5 22:51:06.679995 containerd[1444]: time="2024-08-05T22:51:06.679895080Z" level=info msg="CreateContainer within sandbox \"c5e27998679cc488b4141d341d26896abed4b377a32016573413094a8e9a5e3c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3f092259eb6601a5c5e21175832bc2541c88d56b5818432c54ef36e5ada3d402\""
Aug 5 22:51:06.682266 containerd[1444]: time="2024-08-05T22:51:06.680677758Z" level=info msg="StartContainer for \"3f092259eb6601a5c5e21175832bc2541c88d56b5818432c54ef36e5ada3d402\""
Aug 5 22:51:06.984429 systemd[1]: Started cri-containerd-3f092259eb6601a5c5e21175832bc2541c88d56b5818432c54ef36e5ada3d402.scope - libcontainer container 3f092259eb6601a5c5e21175832bc2541c88d56b5818432c54ef36e5ada3d402.
Aug 5 22:51:07.060817 containerd[1444]: time="2024-08-05T22:51:07.060681984Z" level=info msg="StartContainer for \"3f092259eb6601a5c5e21175832bc2541c88d56b5818432c54ef36e5ada3d402\" returns successfully"
Aug 5 22:51:07.172356 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Aug 5 22:51:07.173898 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Aug 5 22:51:07.448506 systemd[1]: run-containerd-runc-k8s.io-3f092259eb6601a5c5e21175832bc2541c88d56b5818432c54ef36e5ada3d402-runc.aK7HvC.mount: Deactivated successfully.
Aug 5 22:51:09.852520 containerd[1444]: time="2024-08-05T22:51:09.851652560Z" level=info msg="StopPodSandbox for \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\""
Aug 5 22:51:09.856498 containerd[1444]: time="2024-08-05T22:51:09.853861463Z" level=info msg="StopPodSandbox for \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\""
Aug 5 22:51:10.122058 kubelet[2639]: I0805 22:51:10.121939 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-t8w54" podStartSLOduration=4.554266768 podStartE2EDuration="21.989052792s" podCreationTimestamp="2024-08-05 22:50:48 +0000 UTC" firstStartedPulling="2024-08-05 22:50:49.066639609 +0000 UTC m=+30.376102572" lastFinishedPulling="2024-08-05 22:51:06.501425643 +0000 UTC m=+47.810888596" observedRunningTime="2024-08-05 22:51:07.262099202 +0000 UTC m=+48.571562155" watchObservedRunningTime="2024-08-05 22:51:09.989052792 +0000 UTC m=+51.298515735"
Aug 5 22:51:10.854442 containerd[1444]: time="2024-08-05T22:51:10.854005175Z" level=info msg="StopPodSandbox for \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\""
Aug 5 22:51:11.366999 kubelet[2639]: I0805 22:51:11.366949 2639 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 5 22:51:11.483262 containerd[1444]: 2024-08-05 22:51:09.987 [INFO][4111] k8s.go 608: Cleaning up netns ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106"
Aug 5 22:51:11.483262 containerd[1444]: 2024-08-05 22:51:09.988 [INFO][4111] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" iface="eth0" netns="/var/run/netns/cni-0a53dc14-ab16-7968-7068-880476c62860"
Aug 5 22:51:11.483262 containerd[1444]: 2024-08-05 22:51:09.989 [INFO][4111] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" iface="eth0" netns="/var/run/netns/cni-0a53dc14-ab16-7968-7068-880476c62860"
Aug 5 22:51:11.483262 containerd[1444]: 2024-08-05 22:51:09.989 [INFO][4111] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" iface="eth0" netns="/var/run/netns/cni-0a53dc14-ab16-7968-7068-880476c62860"
Aug 5 22:51:11.483262 containerd[1444]: 2024-08-05 22:51:09.989 [INFO][4111] k8s.go 615: Releasing IP address(es) ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106"
Aug 5 22:51:11.483262 containerd[1444]: 2024-08-05 22:51:09.989 [INFO][4111] utils.go 188: Calico CNI releasing IP address ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106"
Aug 5 22:51:11.483262 containerd[1444]: 2024-08-05 22:51:11.450 [INFO][4130] ipam_plugin.go 411: Releasing address using handleID ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" HandleID="k8s-pod-network.d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0"
Aug 5 22:51:11.483262 containerd[1444]: 2024-08-05 22:51:11.451 [INFO][4130] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:51:11.483262 containerd[1444]: 2024-08-05 22:51:11.453 [INFO][4130] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:51:11.483262 containerd[1444]: 2024-08-05 22:51:11.470 [WARNING][4130] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" HandleID="k8s-pod-network.d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0"
Aug 5 22:51:11.483262 containerd[1444]: 2024-08-05 22:51:11.470 [INFO][4130] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" HandleID="k8s-pod-network.d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0"
Aug 5 22:51:11.483262 containerd[1444]: 2024-08-05 22:51:11.479 [INFO][4130] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:51:11.483262 containerd[1444]: 2024-08-05 22:51:11.481 [INFO][4111] k8s.go 621: Teardown processing complete. ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106"
Aug 5 22:51:11.485407 containerd[1444]: time="2024-08-05T22:51:11.483487713Z" level=info msg="TearDown network for sandbox \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\" successfully"
Aug 5 22:51:11.485407 containerd[1444]: time="2024-08-05T22:51:11.483517488Z" level=info msg="StopPodSandbox for \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\" returns successfully"
Aug 5 22:51:11.488265 systemd[1]: run-netns-cni\x2d0a53dc14\x2dab16\x2d7968\x2d7068\x2d880476c62860.mount: Deactivated successfully.
Aug 5 22:51:11.503730 containerd[1444]: time="2024-08-05T22:51:11.503643107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7trhv,Uid:a3054ae9-283f-4e4a-bf3e-fdb7b75b0214,Namespace:calico-system,Attempt:1,}" Aug 5 22:51:11.529678 containerd[1444]: 2024-08-05 22:51:10.000 [INFO][4113] k8s.go 608: Cleaning up netns ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Aug 5 22:51:11.529678 containerd[1444]: 2024-08-05 22:51:10.001 [INFO][4113] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" iface="eth0" netns="/var/run/netns/cni-18c3202d-7dba-75c3-0e2d-85831e7126d3" Aug 5 22:51:11.529678 containerd[1444]: 2024-08-05 22:51:10.002 [INFO][4113] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" iface="eth0" netns="/var/run/netns/cni-18c3202d-7dba-75c3-0e2d-85831e7126d3" Aug 5 22:51:11.529678 containerd[1444]: 2024-08-05 22:51:10.005 [INFO][4113] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" iface="eth0" netns="/var/run/netns/cni-18c3202d-7dba-75c3-0e2d-85831e7126d3" Aug 5 22:51:11.529678 containerd[1444]: 2024-08-05 22:51:10.005 [INFO][4113] k8s.go 615: Releasing IP address(es) ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Aug 5 22:51:11.529678 containerd[1444]: 2024-08-05 22:51:10.005 [INFO][4113] utils.go 188: Calico CNI releasing IP address ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Aug 5 22:51:11.529678 containerd[1444]: 2024-08-05 22:51:11.450 [INFO][4135] ipam_plugin.go 411: Releasing address using handleID ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" HandleID="k8s-pod-network.e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:11.529678 containerd[1444]: 2024-08-05 22:51:11.453 [INFO][4135] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:11.529678 containerd[1444]: 2024-08-05 22:51:11.505 [INFO][4135] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:51:11.529678 containerd[1444]: 2024-08-05 22:51:11.515 [WARNING][4135] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" HandleID="k8s-pod-network.e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:11.529678 containerd[1444]: 2024-08-05 22:51:11.515 [INFO][4135] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" HandleID="k8s-pod-network.e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:11.529678 containerd[1444]: 2024-08-05 22:51:11.517 [INFO][4135] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:11.529678 containerd[1444]: 2024-08-05 22:51:11.522 [INFO][4113] k8s.go 621: Teardown processing complete. ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Aug 5 22:51:11.533305 containerd[1444]: time="2024-08-05T22:51:11.532514951Z" level=info msg="TearDown network for sandbox \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\" successfully" Aug 5 22:51:11.533305 containerd[1444]: time="2024-08-05T22:51:11.532572549Z" level=info msg="StopPodSandbox for \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\" returns successfully" Aug 5 22:51:11.533176 systemd[1]: run-netns-cni\x2d18c3202d\x2d7dba\x2d75c3\x2d0e2d\x2d85831e7126d3.mount: Deactivated successfully. 
Aug 5 22:51:11.538080 containerd[1444]: time="2024-08-05T22:51:11.535227055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4sl6k,Uid:03e56f3f-28a0-40b8-af65-62461e1e06ab,Namespace:kube-system,Attempt:1,}" Aug 5 22:51:11.555890 containerd[1444]: 2024-08-05 22:51:11.170 [INFO][4163] k8s.go 608: Cleaning up netns ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Aug 5 22:51:11.555890 containerd[1444]: 2024-08-05 22:51:11.171 [INFO][4163] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" iface="eth0" netns="/var/run/netns/cni-22d5a734-eddc-0c79-82be-7f23abeaee9f" Aug 5 22:51:11.555890 containerd[1444]: 2024-08-05 22:51:11.171 [INFO][4163] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" iface="eth0" netns="/var/run/netns/cni-22d5a734-eddc-0c79-82be-7f23abeaee9f" Aug 5 22:51:11.555890 containerd[1444]: 2024-08-05 22:51:11.172 [INFO][4163] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" iface="eth0" netns="/var/run/netns/cni-22d5a734-eddc-0c79-82be-7f23abeaee9f" Aug 5 22:51:11.555890 containerd[1444]: 2024-08-05 22:51:11.172 [INFO][4163] k8s.go 615: Releasing IP address(es) ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Aug 5 22:51:11.555890 containerd[1444]: 2024-08-05 22:51:11.172 [INFO][4163] utils.go 188: Calico CNI releasing IP address ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Aug 5 22:51:11.555890 containerd[1444]: 2024-08-05 22:51:11.450 [INFO][4169] ipam_plugin.go 411: Releasing address using handleID ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" HandleID="k8s-pod-network.19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:11.555890 containerd[1444]: 2024-08-05 22:51:11.452 [INFO][4169] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:11.555890 containerd[1444]: 2024-08-05 22:51:11.479 [INFO][4169] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:51:11.555890 containerd[1444]: 2024-08-05 22:51:11.497 [WARNING][4169] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" HandleID="k8s-pod-network.19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:11.555890 containerd[1444]: 2024-08-05 22:51:11.497 [INFO][4169] ipam_plugin.go 439: Releasing address using workloadID ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" HandleID="k8s-pod-network.19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:11.555890 containerd[1444]: 2024-08-05 22:51:11.505 [INFO][4169] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:11.555890 containerd[1444]: 2024-08-05 22:51:11.526 [INFO][4163] k8s.go 621: Teardown processing complete. ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Aug 5 22:51:11.558976 containerd[1444]: time="2024-08-05T22:51:11.558922829Z" level=info msg="TearDown network for sandbox \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\" successfully" Aug 5 22:51:11.559107 containerd[1444]: time="2024-08-05T22:51:11.559072710Z" level=info msg="StopPodSandbox for \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\" returns successfully" Aug 5 22:51:11.562681 systemd[1]: run-netns-cni\x2d22d5a734\x2deddc\x2d0c79\x2d82be\x2d7f23abeaee9f.mount: Deactivated successfully. 
Aug 5 22:51:11.564174 containerd[1444]: time="2024-08-05T22:51:11.564108487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b48f45f5-9rkmv,Uid:5365d583-874a-419c-9670-661f7a11e6f5,Namespace:calico-system,Attempt:1,}" Aug 5 22:51:12.065324 systemd-networkd[1359]: cali79b5b5a4ae5: Link UP Aug 5 22:51:12.067243 systemd-networkd[1359]: cali79b5b5a4ae5: Gained carrier Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:11.716 [INFO][4209] utils.go 100: File /var/lib/calico/mtu does not exist Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:11.769 [INFO][4209] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0 csi-node-driver- calico-system a3054ae9-283f-4e4a-bf3e-fdb7b75b0214 807 0 2024-08-05 22:50:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4012-1-0-4-e6fc6d4d35.novalocal csi-node-driver-7trhv eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali79b5b5a4ae5 [] []}} ContainerID="52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" Namespace="calico-system" Pod="csi-node-driver-7trhv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-" Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:11.769 [INFO][4209] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" Namespace="calico-system" Pod="csi-node-driver-7trhv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:11.875 [INFO][4253] ipam_plugin.go 224: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" HandleID="k8s-pod-network.52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:11.957 [INFO][4253] ipam_plugin.go 264: Auto assigning IP ContainerID="52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" HandleID="k8s-pod-network.52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000340d40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012-1-0-4-e6fc6d4d35.novalocal", "pod":"csi-node-driver-7trhv", "timestamp":"2024-08-05 22:51:11.875069038 +0000 UTC"}, Hostname:"ci-4012-1-0-4-e6fc6d4d35.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:11.958 [INFO][4253] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:11.958 [INFO][4253] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:11.958 [INFO][4253] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-1-0-4-e6fc6d4d35.novalocal' Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:11.963 [INFO][4253] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:12.008 [INFO][4253] ipam.go 372: Looking up existing affinities for host host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:12.015 [INFO][4253] ipam.go 489: Trying affinity for 192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:12.019 [INFO][4253] ipam.go 155: Attempting to load block cidr=192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:12.021 [INFO][4253] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:12.022 [INFO][4253] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:12.023 [INFO][4253] ipam.go 1685: Creating new handle: k8s-pod-network.52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056 Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:12.028 [INFO][4253] ipam.go 1203: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:12.036 [INFO][4253] 
ipam.go 1216: Successfully claimed IPs: [192.168.120.65/26] block=192.168.120.64/26 handle="k8s-pod-network.52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:12.036 [INFO][4253] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.65/26] handle="k8s-pod-network.52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:12.036 [INFO][4253] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:12.094229 containerd[1444]: 2024-08-05 22:51:12.036 [INFO][4253] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.120.65/26] IPv6=[] ContainerID="52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" HandleID="k8s-pod-network.52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" Aug 5 22:51:12.095257 containerd[1444]: 2024-08-05 22:51:12.040 [INFO][4209] k8s.go 386: Populated endpoint ContainerID="52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" Namespace="calico-system" Pod="csi-node-driver-7trhv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3054ae9-283f-4e4a-bf3e-fdb7b75b0214", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"", Pod:"csi-node-driver-7trhv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali79b5b5a4ae5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:12.095257 containerd[1444]: 2024-08-05 22:51:12.040 [INFO][4209] k8s.go 387: Calico CNI using IPs: [192.168.120.65/32] ContainerID="52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" Namespace="calico-system" Pod="csi-node-driver-7trhv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" Aug 5 22:51:12.095257 containerd[1444]: 2024-08-05 22:51:12.040 [INFO][4209] dataplane_linux.go 68: Setting the host side veth name to cali79b5b5a4ae5 ContainerID="52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" Namespace="calico-system" Pod="csi-node-driver-7trhv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" Aug 5 22:51:12.095257 containerd[1444]: 2024-08-05 22:51:12.068 [INFO][4209] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" Namespace="calico-system" Pod="csi-node-driver-7trhv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" Aug 5 22:51:12.095257 containerd[1444]: 2024-08-05 
22:51:12.072 [INFO][4209] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" Namespace="calico-system" Pod="csi-node-driver-7trhv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3054ae9-283f-4e4a-bf3e-fdb7b75b0214", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056", Pod:"csi-node-driver-7trhv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali79b5b5a4ae5", MAC:"d6:20:f0:96:44:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:12.095257 containerd[1444]: 2024-08-05 22:51:12.090 [INFO][4209] k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056" Namespace="calico-system" Pod="csi-node-driver-7trhv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" Aug 5 22:51:12.152009 systemd-networkd[1359]: cali1cd205b08f9: Link UP Aug 5 22:51:12.153557 systemd-networkd[1359]: cali1cd205b08f9: Gained carrier Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:11.817 [INFO][4214] utils.go 100: File /var/lib/calico/mtu does not exist Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:11.859 [INFO][4214] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0 calico-kube-controllers-8b48f45f5- calico-system 5365d583-874a-419c-9670-661f7a11e6f5 812 0 2024-08-05 22:50:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8b48f45f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4012-1-0-4-e6fc6d4d35.novalocal calico-kube-controllers-8b48f45f5-9rkmv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1cd205b08f9 [] []}} ContainerID="6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" Namespace="calico-system" Pod="calico-kube-controllers-8b48f45f5-9rkmv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-" Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:11.859 [INFO][4214] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" Namespace="calico-system" Pod="calico-kube-controllers-8b48f45f5-9rkmv" 
WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:11.941 [INFO][4265] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" HandleID="k8s-pod-network.6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:11.971 [INFO][4265] ipam_plugin.go 264: Auto assigning IP ContainerID="6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" HandleID="k8s-pod-network.6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050650), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012-1-0-4-e6fc6d4d35.novalocal", "pod":"calico-kube-controllers-8b48f45f5-9rkmv", "timestamp":"2024-08-05 22:51:11.941121813 +0000 UTC"}, Hostname:"ci-4012-1-0-4-e6fc6d4d35.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:11.972 [INFO][4265] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.036 [INFO][4265] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.037 [INFO][4265] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-1-0-4-e6fc6d4d35.novalocal' Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.040 [INFO][4265] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.050 [INFO][4265] ipam.go 372: Looking up existing affinities for host host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.073 [INFO][4265] ipam.go 489: Trying affinity for 192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.086 [INFO][4265] ipam.go 155: Attempting to load block cidr=192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.120 [INFO][4265] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.120 [INFO][4265] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.123 [INFO][4265] ipam.go 1685: Creating new handle: k8s-pod-network.6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921 Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.136 [INFO][4265] ipam.go 1203: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.143 [INFO][4265] 
ipam.go 1216: Successfully claimed IPs: [192.168.120.66/26] block=192.168.120.64/26 handle="k8s-pod-network.6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.144 [INFO][4265] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.66/26] handle="k8s-pod-network.6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.144 [INFO][4265] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:12.189644 containerd[1444]: 2024-08-05 22:51:12.144 [INFO][4265] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.120.66/26] IPv6=[] ContainerID="6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" HandleID="k8s-pod-network.6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:12.190341 containerd[1444]: 2024-08-05 22:51:12.148 [INFO][4214] k8s.go 386: Populated endpoint ContainerID="6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" Namespace="calico-system" Pod="calico-kube-controllers-8b48f45f5-9rkmv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0", GenerateName:"calico-kube-controllers-8b48f45f5-", Namespace:"calico-system", SelfLink:"", UID:"5365d583-874a-419c-9670-661f7a11e6f5", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8b48f45f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"", Pod:"calico-kube-controllers-8b48f45f5-9rkmv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1cd205b08f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:12.190341 containerd[1444]: 2024-08-05 22:51:12.148 [INFO][4214] k8s.go 387: Calico CNI using IPs: [192.168.120.66/32] ContainerID="6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" Namespace="calico-system" Pod="calico-kube-controllers-8b48f45f5-9rkmv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:12.190341 containerd[1444]: 2024-08-05 22:51:12.148 [INFO][4214] dataplane_linux.go 68: Setting the host side veth name to cali1cd205b08f9 ContainerID="6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" Namespace="calico-system" Pod="calico-kube-controllers-8b48f45f5-9rkmv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:12.190341 containerd[1444]: 2024-08-05 22:51:12.166 [INFO][4214] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" 
Namespace="calico-system" Pod="calico-kube-controllers-8b48f45f5-9rkmv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:12.190341 containerd[1444]: 2024-08-05 22:51:12.168 [INFO][4214] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" Namespace="calico-system" Pod="calico-kube-controllers-8b48f45f5-9rkmv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0", GenerateName:"calico-kube-controllers-8b48f45f5-", Namespace:"calico-system", SelfLink:"", UID:"5365d583-874a-419c-9670-661f7a11e6f5", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8b48f45f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921", Pod:"calico-kube-controllers-8b48f45f5-9rkmv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1cd205b08f9", MAC:"ce:19:21:f3:2f:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:12.190341 containerd[1444]: 2024-08-05 22:51:12.186 [INFO][4214] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921" Namespace="calico-system" Pod="calico-kube-controllers-8b48f45f5-9rkmv" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:12.192535 containerd[1444]: time="2024-08-05T22:51:12.192332714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:51:12.192535 containerd[1444]: time="2024-08-05T22:51:12.192432862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:51:12.193355 containerd[1444]: time="2024-08-05T22:51:12.193130582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:51:12.193355 containerd[1444]: time="2024-08-05T22:51:12.193170457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:51:12.236689 systemd[1]: Started cri-containerd-52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056.scope - libcontainer container 52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056. Aug 5 22:51:12.245915 containerd[1444]: time="2024-08-05T22:51:12.245519350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:51:12.246570 containerd[1444]: time="2024-08-05T22:51:12.245694159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:51:12.246570 containerd[1444]: time="2024-08-05T22:51:12.246351723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:51:12.246570 containerd[1444]: time="2024-08-05T22:51:12.246404863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:51:12.274626 systemd[1]: Started cri-containerd-6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921.scope - libcontainer container 6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921. Aug 5 22:51:12.282257 containerd[1444]: time="2024-08-05T22:51:12.282220898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7trhv,Uid:a3054ae9-283f-4e4a-bf3e-fdb7b75b0214,Namespace:calico-system,Attempt:1,} returns sandbox id \"52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056\"" Aug 5 22:51:12.339160 containerd[1444]: time="2024-08-05T22:51:12.336978634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 22:51:12.418126 systemd-networkd[1359]: cali017d4c76ac7: Link UP Aug 5 22:51:12.418318 systemd-networkd[1359]: cali017d4c76ac7: Gained carrier Aug 5 22:51:12.427882 containerd[1444]: time="2024-08-05T22:51:12.427737645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b48f45f5-9rkmv,Uid:5365d583-874a-419c-9670-661f7a11e6f5,Namespace:calico-system,Attempt:1,} returns sandbox id \"6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921\"" Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:11.820 [INFO][4233] utils.go 100: File /var/lib/calico/mtu does not exist Aug 5 
22:51:12.444752 containerd[1444]: 2024-08-05 22:51:11.864 [INFO][4233] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0 coredns-76f75df574- kube-system 03e56f3f-28a0-40b8-af65-62461e1e06ab 808 0 2024-08-05 22:50:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012-1-0-4-e6fc6d4d35.novalocal coredns-76f75df574-4sl6k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali017d4c76ac7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" Namespace="kube-system" Pod="coredns-76f75df574-4sl6k" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-" Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:11.864 [INFO][4233] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" Namespace="kube-system" Pod="coredns-76f75df574-4sl6k" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:11.996 [INFO][4271] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" HandleID="k8s-pod-network.e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.009 [INFO][4271] ipam_plugin.go 264: Auto assigning IP ContainerID="e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" HandleID="k8s-pod-network.e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" 
Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319660), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012-1-0-4-e6fc6d4d35.novalocal", "pod":"coredns-76f75df574-4sl6k", "timestamp":"2024-08-05 22:51:11.996652917 +0000 UTC"}, Hostname:"ci-4012-1-0-4-e6fc6d4d35.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.009 [INFO][4271] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.144 [INFO][4271] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.144 [INFO][4271] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-1-0-4-e6fc6d4d35.novalocal' Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.167 [INFO][4271] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.180 [INFO][4271] ipam.go 372: Looking up existing affinities for host host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.218 [INFO][4271] ipam.go 489: Trying affinity for 192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.306 [INFO][4271] ipam.go 155: Attempting to load block cidr=192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.312 [INFO][4271] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 
host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.312 [INFO][4271] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.326 [INFO][4271] ipam.go 1685: Creating new handle: k8s-pod-network.e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102 Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.332 [INFO][4271] ipam.go 1203: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.355 [INFO][4271] ipam.go 1216: Successfully claimed IPs: [192.168.120.67/26] block=192.168.120.64/26 handle="k8s-pod-network.e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.356 [INFO][4271] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.67/26] handle="k8s-pod-network.e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.357 [INFO][4271] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:51:12.444752 containerd[1444]: 2024-08-05 22:51:12.357 [INFO][4271] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.120.67/26] IPv6=[] ContainerID="e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" HandleID="k8s-pod-network.e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:12.447041 containerd[1444]: 2024-08-05 22:51:12.373 [INFO][4233] k8s.go 386: Populated endpoint ContainerID="e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" Namespace="kube-system" Pod="coredns-76f75df574-4sl6k" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"03e56f3f-28a0-40b8-af65-62461e1e06ab", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"", Pod:"coredns-76f75df574-4sl6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali017d4c76ac7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:12.447041 containerd[1444]: 2024-08-05 22:51:12.374 [INFO][4233] k8s.go 387: Calico CNI using IPs: [192.168.120.67/32] ContainerID="e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" Namespace="kube-system" Pod="coredns-76f75df574-4sl6k" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:12.447041 containerd[1444]: 2024-08-05 22:51:12.374 [INFO][4233] dataplane_linux.go 68: Setting the host side veth name to cali017d4c76ac7 ContainerID="e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" Namespace="kube-system" Pod="coredns-76f75df574-4sl6k" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:12.447041 containerd[1444]: 2024-08-05 22:51:12.380 [INFO][4233] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" Namespace="kube-system" Pod="coredns-76f75df574-4sl6k" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:12.447041 containerd[1444]: 2024-08-05 22:51:12.382 [INFO][4233] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" Namespace="kube-system" Pod="coredns-76f75df574-4sl6k" 
WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"03e56f3f-28a0-40b8-af65-62461e1e06ab", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102", Pod:"coredns-76f75df574-4sl6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali017d4c76ac7", MAC:"d6:f9:42:60:d2:3f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:12.447041 containerd[1444]: 
2024-08-05 22:51:12.439 [INFO][4233] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102" Namespace="kube-system" Pod="coredns-76f75df574-4sl6k" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:12.490659 containerd[1444]: time="2024-08-05T22:51:12.485396631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:51:12.490659 containerd[1444]: time="2024-08-05T22:51:12.485497430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:51:12.490659 containerd[1444]: time="2024-08-05T22:51:12.485534139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:51:12.490659 containerd[1444]: time="2024-08-05T22:51:12.485572181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:51:12.547384 systemd[1]: run-containerd-runc-k8s.io-e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102-runc.S9tvJg.mount: Deactivated successfully. Aug 5 22:51:12.560871 systemd[1]: Started cri-containerd-e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102.scope - libcontainer container e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102. 
Aug 5 22:51:12.627815 containerd[1444]: time="2024-08-05T22:51:12.627586039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4sl6k,Uid:03e56f3f-28a0-40b8-af65-62461e1e06ab,Namespace:kube-system,Attempt:1,} returns sandbox id \"e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102\"" Aug 5 22:51:12.760202 containerd[1444]: time="2024-08-05T22:51:12.760139801Z" level=info msg="CreateContainer within sandbox \"e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:51:12.947487 containerd[1444]: time="2024-08-05T22:51:12.946016436Z" level=info msg="CreateContainer within sandbox \"e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d3736687b2590082d6ea4be093183522ecff6add1e8e43a69df77ebf45adac24\"" Aug 5 22:51:12.948569 containerd[1444]: time="2024-08-05T22:51:12.947793023Z" level=info msg="StartContainer for \"d3736687b2590082d6ea4be093183522ecff6add1e8e43a69df77ebf45adac24\"" Aug 5 22:51:12.988676 systemd[1]: Started cri-containerd-d3736687b2590082d6ea4be093183522ecff6add1e8e43a69df77ebf45adac24.scope - libcontainer container d3736687b2590082d6ea4be093183522ecff6add1e8e43a69df77ebf45adac24. Aug 5 22:51:13.495196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3902532837.mount: Deactivated successfully. 
Aug 5 22:51:13.535666 containerd[1444]: time="2024-08-05T22:51:13.535606766Z" level=info msg="StartContainer for \"d3736687b2590082d6ea4be093183522ecff6add1e8e43a69df77ebf45adac24\" returns successfully" Aug 5 22:51:13.572013 systemd-networkd[1359]: vxlan.calico: Link UP Aug 5 22:51:13.572031 systemd-networkd[1359]: vxlan.calico: Gained carrier Aug 5 22:51:13.583049 systemd-networkd[1359]: cali017d4c76ac7: Gained IPv6LL Aug 5 22:51:13.710638 systemd-networkd[1359]: cali79b5b5a4ae5: Gained IPv6LL Aug 5 22:51:13.850422 containerd[1444]: time="2024-08-05T22:51:13.850362339Z" level=info msg="StopPodSandbox for \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\"" Aug 5 22:51:14.032062 containerd[1444]: 2024-08-05 22:51:13.954 [INFO][4567] k8s.go 608: Cleaning up netns ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Aug 5 22:51:14.032062 containerd[1444]: 2024-08-05 22:51:13.955 [INFO][4567] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" iface="eth0" netns="/var/run/netns/cni-b49dd241-9314-0510-3d91-3a0781a22f32" Aug 5 22:51:14.032062 containerd[1444]: 2024-08-05 22:51:13.955 [INFO][4567] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" iface="eth0" netns="/var/run/netns/cni-b49dd241-9314-0510-3d91-3a0781a22f32" Aug 5 22:51:14.032062 containerd[1444]: 2024-08-05 22:51:13.955 [INFO][4567] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" iface="eth0" netns="/var/run/netns/cni-b49dd241-9314-0510-3d91-3a0781a22f32" Aug 5 22:51:14.032062 containerd[1444]: 2024-08-05 22:51:13.955 [INFO][4567] k8s.go 615: Releasing IP address(es) ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Aug 5 22:51:14.032062 containerd[1444]: 2024-08-05 22:51:13.955 [INFO][4567] utils.go 188: Calico CNI releasing IP address ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Aug 5 22:51:14.032062 containerd[1444]: 2024-08-05 22:51:13.998 [INFO][4574] ipam_plugin.go 411: Releasing address using handleID ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" HandleID="k8s-pod-network.a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:14.032062 containerd[1444]: 2024-08-05 22:51:13.998 [INFO][4574] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:14.032062 containerd[1444]: 2024-08-05 22:51:13.998 [INFO][4574] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:51:14.032062 containerd[1444]: 2024-08-05 22:51:14.012 [WARNING][4574] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" HandleID="k8s-pod-network.a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:14.032062 containerd[1444]: 2024-08-05 22:51:14.012 [INFO][4574] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" HandleID="k8s-pod-network.a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:14.032062 containerd[1444]: 2024-08-05 22:51:14.014 [INFO][4574] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:14.032062 containerd[1444]: 2024-08-05 22:51:14.017 [INFO][4567] k8s.go 621: Teardown processing complete. ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Aug 5 22:51:14.035584 containerd[1444]: time="2024-08-05T22:51:14.035258337Z" level=info msg="TearDown network for sandbox \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\" successfully" Aug 5 22:51:14.035584 containerd[1444]: time="2024-08-05T22:51:14.035294575Z" level=info msg="StopPodSandbox for \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\" returns successfully" Aug 5 22:51:14.036502 systemd[1]: run-netns-cni\x2db49dd241\x2d9314\x2d0510\x2d3d91\x2d3a0781a22f32.mount: Deactivated successfully. 
Aug 5 22:51:14.156368 containerd[1444]: time="2024-08-05T22:51:14.135655403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c78g6,Uid:2bed6644-fbd4-4e3b-8155-30ecff2fffc6,Namespace:kube-system,Attempt:1,}" Aug 5 22:51:14.158885 systemd-networkd[1359]: cali1cd205b08f9: Gained IPv6LL Aug 5 22:51:14.338451 kubelet[2639]: I0805 22:51:14.334634 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4sl6k" podStartSLOduration=42.33458792 podStartE2EDuration="42.33458792s" podCreationTimestamp="2024-08-05 22:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:51:14.331194828 +0000 UTC m=+55.640657781" watchObservedRunningTime="2024-08-05 22:51:14.33458792 +0000 UTC m=+55.644050863" Aug 5 22:51:14.509868 systemd-networkd[1359]: califb056384ca7: Link UP Aug 5 22:51:14.511278 systemd-networkd[1359]: califb056384ca7: Gained carrier Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.394 [INFO][4581] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0 coredns-76f75df574- kube-system 2bed6644-fbd4-4e3b-8155-30ecff2fffc6 838 0 2024-08-05 22:50:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012-1-0-4-e6fc6d4d35.novalocal coredns-76f75df574-c78g6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califb056384ca7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" Namespace="kube-system" Pod="coredns-76f75df574-c78g6" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-" Aug 5 
22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.394 [INFO][4581] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" Namespace="kube-system" Pod="coredns-76f75df574-c78g6" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.446 [INFO][4595] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" HandleID="k8s-pod-network.c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.464 [INFO][4595] ipam_plugin.go 264: Auto assigning IP ContainerID="c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" HandleID="k8s-pod-network.c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000378fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012-1-0-4-e6fc6d4d35.novalocal", "pod":"coredns-76f75df574-c78g6", "timestamp":"2024-08-05 22:51:14.446816635 +0000 UTC"}, Hostname:"ci-4012-1-0-4-e6fc6d4d35.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.464 [INFO][4595] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.465 [INFO][4595] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.465 [INFO][4595] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-1-0-4-e6fc6d4d35.novalocal' Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.467 [INFO][4595] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.474 [INFO][4595] ipam.go 372: Looking up existing affinities for host host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.481 [INFO][4595] ipam.go 489: Trying affinity for 192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.483 [INFO][4595] ipam.go 155: Attempting to load block cidr=192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.485 [INFO][4595] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.485 [INFO][4595] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.487 [INFO][4595] ipam.go 1685: Creating new handle: k8s-pod-network.c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793 Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.493 [INFO][4595] ipam.go 1203: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.500 [INFO][4595] 
ipam.go 1216: Successfully claimed IPs: [192.168.120.68/26] block=192.168.120.64/26 handle="k8s-pod-network.c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.502 [INFO][4595] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.68/26] handle="k8s-pod-network.c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.502 [INFO][4595] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:14.533966 containerd[1444]: 2024-08-05 22:51:14.502 [INFO][4595] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.120.68/26] IPv6=[] ContainerID="c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" HandleID="k8s-pod-network.c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:14.539782 containerd[1444]: 2024-08-05 22:51:14.504 [INFO][4581] k8s.go 386: Populated endpoint ContainerID="c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" Namespace="kube-system" Pod="coredns-76f75df574-c78g6" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2bed6644-fbd4-4e3b-8155-30ecff2fffc6", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"", Pod:"coredns-76f75df574-c78g6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb056384ca7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:14.539782 containerd[1444]: 2024-08-05 22:51:14.504 [INFO][4581] k8s.go 387: Calico CNI using IPs: [192.168.120.68/32] ContainerID="c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" Namespace="kube-system" Pod="coredns-76f75df574-c78g6" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:14.539782 containerd[1444]: 2024-08-05 22:51:14.504 [INFO][4581] dataplane_linux.go 68: Setting the host side veth name to califb056384ca7 ContainerID="c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" Namespace="kube-system" Pod="coredns-76f75df574-c78g6" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:14.539782 containerd[1444]: 2024-08-05 22:51:14.510 
[INFO][4581] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" Namespace="kube-system" Pod="coredns-76f75df574-c78g6" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:14.539782 containerd[1444]: 2024-08-05 22:51:14.512 [INFO][4581] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" Namespace="kube-system" Pod="coredns-76f75df574-c78g6" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2bed6644-fbd4-4e3b-8155-30ecff2fffc6", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793", Pod:"coredns-76f75df574-c78g6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"califb056384ca7", MAC:"52:ba:39:70:fe:bd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:14.539782 containerd[1444]: 2024-08-05 22:51:14.529 [INFO][4581] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793" Namespace="kube-system" Pod="coredns-76f75df574-c78g6" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:14.589534 containerd[1444]: time="2024-08-05T22:51:14.588164893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:51:14.589762 containerd[1444]: time="2024-08-05T22:51:14.589628542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:51:14.589762 containerd[1444]: time="2024-08-05T22:51:14.589654561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:51:14.589762 containerd[1444]: time="2024-08-05T22:51:14.589668157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:51:14.640243 systemd[1]: run-containerd-runc-k8s.io-c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793-runc.k73RWF.mount: Deactivated successfully. 
Aug 5 22:51:14.652354 systemd[1]: Started cri-containerd-c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793.scope - libcontainer container c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793. Aug 5 22:51:14.749119 containerd[1444]: time="2024-08-05T22:51:14.749060731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c78g6,Uid:2bed6644-fbd4-4e3b-8155-30ecff2fffc6,Namespace:kube-system,Attempt:1,} returns sandbox id \"c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793\"" Aug 5 22:51:14.754557 containerd[1444]: time="2024-08-05T22:51:14.754502711Z" level=info msg="CreateContainer within sandbox \"c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:51:14.864315 containerd[1444]: time="2024-08-05T22:51:14.864250857Z" level=info msg="CreateContainer within sandbox \"c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1987219d05b1b84fec7e274b35e2b1214bb765d6b35bd51f9d2f8875166e291\"" Aug 5 22:51:14.866511 containerd[1444]: time="2024-08-05T22:51:14.866057020Z" level=info msg="StartContainer for \"b1987219d05b1b84fec7e274b35e2b1214bb765d6b35bd51f9d2f8875166e291\"" Aug 5 22:51:14.900644 systemd[1]: Started cri-containerd-b1987219d05b1b84fec7e274b35e2b1214bb765d6b35bd51f9d2f8875166e291.scope - libcontainer container b1987219d05b1b84fec7e274b35e2b1214bb765d6b35bd51f9d2f8875166e291. 
Aug 5 22:51:14.951871 containerd[1444]: time="2024-08-05T22:51:14.951403481Z" level=info msg="StartContainer for \"b1987219d05b1b84fec7e274b35e2b1214bb765d6b35bd51f9d2f8875166e291\" returns successfully" Aug 5 22:51:15.119102 systemd-networkd[1359]: vxlan.calico: Gained IPv6LL Aug 5 22:51:15.240018 containerd[1444]: time="2024-08-05T22:51:15.239412971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:51:15.240842 containerd[1444]: time="2024-08-05T22:51:15.240803542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Aug 5 22:51:15.243351 containerd[1444]: time="2024-08-05T22:51:15.243284501Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:51:15.254510 containerd[1444]: time="2024-08-05T22:51:15.253764501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:51:15.254997 containerd[1444]: time="2024-08-05T22:51:15.254890927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.917865653s" Aug 5 22:51:15.254997 containerd[1444]: time="2024-08-05T22:51:15.254943896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Aug 5 22:51:15.258506 containerd[1444]: time="2024-08-05T22:51:15.258449739Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 22:51:15.260772 containerd[1444]: time="2024-08-05T22:51:15.260254929Z" level=info msg="CreateContainer within sandbox \"52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 22:51:15.309759 containerd[1444]: time="2024-08-05T22:51:15.309566098Z" level=info msg="CreateContainer within sandbox \"52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f962a0d9d09f7f885017e80fbfefdb18a0ada32d531a3b1824eb65316ec221fc\"" Aug 5 22:51:15.310488 containerd[1444]: time="2024-08-05T22:51:15.310438105Z" level=info msg="StartContainer for \"f962a0d9d09f7f885017e80fbfefdb18a0ada32d531a3b1824eb65316ec221fc\"" Aug 5 22:51:15.369643 systemd[1]: Started cri-containerd-f962a0d9d09f7f885017e80fbfefdb18a0ada32d531a3b1824eb65316ec221fc.scope - libcontainer container f962a0d9d09f7f885017e80fbfefdb18a0ada32d531a3b1824eb65316ec221fc. 
Aug 5 22:51:15.438401 kubelet[2639]: I0805 22:51:15.437321 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-c78g6" podStartSLOduration=43.437275525 podStartE2EDuration="43.437275525s" podCreationTimestamp="2024-08-05 22:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:51:15.382834434 +0000 UTC m=+56.692297377" watchObservedRunningTime="2024-08-05 22:51:15.437275525 +0000 UTC m=+56.746738458" Aug 5 22:51:15.488110 containerd[1444]: time="2024-08-05T22:51:15.488020456Z" level=info msg="StartContainer for \"f962a0d9d09f7f885017e80fbfefdb18a0ada32d531a3b1824eb65316ec221fc\" returns successfully" Aug 5 22:51:16.078733 systemd-networkd[1359]: califb056384ca7: Gained IPv6LL Aug 5 22:51:18.649392 containerd[1444]: time="2024-08-05T22:51:18.648353677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:51:18.649392 containerd[1444]: time="2024-08-05T22:51:18.649344988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Aug 5 22:51:18.650654 containerd[1444]: time="2024-08-05T22:51:18.650626565Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:51:18.653357 containerd[1444]: time="2024-08-05T22:51:18.653331203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:51:18.654189 containerd[1444]: time="2024-08-05T22:51:18.654140472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id 
\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.395480048s" Aug 5 22:51:18.654248 containerd[1444]: time="2024-08-05T22:51:18.654188582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Aug 5 22:51:18.656673 containerd[1444]: time="2024-08-05T22:51:18.656645837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 22:51:18.688629 containerd[1444]: time="2024-08-05T22:51:18.688223188Z" level=info msg="CreateContainer within sandbox \"6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 22:51:18.716343 containerd[1444]: time="2024-08-05T22:51:18.716292062Z" level=info msg="CreateContainer within sandbox \"6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"acd52926ebc98479e5d41951d0a60b41c65dfa7cc75fbc7dc453e16df0232121\"" Aug 5 22:51:18.717410 containerd[1444]: time="2024-08-05T22:51:18.717364395Z" level=info msg="StartContainer for \"acd52926ebc98479e5d41951d0a60b41c65dfa7cc75fbc7dc453e16df0232121\"" Aug 5 22:51:18.761949 systemd[1]: Started cri-containerd-acd52926ebc98479e5d41951d0a60b41c65dfa7cc75fbc7dc453e16df0232121.scope - libcontainer container acd52926ebc98479e5d41951d0a60b41c65dfa7cc75fbc7dc453e16df0232121. 
Aug 5 22:51:18.819427 containerd[1444]: time="2024-08-05T22:51:18.818581295Z" level=info msg="StartContainer for \"acd52926ebc98479e5d41951d0a60b41c65dfa7cc75fbc7dc453e16df0232121\" returns successfully" Aug 5 22:51:18.875425 containerd[1444]: time="2024-08-05T22:51:18.873801741Z" level=info msg="StopPodSandbox for \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\"" Aug 5 22:51:19.049921 containerd[1444]: 2024-08-05 22:51:18.994 [WARNING][4798] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0", GenerateName:"calico-kube-controllers-8b48f45f5-", Namespace:"calico-system", SelfLink:"", UID:"5365d583-874a-419c-9670-661f7a11e6f5", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8b48f45f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921", Pod:"calico-kube-controllers-8b48f45f5-9rkmv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.66/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1cd205b08f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:19.049921 containerd[1444]: 2024-08-05 22:51:18.996 [INFO][4798] k8s.go 608: Cleaning up netns ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Aug 5 22:51:19.049921 containerd[1444]: 2024-08-05 22:51:18.997 [INFO][4798] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" iface="eth0" netns="" Aug 5 22:51:19.049921 containerd[1444]: 2024-08-05 22:51:18.997 [INFO][4798] k8s.go 615: Releasing IP address(es) ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Aug 5 22:51:19.049921 containerd[1444]: 2024-08-05 22:51:18.997 [INFO][4798] utils.go 188: Calico CNI releasing IP address ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Aug 5 22:51:19.049921 containerd[1444]: 2024-08-05 22:51:19.034 [INFO][4806] ipam_plugin.go 411: Releasing address using handleID ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" HandleID="k8s-pod-network.19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:19.049921 containerd[1444]: 2024-08-05 22:51:19.034 [INFO][4806] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:19.049921 containerd[1444]: 2024-08-05 22:51:19.034 [INFO][4806] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:51:19.049921 containerd[1444]: 2024-08-05 22:51:19.043 [WARNING][4806] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" HandleID="k8s-pod-network.19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:19.049921 containerd[1444]: 2024-08-05 22:51:19.043 [INFO][4806] ipam_plugin.go 439: Releasing address using workloadID ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" HandleID="k8s-pod-network.19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:19.049921 containerd[1444]: 2024-08-05 22:51:19.045 [INFO][4806] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:19.049921 containerd[1444]: 2024-08-05 22:51:19.047 [INFO][4798] k8s.go 621: Teardown processing complete. ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Aug 5 22:51:19.052421 containerd[1444]: time="2024-08-05T22:51:19.052364727Z" level=info msg="TearDown network for sandbox \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\" successfully" Aug 5 22:51:19.052517 containerd[1444]: time="2024-08-05T22:51:19.052407647Z" level=info msg="StopPodSandbox for \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\" returns successfully" Aug 5 22:51:19.058492 containerd[1444]: time="2024-08-05T22:51:19.057973037Z" level=info msg="RemovePodSandbox for \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\"" Aug 5 22:51:19.060880 containerd[1444]: time="2024-08-05T22:51:19.058605985Z" level=info msg="Forcibly stopping sandbox \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\"" Aug 5 22:51:19.166196 containerd[1444]: 2024-08-05 22:51:19.117 [WARNING][4824] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0", GenerateName:"calico-kube-controllers-8b48f45f5-", Namespace:"calico-system", SelfLink:"", UID:"5365d583-874a-419c-9670-661f7a11e6f5", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8b48f45f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"6d800502fae0cfeb8f7bc531c6bc52051bece6df3aa0faf10efd00a9eb7ca921", Pod:"calico-kube-controllers-8b48f45f5-9rkmv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1cd205b08f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:19.166196 containerd[1444]: 2024-08-05 22:51:19.117 [INFO][4824] k8s.go 608: Cleaning up netns ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Aug 5 22:51:19.166196 containerd[1444]: 2024-08-05 22:51:19.117 [INFO][4824] dataplane_linux.go 526: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" iface="eth0" netns="" Aug 5 22:51:19.166196 containerd[1444]: 2024-08-05 22:51:19.117 [INFO][4824] k8s.go 615: Releasing IP address(es) ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Aug 5 22:51:19.166196 containerd[1444]: 2024-08-05 22:51:19.117 [INFO][4824] utils.go 188: Calico CNI releasing IP address ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Aug 5 22:51:19.166196 containerd[1444]: 2024-08-05 22:51:19.148 [INFO][4830] ipam_plugin.go 411: Releasing address using handleID ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" HandleID="k8s-pod-network.19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:19.166196 containerd[1444]: 2024-08-05 22:51:19.148 [INFO][4830] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:19.166196 containerd[1444]: 2024-08-05 22:51:19.149 [INFO][4830] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:51:19.166196 containerd[1444]: 2024-08-05 22:51:19.159 [WARNING][4830] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" HandleID="k8s-pod-network.19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:19.166196 containerd[1444]: 2024-08-05 22:51:19.159 [INFO][4830] ipam_plugin.go 439: Releasing address using workloadID ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" HandleID="k8s-pod-network.19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--kube--controllers--8b48f45f5--9rkmv-eth0" Aug 5 22:51:19.166196 containerd[1444]: 2024-08-05 22:51:19.161 [INFO][4830] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:19.166196 containerd[1444]: 2024-08-05 22:51:19.163 [INFO][4824] k8s.go 621: Teardown processing complete. ContainerID="19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e" Aug 5 22:51:19.168667 containerd[1444]: time="2024-08-05T22:51:19.166581260Z" level=info msg="TearDown network for sandbox \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\" successfully" Aug 5 22:51:19.183158 containerd[1444]: time="2024-08-05T22:51:19.183093834Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:51:19.183336 containerd[1444]: time="2024-08-05T22:51:19.183187560Z" level=info msg="RemovePodSandbox \"19d22067b680726c5c3286618ffbdfc22f55fb83237fc901529582162a51c71e\" returns successfully" Aug 5 22:51:19.184576 containerd[1444]: time="2024-08-05T22:51:19.184304737Z" level=info msg="StopPodSandbox for \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\"" Aug 5 22:51:19.184576 containerd[1444]: time="2024-08-05T22:51:19.184439229Z" level=info msg="TearDown network for sandbox \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\" successfully" Aug 5 22:51:19.184576 containerd[1444]: time="2024-08-05T22:51:19.184496537Z" level=info msg="StopPodSandbox for \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\" returns successfully" Aug 5 22:51:19.184970 containerd[1444]: time="2024-08-05T22:51:19.184937304Z" level=info msg="RemovePodSandbox for \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\"" Aug 5 22:51:19.185029 containerd[1444]: time="2024-08-05T22:51:19.184973282Z" level=info msg="Forcibly stopping sandbox \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\"" Aug 5 22:51:19.185130 containerd[1444]: time="2024-08-05T22:51:19.185059014Z" level=info msg="TearDown network for sandbox \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\" successfully" Aug 5 22:51:19.191290 containerd[1444]: time="2024-08-05T22:51:19.191222696Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:51:19.191404 containerd[1444]: time="2024-08-05T22:51:19.191302226Z" level=info msg="RemovePodSandbox \"1ec415744433471b48e891dca30ec57045df33e072a395562dbe62847be382ec\" returns successfully" Aug 5 22:51:19.192327 containerd[1444]: time="2024-08-05T22:51:19.192290972Z" level=info msg="StopPodSandbox for \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\"" Aug 5 22:51:19.329700 containerd[1444]: 2024-08-05 22:51:19.258 [WARNING][4849] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"03e56f3f-28a0-40b8-af65-62461e1e06ab", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102", Pod:"coredns-76f75df574-4sl6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali017d4c76ac7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:19.329700 containerd[1444]: 2024-08-05 22:51:19.259 [INFO][4849] k8s.go 608: Cleaning up netns ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Aug 5 22:51:19.329700 containerd[1444]: 2024-08-05 22:51:19.259 [INFO][4849] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" iface="eth0" netns="" Aug 5 22:51:19.329700 containerd[1444]: 2024-08-05 22:51:19.259 [INFO][4849] k8s.go 615: Releasing IP address(es) ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Aug 5 22:51:19.329700 containerd[1444]: 2024-08-05 22:51:19.259 [INFO][4849] utils.go 188: Calico CNI releasing IP address ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Aug 5 22:51:19.329700 containerd[1444]: 2024-08-05 22:51:19.301 [INFO][4855] ipam_plugin.go 411: Releasing address using handleID ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" HandleID="k8s-pod-network.e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:19.329700 containerd[1444]: 2024-08-05 22:51:19.301 [INFO][4855] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:19.329700 containerd[1444]: 2024-08-05 22:51:19.301 [INFO][4855] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:51:19.329700 containerd[1444]: 2024-08-05 22:51:19.317 [WARNING][4855] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" HandleID="k8s-pod-network.e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:19.329700 containerd[1444]: 2024-08-05 22:51:19.317 [INFO][4855] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" HandleID="k8s-pod-network.e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:19.329700 containerd[1444]: 2024-08-05 22:51:19.320 [INFO][4855] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:19.329700 containerd[1444]: 2024-08-05 22:51:19.325 [INFO][4849] k8s.go 621: Teardown processing complete. 
ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Aug 5 22:51:19.332614 containerd[1444]: time="2024-08-05T22:51:19.331079500Z" level=info msg="TearDown network for sandbox \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\" successfully" Aug 5 22:51:19.332614 containerd[1444]: time="2024-08-05T22:51:19.331117241Z" level=info msg="StopPodSandbox for \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\" returns successfully" Aug 5 22:51:19.332614 containerd[1444]: time="2024-08-05T22:51:19.332220173Z" level=info msg="RemovePodSandbox for \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\"" Aug 5 22:51:19.332614 containerd[1444]: time="2024-08-05T22:51:19.332249357Z" level=info msg="Forcibly stopping sandbox \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\"" Aug 5 22:51:19.376881 kubelet[2639]: I0805 22:51:19.376653 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8b48f45f5-9rkmv" podStartSLOduration=32.153129541 podStartE2EDuration="38.376023438s" podCreationTimestamp="2024-08-05 22:50:41 +0000 UTC" firstStartedPulling="2024-08-05 22:51:12.432207889 +0000 UTC m=+53.741670833" lastFinishedPulling="2024-08-05 22:51:18.655101797 +0000 UTC m=+59.964564730" observedRunningTime="2024-08-05 22:51:19.371604289 +0000 UTC m=+60.681067232" watchObservedRunningTime="2024-08-05 22:51:19.376023438 +0000 UTC m=+60.685486371" Aug 5 22:51:19.479822 containerd[1444]: 2024-08-05 22:51:19.420 [WARNING][4877] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"03e56f3f-28a0-40b8-af65-62461e1e06ab", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"e33894a59fb934a8ded7c9f6f686b3c7c2effa1beb4db09c8526cef00aca1102", Pod:"coredns-76f75df574-4sl6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali017d4c76ac7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:19.479822 containerd[1444]: 2024-08-05 22:51:19.420 
[INFO][4877] k8s.go 608: Cleaning up netns ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Aug 5 22:51:19.479822 containerd[1444]: 2024-08-05 22:51:19.420 [INFO][4877] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" iface="eth0" netns="" Aug 5 22:51:19.479822 containerd[1444]: 2024-08-05 22:51:19.420 [INFO][4877] k8s.go 615: Releasing IP address(es) ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Aug 5 22:51:19.479822 containerd[1444]: 2024-08-05 22:51:19.420 [INFO][4877] utils.go 188: Calico CNI releasing IP address ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Aug 5 22:51:19.479822 containerd[1444]: 2024-08-05 22:51:19.461 [INFO][4897] ipam_plugin.go 411: Releasing address using handleID ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" HandleID="k8s-pod-network.e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:19.479822 containerd[1444]: 2024-08-05 22:51:19.462 [INFO][4897] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:19.479822 containerd[1444]: 2024-08-05 22:51:19.462 [INFO][4897] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:51:19.479822 containerd[1444]: 2024-08-05 22:51:19.470 [WARNING][4897] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" HandleID="k8s-pod-network.e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:19.479822 containerd[1444]: 2024-08-05 22:51:19.470 [INFO][4897] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" HandleID="k8s-pod-network.e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--4sl6k-eth0" Aug 5 22:51:19.479822 containerd[1444]: 2024-08-05 22:51:19.473 [INFO][4897] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:19.479822 containerd[1444]: 2024-08-05 22:51:19.477 [INFO][4877] k8s.go 621: Teardown processing complete. ContainerID="e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4" Aug 5 22:51:19.479822 containerd[1444]: time="2024-08-05T22:51:19.479168472Z" level=info msg="TearDown network for sandbox \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\" successfully" Aug 5 22:51:19.484484 containerd[1444]: time="2024-08-05T22:51:19.483404485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:51:19.484776 containerd[1444]: time="2024-08-05T22:51:19.484647459Z" level=info msg="RemovePodSandbox \"e0dbbd7f5a300d095dad529fe830a5632e750c0f7ed77690527ca75cc926c8c4\" returns successfully" Aug 5 22:51:19.485170 containerd[1444]: time="2024-08-05T22:51:19.485141767Z" level=info msg="StopPodSandbox for \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\"" Aug 5 22:51:19.592736 containerd[1444]: 2024-08-05 22:51:19.547 [WARNING][4919] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2bed6644-fbd4-4e3b-8155-30ecff2fffc6", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793", Pod:"coredns-76f75df574-c78g6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb056384ca7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:19.592736 containerd[1444]: 2024-08-05 22:51:19.548 [INFO][4919] k8s.go 608: Cleaning up netns ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Aug 5 22:51:19.592736 containerd[1444]: 2024-08-05 22:51:19.548 [INFO][4919] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" iface="eth0" netns="" Aug 5 22:51:19.592736 containerd[1444]: 2024-08-05 22:51:19.548 [INFO][4919] k8s.go 615: Releasing IP address(es) ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Aug 5 22:51:19.592736 containerd[1444]: 2024-08-05 22:51:19.548 [INFO][4919] utils.go 188: Calico CNI releasing IP address ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Aug 5 22:51:19.592736 containerd[1444]: 2024-08-05 22:51:19.578 [INFO][4925] ipam_plugin.go 411: Releasing address using handleID ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" HandleID="k8s-pod-network.a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:19.592736 containerd[1444]: 2024-08-05 22:51:19.578 [INFO][4925] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:19.592736 containerd[1444]: 2024-08-05 22:51:19.578 [INFO][4925] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:51:19.592736 containerd[1444]: 2024-08-05 22:51:19.587 [WARNING][4925] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" HandleID="k8s-pod-network.a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:19.592736 containerd[1444]: 2024-08-05 22:51:19.587 [INFO][4925] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" HandleID="k8s-pod-network.a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:19.592736 containerd[1444]: 2024-08-05 22:51:19.588 [INFO][4925] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:19.592736 containerd[1444]: 2024-08-05 22:51:19.590 [INFO][4919] k8s.go 621: Teardown processing complete. 
ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Aug 5 22:51:19.593583 containerd[1444]: time="2024-08-05T22:51:19.592712592Z" level=info msg="TearDown network for sandbox \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\" successfully" Aug 5 22:51:19.593583 containerd[1444]: time="2024-08-05T22:51:19.592757116Z" level=info msg="StopPodSandbox for \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\" returns successfully" Aug 5 22:51:19.595792 containerd[1444]: time="2024-08-05T22:51:19.595740948Z" level=info msg="RemovePodSandbox for \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\"" Aug 5 22:51:19.596267 containerd[1444]: time="2024-08-05T22:51:19.595869119Z" level=info msg="Forcibly stopping sandbox \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\"" Aug 5 22:51:19.718969 containerd[1444]: 2024-08-05 22:51:19.653 [WARNING][4943] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2bed6644-fbd4-4e3b-8155-30ecff2fffc6", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"c0c7e77dc2450d08b43f551bd8675f6ad265cb0306dce245f34db94ac88d7793", Pod:"coredns-76f75df574-c78g6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb056384ca7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:19.718969 containerd[1444]: 2024-08-05 22:51:19.654 
[INFO][4943] k8s.go 608: Cleaning up netns ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Aug 5 22:51:19.718969 containerd[1444]: 2024-08-05 22:51:19.654 [INFO][4943] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" iface="eth0" netns="" Aug 5 22:51:19.718969 containerd[1444]: 2024-08-05 22:51:19.655 [INFO][4943] k8s.go 615: Releasing IP address(es) ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Aug 5 22:51:19.718969 containerd[1444]: 2024-08-05 22:51:19.655 [INFO][4943] utils.go 188: Calico CNI releasing IP address ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Aug 5 22:51:19.718969 containerd[1444]: 2024-08-05 22:51:19.700 [INFO][4951] ipam_plugin.go 411: Releasing address using handleID ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" HandleID="k8s-pod-network.a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:19.718969 containerd[1444]: 2024-08-05 22:51:19.700 [INFO][4951] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:19.718969 containerd[1444]: 2024-08-05 22:51:19.700 [INFO][4951] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:51:19.718969 containerd[1444]: 2024-08-05 22:51:19.710 [WARNING][4951] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" HandleID="k8s-pod-network.a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:19.718969 containerd[1444]: 2024-08-05 22:51:19.710 [INFO][4951] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" HandleID="k8s-pod-network.a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-coredns--76f75df574--c78g6-eth0" Aug 5 22:51:19.718969 containerd[1444]: 2024-08-05 22:51:19.713 [INFO][4951] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:19.718969 containerd[1444]: 2024-08-05 22:51:19.715 [INFO][4943] k8s.go 621: Teardown processing complete. ContainerID="a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9" Aug 5 22:51:19.721063 containerd[1444]: time="2024-08-05T22:51:19.720295772Z" level=info msg="TearDown network for sandbox \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\" successfully" Aug 5 22:51:19.725120 containerd[1444]: time="2024-08-05T22:51:19.725055348Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:51:19.725399 containerd[1444]: time="2024-08-05T22:51:19.725333931Z" level=info msg="RemovePodSandbox \"a05ac7a298c57e0219ebc4cc26239792daedd16dd373413bfea2747def3eceb9\" returns successfully" Aug 5 22:51:19.726131 containerd[1444]: time="2024-08-05T22:51:19.726008548Z" level=info msg="StopPodSandbox for \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\"" Aug 5 22:51:19.816205 containerd[1444]: 2024-08-05 22:51:19.777 [WARNING][4969] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3054ae9-283f-4e4a-bf3e-fdb7b75b0214", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056", Pod:"csi-node-driver-7trhv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali79b5b5a4ae5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:19.816205 containerd[1444]: 2024-08-05 22:51:19.777 [INFO][4969] k8s.go 608: Cleaning up netns ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Aug 5 22:51:19.816205 containerd[1444]: 2024-08-05 22:51:19.778 [INFO][4969] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" iface="eth0" netns="" Aug 5 22:51:19.816205 containerd[1444]: 2024-08-05 22:51:19.778 [INFO][4969] k8s.go 615: Releasing IP address(es) ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Aug 5 22:51:19.816205 containerd[1444]: 2024-08-05 22:51:19.778 [INFO][4969] utils.go 188: Calico CNI releasing IP address ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Aug 5 22:51:19.816205 containerd[1444]: 2024-08-05 22:51:19.804 [INFO][4975] ipam_plugin.go 411: Releasing address using handleID ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" HandleID="k8s-pod-network.d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" Aug 5 22:51:19.816205 containerd[1444]: 2024-08-05 22:51:19.804 [INFO][4975] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:19.816205 containerd[1444]: 2024-08-05 22:51:19.804 [INFO][4975] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:51:19.816205 containerd[1444]: 2024-08-05 22:51:19.811 [WARNING][4975] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" HandleID="k8s-pod-network.d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" Aug 5 22:51:19.816205 containerd[1444]: 2024-08-05 22:51:19.811 [INFO][4975] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" HandleID="k8s-pod-network.d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" Aug 5 22:51:19.816205 containerd[1444]: 2024-08-05 22:51:19.813 [INFO][4975] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:19.816205 containerd[1444]: 2024-08-05 22:51:19.814 [INFO][4969] k8s.go 621: Teardown processing complete. ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Aug 5 22:51:19.817235 containerd[1444]: time="2024-08-05T22:51:19.816273860Z" level=info msg="TearDown network for sandbox \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\" successfully" Aug 5 22:51:19.817235 containerd[1444]: time="2024-08-05T22:51:19.816302755Z" level=info msg="StopPodSandbox for \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\" returns successfully" Aug 5 22:51:19.817235 containerd[1444]: time="2024-08-05T22:51:19.816816349Z" level=info msg="RemovePodSandbox for \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\"" Aug 5 22:51:19.817235 containerd[1444]: time="2024-08-05T22:51:19.816855943Z" level=info msg="Forcibly stopping sandbox \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\"" Aug 5 22:51:19.903096 containerd[1444]: 2024-08-05 22:51:19.861 [WARNING][4993] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3054ae9-283f-4e4a-bf3e-fdb7b75b0214", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 50, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056", Pod:"csi-node-driver-7trhv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali79b5b5a4ae5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:51:19.903096 containerd[1444]: 2024-08-05 22:51:19.861 [INFO][4993] k8s.go 608: Cleaning up netns ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Aug 5 22:51:19.903096 containerd[1444]: 2024-08-05 22:51:19.861 [INFO][4993] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" iface="eth0" netns="" Aug 5 22:51:19.903096 containerd[1444]: 2024-08-05 22:51:19.861 [INFO][4993] k8s.go 615: Releasing IP address(es) ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Aug 5 22:51:19.903096 containerd[1444]: 2024-08-05 22:51:19.861 [INFO][4993] utils.go 188: Calico CNI releasing IP address ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Aug 5 22:51:19.903096 containerd[1444]: 2024-08-05 22:51:19.886 [INFO][4999] ipam_plugin.go 411: Releasing address using handleID ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" HandleID="k8s-pod-network.d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" Aug 5 22:51:19.903096 containerd[1444]: 2024-08-05 22:51:19.886 [INFO][4999] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:51:19.903096 containerd[1444]: 2024-08-05 22:51:19.886 [INFO][4999] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:51:19.903096 containerd[1444]: 2024-08-05 22:51:19.893 [WARNING][4999] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" HandleID="k8s-pod-network.d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" Aug 5 22:51:19.903096 containerd[1444]: 2024-08-05 22:51:19.893 [INFO][4999] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" HandleID="k8s-pod-network.d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-csi--node--driver--7trhv-eth0" Aug 5 22:51:19.903096 containerd[1444]: 2024-08-05 22:51:19.896 [INFO][4999] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:51:19.903096 containerd[1444]: 2024-08-05 22:51:19.901 [INFO][4993] k8s.go 621: Teardown processing complete. ContainerID="d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106" Aug 5 22:51:19.904843 containerd[1444]: time="2024-08-05T22:51:19.903700922Z" level=info msg="TearDown network for sandbox \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\" successfully" Aug 5 22:51:19.911136 containerd[1444]: time="2024-08-05T22:51:19.910934043Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:51:19.911136 containerd[1444]: time="2024-08-05T22:51:19.911018191Z" level=info msg="RemovePodSandbox \"d435678f0d8e49b8c13bb1e3e923db5ef12b73009c0ec11263256539345cf106\" returns successfully" Aug 5 22:51:19.911570 containerd[1444]: time="2024-08-05T22:51:19.911525985Z" level=info msg="StopPodSandbox for \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\"" Aug 5 22:51:19.911630 containerd[1444]: time="2024-08-05T22:51:19.911618489Z" level=info msg="TearDown network for sandbox \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\" successfully" Aug 5 22:51:19.911663 containerd[1444]: time="2024-08-05T22:51:19.911634008Z" level=info msg="StopPodSandbox for \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\" returns successfully" Aug 5 22:51:19.912061 containerd[1444]: time="2024-08-05T22:51:19.912027647Z" level=info msg="RemovePodSandbox for \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\"" Aug 5 22:51:19.912147 containerd[1444]: time="2024-08-05T22:51:19.912059377Z" level=info msg="Forcibly stopping sandbox \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\"" Aug 5 22:51:19.912147 containerd[1444]: time="2024-08-05T22:51:19.912110032Z" level=info msg="TearDown network for sandbox \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\" successfully" Aug 5 22:51:19.922018 containerd[1444]: time="2024-08-05T22:51:19.921970626Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:51:19.922115 containerd[1444]: time="2024-08-05T22:51:19.922031210Z" level=info msg="RemovePodSandbox \"cd3f42ba8f1f7eead500f222b580b053b31993c20b009c2b60dbda7cd9313086\" returns successfully" Aug 5 22:51:20.755836 containerd[1444]: time="2024-08-05T22:51:20.755690252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:51:20.758124 containerd[1444]: time="2024-08-05T22:51:20.758003916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Aug 5 22:51:20.759999 containerd[1444]: time="2024-08-05T22:51:20.759892081Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:51:20.766287 containerd[1444]: time="2024-08-05T22:51:20.766180889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:51:20.770408 containerd[1444]: time="2024-08-05T22:51:20.770304521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.113066633s" Aug 5 22:51:20.770408 containerd[1444]: time="2024-08-05T22:51:20.770388438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Aug 5 22:51:20.774528 containerd[1444]: 
time="2024-08-05T22:51:20.773901776Z" level=info msg="CreateContainer within sandbox \"52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 22:51:20.808044 containerd[1444]: time="2024-08-05T22:51:20.807928031Z" level=info msg="CreateContainer within sandbox \"52cfa398ff2bf2ce3733aff1299e3b1964b9891c8178b0abb676cd3381cb1056\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bc7c2f2d98282688c90cb380434b10f1353f8701a0c92dcdea556c8374f36279\"" Aug 5 22:51:20.809593 containerd[1444]: time="2024-08-05T22:51:20.809041351Z" level=info msg="StartContainer for \"bc7c2f2d98282688c90cb380434b10f1353f8701a0c92dcdea556c8374f36279\"" Aug 5 22:51:20.876603 systemd[1]: Started cri-containerd-bc7c2f2d98282688c90cb380434b10f1353f8701a0c92dcdea556c8374f36279.scope - libcontainer container bc7c2f2d98282688c90cb380434b10f1353f8701a0c92dcdea556c8374f36279. Aug 5 22:51:20.913087 containerd[1444]: time="2024-08-05T22:51:20.912244749Z" level=info msg="StartContainer for \"bc7c2f2d98282688c90cb380434b10f1353f8701a0c92dcdea556c8374f36279\" returns successfully" Aug 5 22:51:21.410576 kubelet[2639]: I0805 22:51:21.409024 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-7trhv" podStartSLOduration=33.974516226 podStartE2EDuration="42.408934884s" podCreationTimestamp="2024-08-05 22:50:39 +0000 UTC" firstStartedPulling="2024-08-05 22:51:12.336440454 +0000 UTC m=+53.645903387" lastFinishedPulling="2024-08-05 22:51:20.770859052 +0000 UTC m=+62.080322045" observedRunningTime="2024-08-05 22:51:21.40812361 +0000 UTC m=+62.717586603" watchObservedRunningTime="2024-08-05 22:51:21.408934884 +0000 UTC m=+62.718397867" Aug 5 22:51:21.492038 kubelet[2639]: I0805 22:51:21.491961 2639 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 22:51:21.502008 kubelet[2639]: I0805 22:51:21.501789 2639 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 22:51:23.383776 update_engine[1428]: I0805 22:51:23.383432 1428 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 5 22:51:23.383776 update_engine[1428]: I0805 22:51:23.383515 1428 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 5 22:51:23.391620 update_engine[1428]: I0805 22:51:23.391242 1428 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 5 22:51:23.393093 update_engine[1428]: I0805 22:51:23.392061 1428 omaha_request_params.cc:62] Current group set to beta Aug 5 22:51:23.393093 update_engine[1428]: I0805 22:51:23.392211 1428 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 5 22:51:23.393093 update_engine[1428]: I0805 22:51:23.392220 1428 update_attempter.cc:643] Scheduling an action processor start. 
Aug 5 22:51:23.393093 update_engine[1428]: I0805 22:51:23.392235 1428 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 5 22:51:23.393093 update_engine[1428]: I0805 22:51:23.392289 1428 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 5 22:51:23.393093 update_engine[1428]: I0805 22:51:23.392350 1428 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 5 22:51:23.393093 update_engine[1428]: I0805 22:51:23.392355 1428 omaha_request_action.cc:272] Request: Aug 5 22:51:23.393093 update_engine[1428]: Aug 5 22:51:23.393093 update_engine[1428]: Aug 5 22:51:23.393093 update_engine[1428]: Aug 5 22:51:23.393093 update_engine[1428]: Aug 5 22:51:23.393093 update_engine[1428]: Aug 5 22:51:23.393093 update_engine[1428]: Aug 5 22:51:23.393093 update_engine[1428]: Aug 5 22:51:23.393093 update_engine[1428]: Aug 5 22:51:23.393093 update_engine[1428]: I0805 22:51:23.392359 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 5 22:51:23.428693 locksmithd[1451]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 5 22:51:23.429773 update_engine[1428]: I0805 22:51:23.429339 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 5 22:51:23.429773 update_engine[1428]: I0805 22:51:23.429701 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 5 22:51:23.441030 update_engine[1428]: E0805 22:51:23.441000 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 5 22:51:23.441181 update_engine[1428]: I0805 22:51:23.441167 1428 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 5 22:51:33.317535 update_engine[1428]: I0805 22:51:33.316282 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 5 22:51:33.317535 update_engine[1428]: I0805 22:51:33.316682 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 5 22:51:33.317535 update_engine[1428]: I0805 22:51:33.317179 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 5 22:51:33.329019 update_engine[1428]: E0805 22:51:33.328797 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 5 22:51:33.329019 update_engine[1428]: I0805 22:51:33.328945 1428 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 5 22:51:33.540511 systemd[1]: Started sshd@9-172.24.4.9:22-172.24.4.1:41554.service - OpenSSH per-connection server daemon (172.24.4.1:41554). Aug 5 22:51:34.795956 sshd[5105]: Accepted publickey for core from 172.24.4.1 port 41554 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8 Aug 5 22:51:34.799420 sshd[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:51:34.805601 systemd-logind[1427]: New session 12 of user core. Aug 5 22:51:34.812624 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 22:51:36.028139 sshd[5105]: pam_unix(sshd:session): session closed for user core Aug 5 22:51:36.034296 systemd-logind[1427]: Session 12 logged out. Waiting for processes to exit. Aug 5 22:51:36.034333 systemd[1]: sshd@9-172.24.4.9:22-172.24.4.1:41554.service: Deactivated successfully. Aug 5 22:51:36.037407 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:51:36.042331 systemd-logind[1427]: Removed session 12. 
Aug 5 22:51:41.062079 systemd[1]: Started sshd@10-172.24.4.9:22-172.24.4.1:55696.service - OpenSSH per-connection server daemon (172.24.4.1:55696). Aug 5 22:51:42.371621 sshd[5132]: Accepted publickey for core from 172.24.4.1 port 55696 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8 Aug 5 22:51:42.373139 sshd[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:51:42.385450 systemd-logind[1427]: New session 13 of user core. Aug 5 22:51:42.393764 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 22:51:43.299295 update_engine[1428]: I0805 22:51:43.299214 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 5 22:51:43.300451 update_engine[1428]: I0805 22:51:43.299571 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 5 22:51:43.300451 update_engine[1428]: I0805 22:51:43.299888 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 5 22:51:43.310741 update_engine[1428]: E0805 22:51:43.310696 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 5 22:51:43.310883 update_engine[1428]: I0805 22:51:43.310787 1428 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Aug 5 22:51:43.359983 sshd[5132]: pam_unix(sshd:session): session closed for user core Aug 5 22:51:43.391700 systemd[1]: sshd@10-172.24.4.9:22-172.24.4.1:55696.service: Deactivated successfully. Aug 5 22:51:43.394519 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 22:51:43.396698 systemd-logind[1427]: Session 13 logged out. Waiting for processes to exit. Aug 5 22:51:43.401941 systemd-logind[1427]: Removed session 13. Aug 5 22:51:48.381131 systemd[1]: Started sshd@11-172.24.4.9:22-172.24.4.1:49538.service - OpenSSH per-connection server daemon (172.24.4.1:49538). 
Aug 5 22:51:50.060811 sshd[5152]: Accepted publickey for core from 172.24.4.1 port 49538 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8 Aug 5 22:51:50.081197 sshd[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:51:50.088791 systemd-logind[1427]: New session 14 of user core. Aug 5 22:51:50.095943 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:51:51.007645 sshd[5152]: pam_unix(sshd:session): session closed for user core Aug 5 22:51:51.021827 systemd[1]: sshd@11-172.24.4.9:22-172.24.4.1:49538.service: Deactivated successfully. Aug 5 22:51:51.027447 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 22:51:51.031054 systemd-logind[1427]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:51:51.043132 systemd[1]: Started sshd@12-172.24.4.9:22-172.24.4.1:49542.service - OpenSSH per-connection server daemon (172.24.4.1:49542). Aug 5 22:51:51.048262 systemd-logind[1427]: Removed session 14. Aug 5 22:51:52.503934 sshd[5166]: Accepted publickey for core from 172.24.4.1 port 49542 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8 Aug 5 22:51:52.506434 sshd[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:51:52.512386 systemd-logind[1427]: New session 15 of user core. Aug 5 22:51:52.519794 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:51:53.299771 update_engine[1428]: I0805 22:51:53.299707 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 5 22:51:53.300856 update_engine[1428]: I0805 22:51:53.299922 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 5 22:51:53.300856 update_engine[1428]: I0805 22:51:53.300163 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 5 22:51:53.310679 update_engine[1428]: E0805 22:51:53.310617 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 5 22:51:53.310679 update_engine[1428]: I0805 22:51:53.310688 1428 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 5 22:51:53.310679 update_engine[1428]: I0805 22:51:53.310694 1428 omaha_request_action.cc:617] Omaha request response: Aug 5 22:51:53.311221 update_engine[1428]: E0805 22:51:53.310777 1428 omaha_request_action.cc:636] Omaha request network transfer failed. Aug 5 22:51:53.311221 update_engine[1428]: I0805 22:51:53.310814 1428 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Aug 5 22:51:53.311221 update_engine[1428]: I0805 22:51:53.310818 1428 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 5 22:51:53.311221 update_engine[1428]: I0805 22:51:53.310822 1428 update_attempter.cc:306] Processing Done. Aug 5 22:51:53.311221 update_engine[1428]: E0805 22:51:53.310832 1428 update_attempter.cc:619] Update failed. Aug 5 22:51:53.311221 update_engine[1428]: I0805 22:51:53.310858 1428 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Aug 5 22:51:53.311221 update_engine[1428]: I0805 22:51:53.310861 1428 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Aug 5 22:51:53.311221 update_engine[1428]: I0805 22:51:53.310865 1428 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Aug 5 22:51:53.311874 update_engine[1428]: I0805 22:51:53.311381 1428 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 5 22:51:53.311874 update_engine[1428]: I0805 22:51:53.311407 1428 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 5 22:51:53.311874 update_engine[1428]: I0805 22:51:53.311411 1428 omaha_request_action.cc:272] Request: Aug 5 22:51:53.311874 update_engine[1428]: Aug 5 22:51:53.311874 update_engine[1428]: Aug 5 22:51:53.311874 update_engine[1428]: Aug 5 22:51:53.311874 update_engine[1428]: Aug 5 22:51:53.311874 update_engine[1428]: Aug 5 22:51:53.311874 update_engine[1428]: Aug 5 22:51:53.311874 update_engine[1428]: I0805 22:51:53.311421 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 5 22:51:53.311874 update_engine[1428]: I0805 22:51:53.311770 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 5 22:51:53.313705 update_engine[1428]: I0805 22:51:53.311950 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 5 22:51:53.314608 locksmithd[1451]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Aug 5 22:51:53.322096 update_engine[1428]: E0805 22:51:53.322059 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 5 22:51:53.322183 update_engine[1428]: I0805 22:51:53.322117 1428 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 5 22:51:53.322183 update_engine[1428]: I0805 22:51:53.322122 1428 omaha_request_action.cc:617] Omaha request response: Aug 5 22:51:53.322183 update_engine[1428]: I0805 22:51:53.322127 1428 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 5 22:51:53.322183 update_engine[1428]: I0805 22:51:53.322131 1428 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 5 22:51:53.322183 update_engine[1428]: I0805 22:51:53.322134 1428 update_attempter.cc:306] Processing Done. Aug 5 22:51:53.322183 update_engine[1428]: I0805 22:51:53.322137 1428 update_attempter.cc:310] Error event sent. Aug 5 22:51:53.322183 update_engine[1428]: I0805 22:51:53.322144 1428 update_check_scheduler.cc:74] Next update check in 46m13s Aug 5 22:51:53.322576 locksmithd[1451]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Aug 5 22:51:53.482295 sshd[5166]: pam_unix(sshd:session): session closed for user core Aug 5 22:51:53.493231 systemd[1]: sshd@12-172.24.4.9:22-172.24.4.1:49542.service: Deactivated successfully. Aug 5 22:51:53.495924 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:51:53.498065 systemd-logind[1427]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:51:53.507846 systemd[1]: Started sshd@13-172.24.4.9:22-172.24.4.1:49554.service - OpenSSH per-connection server daemon (172.24.4.1:49554). 
Aug 5 22:51:53.509530 systemd-logind[1427]: Removed session 15. Aug 5 22:51:55.570552 sshd[5177]: Accepted publickey for core from 172.24.4.1 port 49554 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8 Aug 5 22:51:55.574704 sshd[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:51:55.586635 systemd-logind[1427]: New session 16 of user core. Aug 5 22:51:55.597839 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:51:56.431892 sshd[5177]: pam_unix(sshd:session): session closed for user core Aug 5 22:51:56.440517 systemd[1]: sshd@13-172.24.4.9:22-172.24.4.1:49554.service: Deactivated successfully. Aug 5 22:51:56.446847 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:51:56.450594 systemd-logind[1427]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:51:56.454185 systemd-logind[1427]: Removed session 16. Aug 5 22:52:01.373958 systemd[1]: run-containerd-runc-k8s.io-3f092259eb6601a5c5e21175832bc2541c88d56b5818432c54ef36e5ada3d402-runc.rIAbDG.mount: Deactivated successfully. Aug 5 22:52:01.446762 systemd[1]: Started sshd@14-172.24.4.9:22-172.24.4.1:58632.service - OpenSSH per-connection server daemon (172.24.4.1:58632). Aug 5 22:52:02.681429 sshd[5270]: Accepted publickey for core from 172.24.4.1 port 58632 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8 Aug 5 22:52:02.684976 sshd[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:52:02.693900 systemd-logind[1427]: New session 17 of user core. Aug 5 22:52:02.701751 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 22:52:03.356005 sshd[5270]: pam_unix(sshd:session): session closed for user core Aug 5 22:52:03.361722 systemd[1]: sshd@14-172.24.4.9:22-172.24.4.1:58632.service: Deactivated successfully. Aug 5 22:52:03.366671 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:52:03.371927 systemd-logind[1427]: Session 17 logged out. 
Waiting for processes to exit. Aug 5 22:52:03.374186 systemd-logind[1427]: Removed session 17. Aug 5 22:52:08.380037 systemd[1]: Started sshd@15-172.24.4.9:22-172.24.4.1:59452.service - OpenSSH per-connection server daemon (172.24.4.1:59452). Aug 5 22:52:09.885893 sshd[5290]: Accepted publickey for core from 172.24.4.1 port 59452 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8 Aug 5 22:52:09.888884 sshd[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:52:09.900777 systemd-logind[1427]: New session 18 of user core. Aug 5 22:52:09.904751 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:52:10.755245 sshd[5290]: pam_unix(sshd:session): session closed for user core Aug 5 22:52:10.763060 systemd[1]: sshd@15-172.24.4.9:22-172.24.4.1:59452.service: Deactivated successfully. Aug 5 22:52:10.766864 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 22:52:10.768529 systemd-logind[1427]: Session 18 logged out. Waiting for processes to exit. Aug 5 22:52:10.769628 systemd-logind[1427]: Removed session 18. Aug 5 22:52:15.773768 systemd[1]: Started sshd@16-172.24.4.9:22-172.24.4.1:41206.service - OpenSSH per-connection server daemon (172.24.4.1:41206). Aug 5 22:52:17.029698 sshd[5305]: Accepted publickey for core from 172.24.4.1 port 41206 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8 Aug 5 22:52:17.033147 sshd[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:52:17.041781 systemd-logind[1427]: New session 19 of user core. Aug 5 22:52:17.047694 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 22:52:17.809720 sshd[5305]: pam_unix(sshd:session): session closed for user core Aug 5 22:52:17.823571 systemd[1]: sshd@16-172.24.4.9:22-172.24.4.1:41206.service: Deactivated successfully. Aug 5 22:52:17.828685 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 22:52:17.835549 systemd-logind[1427]: Session 19 logged out. 
Waiting for processes to exit. Aug 5 22:52:17.842130 systemd[1]: Started sshd@17-172.24.4.9:22-172.24.4.1:41212.service - OpenSSH per-connection server daemon (172.24.4.1:41212). Aug 5 22:52:17.848082 systemd-logind[1427]: Removed session 19. Aug 5 22:52:19.279762 sshd[5323]: Accepted publickey for core from 172.24.4.1 port 41212 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8 Aug 5 22:52:19.283365 sshd[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:52:19.295788 systemd-logind[1427]: New session 20 of user core. Aug 5 22:52:19.301774 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 22:52:20.952300 sshd[5323]: pam_unix(sshd:session): session closed for user core Aug 5 22:52:20.963058 systemd[1]: sshd@17-172.24.4.9:22-172.24.4.1:41212.service: Deactivated successfully. Aug 5 22:52:20.966058 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 22:52:20.969305 systemd-logind[1427]: Session 20 logged out. Waiting for processes to exit. Aug 5 22:52:20.974725 systemd[1]: Started sshd@18-172.24.4.9:22-172.24.4.1:41220.service - OpenSSH per-connection server daemon (172.24.4.1:41220). Aug 5 22:52:20.978545 systemd-logind[1427]: Removed session 20. Aug 5 22:52:22.565739 sshd[5336]: Accepted publickey for core from 172.24.4.1 port 41220 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8 Aug 5 22:52:22.571988 sshd[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:52:22.584036 systemd-logind[1427]: New session 21 of user core. Aug 5 22:52:22.589099 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 5 22:52:25.670399 sshd[5336]: pam_unix(sshd:session): session closed for user core Aug 5 22:52:25.691187 systemd[1]: Started sshd@19-172.24.4.9:22-172.24.4.1:34344.service - OpenSSH per-connection server daemon (172.24.4.1:34344). 
Aug 5 22:52:25.693139 systemd[1]: sshd@18-172.24.4.9:22-172.24.4.1:41220.service: Deactivated successfully. Aug 5 22:52:25.701984 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 22:52:25.706731 systemd-logind[1427]: Session 21 logged out. Waiting for processes to exit. Aug 5 22:52:25.714562 systemd-logind[1427]: Removed session 21. Aug 5 22:52:27.125920 sshd[5352]: Accepted publickey for core from 172.24.4.1 port 34344 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8 Aug 5 22:52:27.129362 sshd[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:52:27.141059 systemd-logind[1427]: New session 22 of user core. Aug 5 22:52:27.149832 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 5 22:52:28.144838 systemd[1]: run-containerd-runc-k8s.io-acd52926ebc98479e5d41951d0a60b41c65dfa7cc75fbc7dc453e16df0232121-runc.9yioDm.mount: Deactivated successfully. Aug 5 22:52:29.131903 sshd[5352]: pam_unix(sshd:session): session closed for user core Aug 5 22:52:29.140197 systemd[1]: sshd@19-172.24.4.9:22-172.24.4.1:34344.service: Deactivated successfully. Aug 5 22:52:29.143442 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 22:52:29.146040 systemd-logind[1427]: Session 22 logged out. Waiting for processes to exit. Aug 5 22:52:29.152553 systemd[1]: Started sshd@20-172.24.4.9:22-172.24.4.1:34358.service - OpenSSH per-connection server daemon (172.24.4.1:34358). Aug 5 22:52:29.156301 systemd-logind[1427]: Removed session 22. Aug 5 22:52:29.853844 kubelet[2639]: I0805 22:52:29.853538 2639 topology_manager.go:215] "Topology Admit Handler" podUID="1c4fff96-65d7-43ba-8f0f-ee54539e4a0f" podNamespace="calico-apiserver" podName="calico-apiserver-c576f497-h6bld" Aug 5 22:52:29.889481 systemd[1]: Created slice kubepods-besteffort-pod1c4fff96_65d7_43ba_8f0f_ee54539e4a0f.slice - libcontainer container kubepods-besteffort-pod1c4fff96_65d7_43ba_8f0f_ee54539e4a0f.slice. 
Aug 5 22:52:29.984577 kubelet[2639]: I0805 22:52:29.984517 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz8cn\" (UniqueName: \"kubernetes.io/projected/1c4fff96-65d7-43ba-8f0f-ee54539e4a0f-kube-api-access-vz8cn\") pod \"calico-apiserver-c576f497-h6bld\" (UID: \"1c4fff96-65d7-43ba-8f0f-ee54539e4a0f\") " pod="calico-apiserver/calico-apiserver-c576f497-h6bld" Aug 5 22:52:29.984745 kubelet[2639]: I0805 22:52:29.984596 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c4fff96-65d7-43ba-8f0f-ee54539e4a0f-calico-apiserver-certs\") pod \"calico-apiserver-c576f497-h6bld\" (UID: \"1c4fff96-65d7-43ba-8f0f-ee54539e4a0f\") " pod="calico-apiserver/calico-apiserver-c576f497-h6bld" Aug 5 22:52:30.220349 containerd[1444]: time="2024-08-05T22:52:30.195907536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c576f497-h6bld,Uid:1c4fff96-65d7-43ba-8f0f-ee54539e4a0f,Namespace:calico-apiserver,Attempt:0,}" Aug 5 22:52:30.873117 sshd[5389]: Accepted publickey for core from 172.24.4.1 port 34358 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8 Aug 5 22:52:30.940781 sshd[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:52:30.954091 systemd-logind[1427]: New session 23 of user core. Aug 5 22:52:30.958712 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 5 22:52:31.009342 systemd-networkd[1359]: cali3880c830d87: Link UP Aug 5 22:52:31.009591 systemd-networkd[1359]: cali3880c830d87: Gained carrier Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.584 [INFO][5398] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-eth0 calico-apiserver-c576f497- calico-apiserver 1c4fff96-65d7-43ba-8f0f-ee54539e4a0f 1246 0 2024-08-05 22:52:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c576f497 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012-1-0-4-e6fc6d4d35.novalocal calico-apiserver-c576f497-h6bld eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3880c830d87 [] []}} ContainerID="65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" Namespace="calico-apiserver" Pod="calico-apiserver-c576f497-h6bld" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-" Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.585 [INFO][5398] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" Namespace="calico-apiserver" Pod="calico-apiserver-c576f497-h6bld" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-eth0" Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.702 [INFO][5408] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" HandleID="k8s-pod-network.65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-eth0" Aug 5 
22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.718 [INFO][5408] ipam_plugin.go 264: Auto assigning IP ContainerID="65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" HandleID="k8s-pod-network.65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003120b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012-1-0-4-e6fc6d4d35.novalocal", "pod":"calico-apiserver-c576f497-h6bld", "timestamp":"2024-08-05 22:52:30.702895466 +0000 UTC"}, Hostname:"ci-4012-1-0-4-e6fc6d4d35.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.863 [INFO][5408] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.863 [INFO][5408] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.863 [INFO][5408] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-1-0-4-e6fc6d4d35.novalocal' Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.885 [INFO][5408] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.927 [INFO][5408] ipam.go 372: Looking up existing affinities for host host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.955 [INFO][5408] ipam.go 489: Trying affinity for 192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.965 [INFO][5408] ipam.go 155: Attempting to load block cidr=192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.972 [INFO][5408] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.972 [INFO][5408] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.974 [INFO][5408] ipam.go 1685: Creating new handle: k8s-pod-network.65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.984 [INFO][5408] ipam.go 1203: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.997 [INFO][5408] 
ipam.go 1216: Successfully claimed IPs: [192.168.120.69/26] block=192.168.120.64/26 handle="k8s-pod-network.65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.998 [INFO][5408] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.69/26] handle="k8s-pod-network.65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" host="ci-4012-1-0-4-e6fc6d4d35.novalocal" Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.998 [INFO][5408] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:52:31.066105 containerd[1444]: 2024-08-05 22:52:30.998 [INFO][5408] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.120.69/26] IPv6=[] ContainerID="65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" HandleID="k8s-pod-network.65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" Workload="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-eth0" Aug 5 22:52:31.073570 containerd[1444]: 2024-08-05 22:52:31.003 [INFO][5398] k8s.go 386: Populated endpoint ContainerID="65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" Namespace="calico-apiserver" Pod="calico-apiserver-c576f497-h6bld" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-eth0", GenerateName:"calico-apiserver-c576f497-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c4fff96-65d7-43ba-8f0f-ee54539e4a0f", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 52, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c576f497", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"", Pod:"calico-apiserver-c576f497-h6bld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3880c830d87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:52:31.073570 containerd[1444]: 2024-08-05 22:52:31.003 [INFO][5398] k8s.go 387: Calico CNI using IPs: [192.168.120.69/32] ContainerID="65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" Namespace="calico-apiserver" Pod="calico-apiserver-c576f497-h6bld" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-eth0" Aug 5 22:52:31.073570 containerd[1444]: 2024-08-05 22:52:31.004 [INFO][5398] dataplane_linux.go 68: Setting the host side veth name to cali3880c830d87 ContainerID="65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" Namespace="calico-apiserver" Pod="calico-apiserver-c576f497-h6bld" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-eth0" Aug 5 22:52:31.073570 containerd[1444]: 2024-08-05 22:52:31.008 [INFO][5398] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" Namespace="calico-apiserver" Pod="calico-apiserver-c576f497-h6bld" 
WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-eth0" Aug 5 22:52:31.073570 containerd[1444]: 2024-08-05 22:52:31.009 [INFO][5398] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" Namespace="calico-apiserver" Pod="calico-apiserver-c576f497-h6bld" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-eth0", GenerateName:"calico-apiserver-c576f497-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c4fff96-65d7-43ba-8f0f-ee54539e4a0f", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 52, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c576f497", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-4-e6fc6d4d35.novalocal", ContainerID:"65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab", Pod:"calico-apiserver-c576f497-h6bld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3880c830d87", 
MAC:"7e:c3:d6:99:b8:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:52:31.073570 containerd[1444]: 2024-08-05 22:52:31.062 [INFO][5398] k8s.go 500: Wrote updated endpoint to datastore ContainerID="65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab" Namespace="calico-apiserver" Pod="calico-apiserver-c576f497-h6bld" WorkloadEndpoint="ci--4012--1--0--4--e6fc6d4d35.novalocal-k8s-calico--apiserver--c576f497--h6bld-eth0" Aug 5 22:52:31.401545 containerd[1444]: time="2024-08-05T22:52:31.400737390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:52:31.401545 containerd[1444]: time="2024-08-05T22:52:31.400815797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:52:31.401545 containerd[1444]: time="2024-08-05T22:52:31.400842838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:52:31.401545 containerd[1444]: time="2024-08-05T22:52:31.400862505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:52:31.451882 systemd[1]: run-containerd-runc-k8s.io-65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab-runc.LvGlka.mount: Deactivated successfully. Aug 5 22:52:31.493119 systemd[1]: Started cri-containerd-65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab.scope - libcontainer container 65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab. 
Aug 5 22:52:31.571153 containerd[1444]: time="2024-08-05T22:52:31.571105316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c576f497-h6bld,Uid:1c4fff96-65d7-43ba-8f0f-ee54539e4a0f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab\""
Aug 5 22:52:31.582230 containerd[1444]: time="2024-08-05T22:52:31.581993643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Aug 5 22:52:32.244352 sshd[5389]: pam_unix(sshd:session): session closed for user core
Aug 5 22:52:32.254184 systemd[1]: sshd@20-172.24.4.9:22-172.24.4.1:34358.service: Deactivated successfully.
Aug 5 22:52:32.255080 systemd-logind[1427]: Session 23 logged out. Waiting for processes to exit.
Aug 5 22:52:32.257924 systemd[1]: session-23.scope: Deactivated successfully.
Aug 5 22:52:32.260691 systemd-logind[1427]: Removed session 23.
Aug 5 22:52:32.558764 systemd-networkd[1359]: cali3880c830d87: Gained IPv6LL
Aug 5 22:52:35.730233 containerd[1444]: time="2024-08-05T22:52:35.730087521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:52:35.735214 containerd[1444]: time="2024-08-05T22:52:35.735061122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Aug 5 22:52:35.753784 containerd[1444]: time="2024-08-05T22:52:35.753377883Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:52:35.762541 containerd[1444]: time="2024-08-05T22:52:35.759630873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:52:35.765955 containerd[1444]: time="2024-08-05T22:52:35.764448340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.182400395s"
Aug 5 22:52:35.765955 containerd[1444]: time="2024-08-05T22:52:35.765720025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Aug 5 22:52:35.789103 containerd[1444]: time="2024-08-05T22:52:35.788994047Z" level=info msg="CreateContainer within sandbox \"65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 5 22:52:35.842329 containerd[1444]: time="2024-08-05T22:52:35.841492171Z" level=info msg="CreateContainer within sandbox \"65cc9dd3db9fee5aa7e89900de35684cde74bb7f73261e3714faeb563212c4ab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"548ab2124fbc7707efe9b7cbe945491e187eff49a0a38bbc8efc93a981ca48df\""
Aug 5 22:52:35.843255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107327102.mount: Deactivated successfully.
Aug 5 22:52:35.845818 containerd[1444]: time="2024-08-05T22:52:35.843385242Z" level=info msg="StartContainer for \"548ab2124fbc7707efe9b7cbe945491e187eff49a0a38bbc8efc93a981ca48df\""
Aug 5 22:52:35.893740 systemd[1]: Started cri-containerd-548ab2124fbc7707efe9b7cbe945491e187eff49a0a38bbc8efc93a981ca48df.scope - libcontainer container 548ab2124fbc7707efe9b7cbe945491e187eff49a0a38bbc8efc93a981ca48df.
Aug 5 22:52:36.801306 containerd[1444]: time="2024-08-05T22:52:36.800620024Z" level=info msg="StartContainer for \"548ab2124fbc7707efe9b7cbe945491e187eff49a0a38bbc8efc93a981ca48df\" returns successfully"
Aug 5 22:52:36.994819 kubelet[2639]: I0805 22:52:36.994765 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c576f497-h6bld" podStartSLOduration=3.786256377 podStartE2EDuration="7.978856507s" podCreationTimestamp="2024-08-05 22:52:29 +0000 UTC" firstStartedPulling="2024-08-05 22:52:31.574025876 +0000 UTC m=+132.883488819" lastFinishedPulling="2024-08-05 22:52:35.766625965 +0000 UTC m=+137.076088949" observedRunningTime="2024-08-05 22:52:36.914339827 +0000 UTC m=+138.223802790" watchObservedRunningTime="2024-08-05 22:52:36.978856507 +0000 UTC m=+138.288319440"
Aug 5 22:52:37.273041 systemd[1]: Started sshd@21-172.24.4.9:22-172.24.4.1:48866.service - OpenSSH per-connection server daemon (172.24.4.1:48866).
Aug 5 22:52:38.780188 sshd[5571]: Accepted publickey for core from 172.24.4.1 port 48866 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:52:38.786912 sshd[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:52:38.804563 systemd-logind[1427]: New session 24 of user core.
Aug 5 22:52:38.808826 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 5 22:52:40.381830 sshd[5571]: pam_unix(sshd:session): session closed for user core
Aug 5 22:52:40.388858 systemd[1]: sshd@21-172.24.4.9:22-172.24.4.1:48866.service: Deactivated successfully.
Aug 5 22:52:40.394803 systemd[1]: session-24.scope: Deactivated successfully.
Aug 5 22:52:40.399994 systemd-logind[1427]: Session 24 logged out. Waiting for processes to exit.
Aug 5 22:52:40.402400 systemd-logind[1427]: Removed session 24.
Aug 5 22:52:45.399864 systemd[1]: Started sshd@22-172.24.4.9:22-172.24.4.1:48172.service - OpenSSH per-connection server daemon (172.24.4.1:48172).
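The kubelet `pod_startup_latency_tracker` record above can be cross-checked by hand: `podStartE2EDuration` is the observed running time minus the pod creation time, and `podStartSLOduration` appears to be that figure minus the image-pull window (`lastFinishedPulling` minus `firstStartedPulling`, taken from the monotonic `m=` offsets). The sketch below uses only values copied from the log; the reading of SLOduration as "E2E minus pull time" is an interpretation, not something the log states:

```python
from datetime import datetime, timezone

# Wall-clock timestamps copied from the kubelet record above.
created = datetime(2024, 8, 5, 22, 52, 29, 0, tzinfo=timezone.utc)       # podCreationTimestamp
running = datetime(2024, 8, 5, 22, 52, 36, 978856, tzinfo=timezone.utc)  # watchObservedRunningTime

e2e = (running - created).total_seconds()
print(f"E2E duration: {e2e:.6f}s")  # matches podStartE2EDuration="7.978856507s" to microseconds

# Monotonic m=+... offsets for the image pull, also copied from the record.
pull = 137.076088949 - 132.883488819  # lastFinishedPulling - firstStartedPulling
slo = 7.978856507 - pull
print(f"SLO duration: {slo:.9f}")    # close to podStartSLOduration=3.786256377
```

The arithmetic works out to within floating-point noise, which supports (but does not prove) the exclusion-of-pull-time interpretation.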
Aug 5 22:52:46.643973 sshd[5595]: Accepted publickey for core from 172.24.4.1 port 48172 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:52:46.647421 sshd[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:52:46.656482 systemd-logind[1427]: New session 25 of user core.
Aug 5 22:52:46.671856 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 5 22:52:47.511442 sshd[5595]: pam_unix(sshd:session): session closed for user core
Aug 5 22:52:47.519155 systemd[1]: sshd@22-172.24.4.9:22-172.24.4.1:48172.service: Deactivated successfully.
Aug 5 22:52:47.522800 systemd[1]: session-25.scope: Deactivated successfully.
Aug 5 22:52:47.525072 systemd-logind[1427]: Session 25 logged out. Waiting for processes to exit.
Aug 5 22:52:47.527971 systemd-logind[1427]: Removed session 25.
Aug 5 22:52:52.538615 systemd[1]: Started sshd@23-172.24.4.9:22-172.24.4.1:48188.service - OpenSSH per-connection server daemon (172.24.4.1:48188).
Aug 5 22:52:53.650941 sshd[5626]: Accepted publickey for core from 172.24.4.1 port 48188 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:52:53.653962 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:52:53.665217 systemd-logind[1427]: New session 26 of user core.
Aug 5 22:52:53.669796 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 5 22:52:54.327906 sshd[5626]: pam_unix(sshd:session): session closed for user core
Aug 5 22:52:54.334902 systemd[1]: sshd@23-172.24.4.9:22-172.24.4.1:48188.service: Deactivated successfully.
Aug 5 22:52:54.337290 systemd[1]: session-26.scope: Deactivated successfully.
Aug 5 22:52:54.338794 systemd-logind[1427]: Session 26 logged out. Waiting for processes to exit.
Aug 5 22:52:54.340219 systemd-logind[1427]: Removed session 26.
Aug 5 22:52:59.349348 systemd[1]: Started sshd@24-172.24.4.9:22-172.24.4.1:59730.service - OpenSSH per-connection server daemon (172.24.4.1:59730).
Aug 5 22:53:00.755506 sshd[5658]: Accepted publickey for core from 172.24.4.1 port 59730 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:53:00.761389 sshd[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:53:00.777045 systemd-logind[1427]: New session 27 of user core.
Aug 5 22:53:00.783891 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 5 22:53:01.767214 sshd[5658]: pam_unix(sshd:session): session closed for user core
Aug 5 22:53:01.772187 systemd[1]: sshd@24-172.24.4.9:22-172.24.4.1:59730.service: Deactivated successfully.
Aug 5 22:53:01.776030 systemd[1]: session-27.scope: Deactivated successfully.
Aug 5 22:53:01.778075 systemd-logind[1427]: Session 27 logged out. Waiting for processes to exit.
Aug 5 22:53:01.780216 systemd-logind[1427]: Removed session 27.
Aug 5 22:53:06.789133 systemd[1]: Started sshd@25-172.24.4.9:22-172.24.4.1:34840.service - OpenSSH per-connection server daemon (172.24.4.1:34840).
Aug 5 22:53:07.993426 sshd[5720]: Accepted publickey for core from 172.24.4.1 port 34840 ssh2: RSA SHA256:cjmFSVO7eydDuaVzSXM20ZoSjXKvzJjGrWPgxXzoVU8
Aug 5 22:53:07.997565 sshd[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:53:08.005482 systemd-logind[1427]: New session 28 of user core.
Aug 5 22:53:08.012788 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 5 22:53:08.937948 sshd[5720]: pam_unix(sshd:session): session closed for user core
Aug 5 22:53:08.941720 systemd[1]: sshd@25-172.24.4.9:22-172.24.4.1:34840.service: Deactivated successfully.
Aug 5 22:53:08.945781 systemd[1]: session-28.scope: Deactivated successfully.
Aug 5 22:53:08.948973 systemd-logind[1427]: Session 28 logged out. Waiting for processes to exit.
Aug 5 22:53:08.952987 systemd-logind[1427]: Removed session 28.
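Every record in this journal follows the same shape: a syslog-style timestamp, a unit name with an optional PID in brackets, then the message body. Splitting an excerpt like the one above back into fields can be sketched as follows; the `Record` type, `LINE_RE`, and `parse_line` are hypothetical names for illustration, not part of systemd or journalctl:

```python
import re
from typing import NamedTuple, Optional

class Record(NamedTuple):
    timestamp: str       # e.g. "Aug 5 22:52:32.244352"
    unit: str            # e.g. "sshd", "systemd-logind", "kernel"
    pid: Optional[int]   # None for units logged without a [pid]
    message: str

# Matches e.g. "Aug 5 22:52:32.244352 sshd[5389]: pam_unix(...): session closed for user core"
LINE_RE = re.compile(
    r"^(?P<ts>[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<unit>[A-Za-z0-9._-]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<msg>.*)$"
)

def parse_line(line: str) -> Optional[Record]:
    """Split one journal line into its fields; return None if the line doesn't match."""
    m = LINE_RE.match(line)
    if m is None:
        return None
    pid = int(m.group("pid")) if m.group("pid") else None
    return Record(m.group("ts"), m.group("unit"), pid, m.group("msg"))

rec = parse_line("Aug 5 22:52:32.244352 sshd[5389]: pam_unix(sshd:session): session closed for user core")
print(rec.unit, rec.pid)  # sshd 5389
```

Grouping the parsed records by `unit` (containerd, systemd, sshd, kubelet) is a convenient first step when tracing a sequence like the sandbox creation and SSH sessions above.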