Jun 25 18:47:40.088410 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 18:47:40.088459 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:47:40.088474 kernel: BIOS-provided physical RAM map: Jun 25 18:47:40.088485 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 25 18:47:40.088496 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jun 25 18:47:40.088506 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jun 25 18:47:40.088522 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jun 25 18:47:40.088536 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jun 25 18:47:40.088546 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jun 25 18:47:40.088557 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jun 25 18:47:40.088568 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jun 25 18:47:40.088580 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jun 25 18:47:40.088591 kernel: printk: bootconsole [earlyser0] enabled Jun 25 18:47:40.088603 kernel: NX (Execute Disable) protection: active Jun 25 18:47:40.088620 kernel: APIC: Static calls initialized Jun 25 18:47:40.088633 kernel: efi: EFI v2.7 by Microsoft Jun 25 18:47:40.088646 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Jun 25 18:47:40.088658 kernel: SMBIOS 3.1.0 present. 
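The BIOS-e820 map above can be cross-checked against the memory total the kernel reports further down in this log. A minimal sketch (the four "usable" ranges are copied from the entries above; end addresses in the kernel's printout are inclusive):

```python
# Sketch: sum the "usable" ranges from the BIOS-e820 map above.
usable = [
    (0x0000000000000000, 0x000000000009ffff),
    (0x0000000000100000, 0x000000003ff40fff),
    (0x000000003ffff000, 0x000000003fffffff),
    (0x0000000100000000, 0x00000002bfffffff),
]

total_bytes = sum(end - start + 1 for start, end in usable)
print(f"usable RAM: {total_bytes} bytes = {total_bytes // 1024} KiB")
# -> 8588763136 bytes = 8387464 KiB. After subtracting the first 4 KiB page,
#    which the "e820: update [mem 0x00000000-0x00000fff] usable ==> reserved"
#    entry below removes, this matches the "Memory: .../8387460K available"
#    figure printed later in this log exactly.
```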
Jun 25 18:47:40.088671 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jun 25 18:47:40.088684 kernel: Hypervisor detected: Microsoft Hyper-V Jun 25 18:47:40.088697 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jun 25 18:47:40.088712 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jun 25 18:47:40.088728 kernel: Hyper-V: Nested features: 0x1e0101 Jun 25 18:47:40.088741 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jun 25 18:47:40.088756 kernel: Hyper-V: Using hypercall for remote TLB flush Jun 25 18:47:40.088769 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 18:47:40.088782 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 18:47:40.088796 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jun 25 18:47:40.088809 kernel: tsc: Detected 2593.906 MHz processor Jun 25 18:47:40.088822 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 18:47:40.088836 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 18:47:40.088849 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jun 25 18:47:40.088862 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 25 18:47:40.088877 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 18:47:40.088890 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jun 25 18:47:40.088902 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jun 25 18:47:40.088916 kernel: Using GB pages for direct mapping Jun 25 18:47:40.088928 kernel: Secure boot disabled Jun 25 18:47:40.088941 kernel: ACPI: Early table checksum verification disabled Jun 25 18:47:40.088955 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jun 25 18:47:40.088973 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.088990 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089004 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 18:47:40.089017 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jun 25 18:47:40.089032 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089045 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089059 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089076 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089090 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089104 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089118 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089132 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jun 25 18:47:40.089146 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jun 25 18:47:40.089179 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jun 25 18:47:40.089192 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jun 25 18:47:40.089208 kernel: 
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jun 25 18:47:40.089223 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jun 25 18:47:40.089236 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jun 25 18:47:40.089250 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jun 25 18:47:40.089263 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jun 25 18:47:40.089276 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jun 25 18:47:40.089291 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 18:47:40.089304 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 18:47:40.089318 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jun 25 18:47:40.089335 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jun 25 18:47:40.089349 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jun 25 18:47:40.089363 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jun 25 18:47:40.089377 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jun 25 18:47:40.089391 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jun 25 18:47:40.089404 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jun 25 18:47:40.089418 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jun 25 18:47:40.089432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jun 25 18:47:40.089446 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jun 25 18:47:40.089463 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jun 25 18:47:40.089476 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jun 25 18:47:40.089491 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jun 25 18:47:40.089505 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jun 25 18:47:40.089518 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jun 25 18:47:40.089532 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jun 25 18:47:40.089546 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jun 25 18:47:40.089561 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jun 25 18:47:40.089574 kernel: Zone ranges: Jun 25 18:47:40.089590 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 18:47:40.089604 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 25 18:47:40.089617 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 18:47:40.089631 kernel: Movable zone start for each node Jun 25 18:47:40.089645 kernel: Early memory node ranges Jun 25 18:47:40.089659 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 25 18:47:40.089672 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jun 25 18:47:40.089685 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jun 25 18:47:40.089697 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 18:47:40.089713 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jun 25 18:47:40.089727 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:47:40.089740 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 25 18:47:40.089752 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jun 25 18:47:40.089766 kernel: ACPI: PM-Timer IO Port: 0x408 Jun 25 18:47:40.089779 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jun 25 18:47:40.089792 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jun 25 18:47:40.089805 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 18:47:40.089817 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 18:47:40.089835 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jun 25 18:47:40.089849 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 18:47:40.089864 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jun 25 18:47:40.089878 kernel: Booting paravirtualized kernel on Hyper-V Jun 25 18:47:40.089893 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 18:47:40.089907 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 18:47:40.089922 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jun 25 18:47:40.089937 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jun 25 18:47:40.089951 kernel: pcpu-alloc: [0] 0 1 Jun 25 18:47:40.089968 kernel: Hyper-V: PV spinlocks enabled Jun 25 18:47:40.089983 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 18:47:40.090000 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:47:40.090015 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 18:47:40.090029 kernel: random: crng init done Jun 25 18:47:40.090044 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 25 18:47:40.090058 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:47:40.090073 kernel: Fallback order for Node 0: 0 Jun 25 18:47:40.090091 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jun 25 18:47:40.090118 kernel: Policy zone: Normal Jun 25 18:47:40.090136 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:47:40.090150 kernel: software IO TLB: area num 2. Jun 25 18:47:40.090203 kernel: Memory: 8070928K/8387460K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 316272K reserved, 0K cma-reserved) Jun 25 18:47:40.090217 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:47:40.090232 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 18:47:40.090247 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 18:47:40.090261 kernel: Dynamic Preempt: voluntary Jun 25 18:47:40.090276 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:47:40.090292 kernel: rcu: RCU event tracing is enabled. Jun 25 18:47:40.090311 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:47:40.090326 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:47:40.090341 kernel: Rude variant of Tasks RCU enabled. Jun 25 18:47:40.090356 kernel: Tracing variant of Tasks RCU enabled. 
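The "Kernel command line:" entry above mixes key=value parameters with bare flags and repeats some keys (rootflags=rw appears twice). A minimal sketch of splitting such a string into ordered pairs; the example string is a shortened copy of the logged command line, and quoting of values containing spaces is ignored here:

```python
# Sketch: split a kernel command line like the one logged above into
# (key, value) pairs. Duplicate keys (e.g. the repeated rootflags=rw)
# are kept in order; bare flags get a value of None.
def parse_cmdline(cmdline: str):
    params = []
    for token in cmdline.split():
        if "=" in token:
            key, value = token.split("=", 1)
        else:
            key, value = token, None
        params.append((key, value))
    return params

# At runtime this string would typically come from /proc/cmdline.
example = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro rootflags=rw root=LABEL=ROOT "
           "console=tty1 console=ttyS0,115200n8 flatcar.oem.id=azure")
for key, value in parse_cmdline(example):
    print(key, "=", value)
```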
Jun 25 18:47:40.090371 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:47:40.090389 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:47:40.090403 kernel: Using NULL legacy PIC Jun 25 18:47:40.090418 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jun 25 18:47:40.090433 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:47:40.090448 kernel: Console: colour dummy device 80x25 Jun 25 18:47:40.090462 kernel: printk: console [tty1] enabled Jun 25 18:47:40.090477 kernel: printk: console [ttyS0] enabled Jun 25 18:47:40.090492 kernel: printk: bootconsole [earlyser0] disabled Jun 25 18:47:40.090506 kernel: ACPI: Core revision 20230628 Jun 25 18:47:40.090521 kernel: Failed to register legacy timer interrupt Jun 25 18:47:40.090539 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 18:47:40.090554 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 18:47:40.090569 kernel: Hyper-V: Using IPI hypercalls Jun 25 18:47:40.090584 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jun 25 18:47:40.090598 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jun 25 18:47:40.090613 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jun 25 18:47:40.090628 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jun 25 18:47:40.090643 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jun 25 18:47:40.090658 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jun 25 18:47:40.090676 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Jun 25 18:47:40.090691 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 18:47:40.090706 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 18:47:40.090721 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 18:47:40.090736 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 18:47:40.090750 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 18:47:40.090764 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 18:47:40.090779 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
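The "(lpj=2593906)" and "5187.81 BogoMIPS" figures above follow from the 2593.906 MHz timer frequency reported earlier in this log; HZ=1000 is inferred from those numbers rather than stated anywhere in the output. A small check:

```python
# Sketch: reproduce the BogoMIPS figures printed above. The kernel skipped
# calibration and derived loops-per-jiffy from the timer frequency;
# HZ=1000 is an inference from the numbers, not something the log states.
HZ = 1000
tsc_khz = 2593906                 # "tsc: Detected 2593.906 MHz processor"
lpj = tsc_khz * 1000 // HZ        # -> 2593906, the "(lpj=2593906)" value
bogomips = lpj / (500000 / HZ)    # classic lpj -> BogoMIPS conversion
print(f"{bogomips:.2f} BogoMIPS per CPU")       # 5187.81
print(f"{2 * bogomips:.2f} BogoMIPS total")     # 10375.62, as printed later
                                                # for the 2-CPU total
```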
Jun 25 18:47:40.090794 kernel: RETBleed: Vulnerable Jun 25 18:47:40.090812 kernel: Speculative Store Bypass: Vulnerable Jun 25 18:47:40.090826 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:47:40.090841 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:47:40.090856 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 18:47:40.090870 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 18:47:40.090885 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 18:47:40.090900 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 18:47:40.090914 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 25 18:47:40.090928 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 25 18:47:40.090943 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 25 18:47:40.090958 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 18:47:40.090975 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jun 25 18:47:40.090990 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jun 25 18:47:40.091004 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jun 25 18:47:40.091019 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jun 25 18:47:40.091033 kernel: Freeing SMP alternatives memory: 32K Jun 25 18:47:40.091048 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:47:40.091063 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:47:40.091077 kernel: SELinux: Initializing. Jun 25 18:47:40.091092 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:47:40.091106 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:47:40.091121 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 25 18:47:40.091136 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:47:40.091188 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:47:40.091202 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:47:40.091217 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 25 18:47:40.091232 kernel: signal: max sigframe size: 3632 Jun 25 18:47:40.091246 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:47:40.091260 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:47:40.091273 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 18:47:40.091287 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:47:40.091299 kernel: smpboot: x86: Booting SMP configuration: Jun 25 18:47:40.091317 kernel: .... node #0, CPUs: #1 Jun 25 18:47:40.091331 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jun 25 18:47:40.091345 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
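The mitigation and vulnerability lines above (Spectre V1/V2, RETBleed, TAA, MMIO Stale Data, GDS) correspond to the per-issue status files the kernel exposes under sysfs. A short sketch that reads them back on a running system:

```python
# Sketch: the mitigation summary above can be read back at runtime from
# sysfs; each file under this directory holds a one-line status such as
# "Mitigation: Retpolines" or "Vulnerable".
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```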
Jun 25 18:47:40.091358 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:47:40.091373 kernel: smpboot: Max logical packages: 1 Jun 25 18:47:40.091386 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jun 25 18:47:40.091400 kernel: devtmpfs: initialized Jun 25 18:47:40.091413 kernel: x86/mm: Memory block size: 128MB Jun 25 18:47:40.091430 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jun 25 18:47:40.091443 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:47:40.091457 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:47:40.091471 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:47:40.091486 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:47:40.091499 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:47:40.091513 kernel: audit: type=2000 audit(1719341259.027:1): state=initialized audit_enabled=0 res=1 Jun 25 18:47:40.091526 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:47:40.091540 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 18:47:40.091556 kernel: cpuidle: using governor menu Jun 25 18:47:40.091570 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:47:40.091583 kernel: dca service started, version 1.12.1 Jun 25 18:47:40.091599 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jun 25 18:47:40.091616 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 18:47:40.091631 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:47:40.091647 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:47:40.091662 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:47:40.091676 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:47:40.091693 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:47:40.091706 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:47:40.091719 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:47:40.091733 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:47:40.091746 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:47:40.091759 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 18:47:40.091772 kernel: ACPI: Interpreter enabled Jun 25 18:47:40.091786 kernel: ACPI: PM: (supports S0 S5) Jun 25 18:47:40.091800 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 18:47:40.091816 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 18:47:40.091830 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 25 18:47:40.091844 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jun 25 18:47:40.091857 kernel: iommu: Default domain type: Translated Jun 25 18:47:40.091872 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 18:47:40.091885 kernel: efivars: Registered efivars operations Jun 25 18:47:40.091899 kernel: PCI: Using ACPI for IRQ routing Jun 25 18:47:40.091913 kernel: PCI: System does not support PCI Jun 25 18:47:40.091926 kernel: vgaarb: loaded Jun 25 18:47:40.091942 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jun 25 18:47:40.091956 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:47:40.091970 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:47:40.091984 kernel: 
pnp: PnP ACPI init Jun 25 18:47:40.091998 kernel: pnp: PnP ACPI: found 3 devices Jun 25 18:47:40.092013 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 18:47:40.092027 kernel: NET: Registered PF_INET protocol family Jun 25 18:47:40.092040 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 18:47:40.092055 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 25 18:47:40.092072 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:47:40.092086 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:47:40.092101 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 25 18:47:40.092115 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 25 18:47:40.092129 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 18:47:40.092143 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 18:47:40.092247 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:47:40.092262 kernel: NET: Registered PF_XDP protocol family Jun 25 18:47:40.092275 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:47:40.092294 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 25 18:47:40.092309 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Jun 25 18:47:40.092324 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 18:47:40.092339 kernel: Initialise system trusted keyrings Jun 25 18:47:40.092353 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 25 18:47:40.092368 kernel: Key type asymmetric registered Jun 25 18:47:40.092383 kernel: Asymmetric key parser 'x509' registered Jun 25 18:47:40.092397 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 18:47:40.092412 kernel: io scheduler mq-deadline registered Jun 25 18:47:40.092431 kernel: io scheduler kyber registered Jun 25 18:47:40.092446 kernel: io scheduler bfq registered Jun 25 18:47:40.092460 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 18:47:40.092475 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:47:40.092490 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 18:47:40.092504 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 25 18:47:40.092519 kernel: i8042: PNP: No PS/2 controller found. 
Jun 25 18:47:40.092706 kernel: rtc_cmos 00:02: registered as rtc0 Jun 25 18:47:40.092835 kernel: rtc_cmos 00:02: setting system clock to 2024-06-25T18:47:39 UTC (1719341259) Jun 25 18:47:40.092949 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jun 25 18:47:40.092968 kernel: intel_pstate: CPU model not supported Jun 25 18:47:40.092983 kernel: efifb: probing for efifb Jun 25 18:47:40.092998 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 18:47:40.093014 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 18:47:40.093029 kernel: efifb: scrolling: redraw Jun 25 18:47:40.093044 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 18:47:40.093063 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:47:40.093077 kernel: fb0: EFI VGA frame buffer device Jun 25 18:47:40.093093 kernel: pstore: Using crash dump compression: deflate Jun 25 18:47:40.093108 kernel: pstore: Registered efi_pstore as persistent store backend Jun 25 18:47:40.093123 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:47:40.093138 kernel: Segment Routing with IPv6 Jun 25 18:47:40.093168 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:47:40.093182 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:47:40.093197 kernel: Key type dns_resolver registered Jun 25 18:47:40.093212 kernel: IPI shorthand broadcast: enabled Jun 25 18:47:40.093230 kernel: sched_clock: Marking stable (856178100, 45116200)->(1112154500, -210860200) Jun 25 18:47:40.093245 kernel: registered taskstats version 1 Jun 25 18:47:40.093260 kernel: Loading compiled-in X.509 certificates Jun 25 18:47:40.093275 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 18:47:40.093289 kernel: Key type .fscrypt registered Jun 25 18:47:40.093304 kernel: Key type fscrypt-provisioning registered Jun 25 18:47:40.093319 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 18:47:40.093334 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:47:40.093352 kernel: ima: No architecture policies found Jun 25 18:47:40.093367 kernel: clk: Disabling unused clocks Jun 25 18:47:40.093382 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 18:47:40.093396 kernel: Write protecting the kernel read-only data: 36864k Jun 25 18:47:40.093412 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 18:47:40.093427 kernel: Run /init as init process Jun 25 18:47:40.093441 kernel: with arguments: Jun 25 18:47:40.093456 kernel: /init Jun 25 18:47:40.093470 kernel: with environment: Jun 25 18:47:40.093487 kernel: HOME=/ Jun 25 18:47:40.093502 kernel: TERM=linux Jun 25 18:47:40.093516 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:47:40.093534 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:47:40.093552 systemd[1]: Detected virtualization microsoft. Jun 25 18:47:40.093568 systemd[1]: Detected architecture x86-64. Jun 25 18:47:40.093584 systemd[1]: Running in initrd. Jun 25 18:47:40.093599 systemd[1]: No hostname configured, using default hostname. Jun 25 18:47:40.093617 systemd[1]: Hostname set to . Jun 25 18:47:40.093633 systemd[1]: Initializing machine ID from random generator. 
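The rtc_cmos entry above prints both the wall-clock time and the corresponding Unix epoch, and the same epoch second appears in the earlier audit(1719341259.027:1) record. A quick conversion confirming they agree:

```python
# Sketch: convert the epoch value from the rtc_cmos line above back to UTC.
from datetime import datetime, timezone

epoch = 1719341259
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2024-06-25T18:47:39+00:00, matching "2024-06-25T18:47:39 UTC"
```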
Jun 25 18:47:40.093648 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:47:40.093664 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:47:40.093680 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:47:40.093696 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:47:40.093712 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:47:40.093728 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:47:40.093747 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:47:40.093765 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:47:40.093781 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:47:40.093798 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:47:40.093814 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:47:40.093830 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:47:40.093846 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:47:40.093864 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:47:40.093880 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:47:40.093896 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:47:40.093911 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:47:40.093927 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:47:40.093943 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:47:40.093959 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:47:40.093975 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:47:40.093993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:47:40.094009 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:47:40.094025 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:47:40.094041 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:47:40.094057 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:47:40.094072 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:47:40.094088 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:47:40.094104 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:47:40.094120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:40.094171 systemd-journald[176]: Collecting audit messages is disabled. Jun 25 18:47:40.094208 systemd-journald[176]: Journal started Jun 25 18:47:40.094245 systemd-journald[176]: Runtime Journal (/run/log/journal/72145f8f3d1b4174804d9f4492e3d73a) is 8.0M, max 158.8M, 150.8M free. Jun 25 18:47:40.101180 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 25 18:47:40.104414 systemd-modules-load[177]: Inserted module 'overlay' Jun 25 18:47:40.113518 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:47:40.120138 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:47:40.126790 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:47:40.146404 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:47:40.161284 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:47:40.161321 kernel: Bridge firewalling registered Jun 25 18:47:40.161611 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:47:40.164923 systemd-modules-load[177]: Inserted module 'br_netfilter' Jun 25 18:47:40.170708 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:47:40.176752 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:40.182574 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:47:40.190400 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:47:40.201379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:47:40.211318 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:47:40.216962 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:47:40.232821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:47:40.236990 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:40.248286 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:47:40.253581 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:47:40.264857 dracut-cmdline[209]: dracut-dracut-053 Jun 25 18:47:40.265355 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:47:40.273789 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:47:40.324663 systemd-resolved[217]: Positive Trust Anchors: Jun 25 18:47:40.327040 systemd-resolved[217]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:47:40.327104 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:47:40.349736 systemd-resolved[217]: Defaulting to hostname 'linux'. Jun 25 18:47:40.353327 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:47:40.358798 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:47:40.383176 kernel: SCSI subsystem initialized Jun 25 18:47:40.395173 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:47:40.408180 kernel: iscsi: registered transport (tcp) Jun 25 18:47:40.433723 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:47:40.433806 kernel: QLogic iSCSI HBA Driver Jun 25 18:47:40.469694 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:47:40.480328 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:47:40.510063 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:47:40.510165 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:47:40.513260 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:47:40.558188 kernel: raid6: avx512x4 gen() 18455 MB/s Jun 25 18:47:40.577170 kernel: raid6: avx512x2 gen() 18603 MB/s Jun 25 18:47:40.596168 kernel: raid6: avx512x1 gen() 18563 MB/s Jun 25 18:47:40.616170 kernel: raid6: avx2x4 gen() 18606 MB/s Jun 25 18:47:40.635169 kernel: raid6: avx2x2 gen() 18109 MB/s Jun 25 18:47:40.654960 kernel: raid6: avx2x1 gen() 14124 MB/s Jun 25 18:47:40.654991 kernel: raid6: using algorithm avx2x4 gen() 18606 MB/s Jun 25 18:47:40.676743 kernel: raid6: .... xor() 7443 MB/s, rmw enabled Jun 25 18:47:40.676779 kernel: raid6: using avx512x2 recovery algorithm Jun 25 18:47:40.703182 kernel: xor: automatically using best checksumming function avx Jun 25 18:47:40.872183 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:47:40.881899 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:47:40.891309 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:47:40.904831 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jun 25 18:47:40.909216 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:47:40.927314 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:47:40.941812 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Jun 25 18:47:40.971433 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:47:40.978456 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:47:41.020529 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:47:41.035626 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jun 25 18:47:41.068663 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:47:41.075264 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:47:41.082132 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:47:41.088301 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:47:41.100963 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:47:41.113185 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:47:41.129244 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:47:41.129490 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:41.140923 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:47:41.146757 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:47:41.155849 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 18:47:41.155874 kernel: AES CTR mode by8 optimization enabled Jun 25 18:47:41.147054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:41.161590 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:41.172378 kernel: hv_vmbus: Vmbus version:5.2 Jun 25 18:47:41.173643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:41.179939 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:47:41.195465 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:47:41.198519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:41.212293 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 18:47:41.217474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:41.230730 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 18:47:41.230755 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 18:47:41.241192 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 25 18:47:41.258088 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 18:47:41.258150 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 18:47:41.258171 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 18:47:41.264971 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 25 18:47:41.268899 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 18:47:41.278462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:41.285062 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 18:47:41.290194 kernel: scsi host0: storvsc_host_t Jun 25 18:47:41.294178 kernel: scsi host1: storvsc_host_t Jun 25 18:47:41.298890 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 18:47:41.297994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 25 18:47:41.310258 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 18:47:41.315204 kernel: PTP clock support registered Jun 25 18:47:41.338973 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 18:47:41.339024 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 18:47:41.348135 kernel: hv_vmbus: registering driver hv_utils Jun 25 18:47:41.348179 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:47:41.348197 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 18:47:41.348208 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 18:47:41.348355 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 18:47:41.353191 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 18:47:41.827111 systemd-resolved[217]: Clock change detected. Flushing caches. Jun 25 18:47:41.835210 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:41.858105 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 18:47:41.871772 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 18:47:41.872092 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 18:47:41.872259 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 18:47:41.872434 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 18:47:41.872593 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:47:41.872621 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 18:47:42.005412 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 18:47:42.017039 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (451) Jun 25 18:47:42.032442 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 18:47:42.053582 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (462) Jun 25 18:47:42.074229 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 18:47:42.082545 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 18:47:42.090011 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 18:47:42.105041 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:47:42.123951 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:47:42.135896 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:47:42.141890 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:47:43.143966 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:47:43.144313 disk-uuid[595]: The operation has completed successfully. Jun 25 18:48:01.556854 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:48:01.556998 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:48:01.573114 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:48:01.628744 sh[709]: Success Jun 25 18:48:01.651927 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 18:48:01.731975 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:48:01.750001 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:48:01.755225 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
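The "(32.6 GB/30.4 GiB)" capacity in the sd 0:0:0:0 [sda] lines above is just the logical block count times the block size, expressed in decimal and binary units respectively:

```python
# Sketch: reproduce the "(32.6 GB/30.4 GiB)" capacity the SCSI disk driver
# printed above from the logical block count.
blocks = 63737856                           # "[sda] 63737856 512-byte logical blocks"
block_size = 512
size_bytes = blocks * block_size            # 32633782272
print(f"{size_bytes / 10**9:.1f} GB")       # 32.6 GB  (decimal units)
print(f"{size_bytes / 2**30:.1f} GiB")      # 30.4 GiB (binary units)
```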
Jun 25 18:48:01.785843 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:48:01.785914 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:48:01.789681 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:48:01.792691 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:48:01.795028 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:48:01.883387 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:48:01.888778 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:48:01.904030 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:48:01.911139 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:48:01.928002 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:48:01.928063 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:48:01.928082 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:48:01.936986 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:48:01.946735 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:48:01.952006 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:48:01.960958 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:48:01.973856 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:48:02.008794 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:48:02.021116 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:48:02.050271 systemd-networkd[893]: lo: Link UP Jun 25 18:48:02.050280 systemd-networkd[893]: lo: Gained carrier Jun 25 18:48:02.051253 systemd-networkd[893]: Enumeration completed Jun 25 18:48:02.051595 systemd-networkd[893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:48:02.051599 systemd-networkd[893]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:48:02.052108 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:48:02.058444 systemd[1]: Reached target network.target - Network. Jun 25 18:48:02.060829 systemd-networkd[893]: eth0: Link UP Jun 25 18:48:02.060949 systemd-networkd[893]: eth0: Gained carrier Jun 25 18:48:02.060962 systemd-networkd[893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 25 18:48:02.307624 ignition[832]: Ignition 2.19.0 Jun 25 18:48:02.307637 ignition[832]: Stage: fetch-offline Jun 25 18:48:02.307679 ignition[832]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:02.307690 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:02.307813 ignition[832]: parsed url from cmdline: "" Jun 25 18:48:02.307818 ignition[832]: no config URL provided Jun 25 18:48:02.307825 ignition[832]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:48:02.307836 ignition[832]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:48:02.307843 ignition[832]: failed to fetch config: resource requires networking Jun 25 18:48:02.308096 ignition[832]: Ignition finished successfully Jun 25 18:48:02.327248 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:48:02.338135 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 18:48:02.351275 ignition[901]: Ignition 2.19.0 Jun 25 18:48:02.351285 ignition[901]: Stage: fetch Jun 25 18:48:02.351509 ignition[901]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:02.351523 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:02.351637 ignition[901]: parsed url from cmdline: "" Jun 25 18:48:02.351641 ignition[901]: no config URL provided Jun 25 18:48:02.351647 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:48:02.351654 ignition[901]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:48:02.351687 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 18:48:02.351833 ignition[901]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable Jun 25 18:48:02.552124 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #2 Jun 25 18:48:02.556789 ignition[901]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable Jun 25 18:48:02.666952 systemd-networkd[893]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 18:48:02.957470 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #3 Jun 25 18:48:03.356315 ignition[901]: GET result: OK Jun 25 18:48:03.356493 ignition[901]: config has been read from IMDS userdata Jun 25 18:48:03.356535 ignition[901]: parsing config with SHA512: c89f014a9be7d2e61e7ac312eb7963506eabf73151f80a606b120f1134c00d7fd7d31c4e735414df6670d7e8f287eee523f61a5a95fdeb88fccd52584238ae8a Jun 25 18:48:03.365849 unknown[901]: fetched base config from "system" Jun 25 18:48:03.365864 unknown[901]: fetched base config from "system" Jun 25 18:48:03.367548 ignition[901]: fetch: fetch complete Jun 25 18:48:03.365896 unknown[901]: fetched user config from "azure" Jun 25 18:48:03.367553 ignition[901]: fetch: fetch passed Jun 25 18:48:03.367603 ignition[901]: Ignition finished successfully Jun 25 18:48:03.379231 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:48:03.389032 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
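The Ignition fetch stage above polls the Azure IMDS userData URL, failing with "network is unreachable" until systemd-networkd obtains a DHCP lease, then succeeding on attempt #3. A rough sketch of that fetch-with-retry pattern, not Ignition's actual client: the Metadata: true header and the base64 decoding are assumptions about the IMDS endpoint, since the log records only the URL and the attempts:

```python
# Sketch, not Ignition's client: poll the IMDS userData URL seen in the log
# above, retrying while the network is still coming up.
import base64
import time
import urllib.request

IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/userData"
            "?api-version=2021-01-01&format=text")

def fetch_userdata(attempts: int = 5, delay: float = 2.0) -> bytes:
    # "Metadata: true" and the base64 decode are assumptions about IMDS,
    # not details shown in the log.
    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return base64.b64decode(resp.read())
        except OSError as exc:            # e.g. "network is unreachable"
            print(f"attempt #{attempt} failed: {exc}")
            time.sleep(delay)
    raise RuntimeError("could not reach IMDS")
```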
Jun 25 18:48:03.404542 ignition[909]: Ignition 2.19.0 Jun 25 18:48:03.404553 ignition[909]: Stage: kargs Jun 25 18:48:03.404782 ignition[909]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:03.406649 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:48:03.404795 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:03.405741 ignition[909]: kargs: kargs passed Jun 25 18:48:03.405785 ignition[909]: Ignition finished successfully Jun 25 18:48:03.423007 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:48:03.437537 ignition[916]: Ignition 2.19.0 Jun 25 18:48:03.437547 ignition[916]: Stage: disks Jun 25 18:48:03.439580 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:48:03.437759 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:03.443779 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:48:03.437771 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:03.448400 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:48:03.438686 ignition[916]: disks: disks passed Jun 25 18:48:03.451301 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:48:03.438731 ignition[916]: Ignition finished successfully Jun 25 18:48:03.456122 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:48:03.461434 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:48:03.478300 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:48:03.503755 systemd-fsck[925]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 18:48:03.508261 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:48:03.521982 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:48:03.624183 kernel: EXT4-fs (sda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:48:03.624750 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:48:03.627588 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:48:03.646958 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:48:03.652099 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:48:03.659898 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (936) Jun 25 18:48:03.662053 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 18:48:03.678011 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:48:03.678047 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:48:03.678068 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:48:03.664811 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:48:03.685313 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:48:03.664845 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:48:03.690406 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:48:03.697101 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jun 25 18:48:03.709038 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:48:03.809102 systemd-networkd[893]: eth0: Gained IPv6LL Jun 25 18:48:03.898849 coreos-metadata[938]: Jun 25 18:48:03.898 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:48:03.904815 coreos-metadata[938]: Jun 25 18:48:03.904 INFO Fetch successful Jun 25 18:48:03.907265 coreos-metadata[938]: Jun 25 18:48:03.905 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:48:03.924095 coreos-metadata[938]: Jun 25 18:48:03.924 INFO Fetch successful Jun 25 18:48:03.928303 coreos-metadata[938]: Jun 25 18:48:03.928 INFO wrote hostname ci-4012.0.0-a-c5aaeb7e49 to /sysroot/etc/hostname Jun 25 18:48:03.932098 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:48:03.957591 initrd-setup-root[966]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:48:03.969010 initrd-setup-root[973]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:48:03.976653 initrd-setup-root[980]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:48:03.981538 initrd-setup-root[987]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:48:04.222193 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:48:04.232977 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:48:04.240165 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:48:04.251474 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:48:04.253912 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:48:04.283263 ignition[1058]: INFO : Ignition 2.19.0 Jun 25 18:48:04.283263 ignition[1058]: INFO : Stage: mount Jun 25 18:48:04.290125 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:04.290125 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:04.290125 ignition[1058]: INFO : mount: mount passed Jun 25 18:48:04.290125 ignition[1058]: INFO : Ignition finished successfully Jun 25 18:48:04.285988 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:48:04.290301 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:48:04.304458 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:48:04.630098 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:48:04.655893 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1071) Jun 25 18:48:04.662819 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:48:04.662898 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:48:04.665206 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:48:04.670888 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:48:04.672140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
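The flatcar-metadata-hostname step above fetches the compute name from IMDS and writes it to /sysroot/etc/hostname. A hypothetical sketch of the same two steps, not the coreos-metadata implementation; the Metadata: true header is an assumption, while the URL and file path are taken from the log:

```python
# Sketch, not coreos-metadata: fetch the instance name from the IMDS URL
# logged above and write it out as the hostname file under /sysroot.
import urllib.request

NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
            "?api-version=2017-08-01&format=text")

req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=10) as resp:
    hostname = resp.read().decode().strip()

with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")
print(f"wrote hostname {hostname}")
```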
Jun 25 18:48:04.693776 ignition[1087]: INFO : Ignition 2.19.0 Jun 25 18:48:04.693776 ignition[1087]: INFO : Stage: files Jun 25 18:48:04.698500 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:04.698500 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:04.698500 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:48:04.706977 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:48:04.706977 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:48:04.719940 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:48:04.723943 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:48:04.723943 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:48:04.723943 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:48:04.723943 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:48:04.720391 unknown[1087]: wrote ssh authorized keys file for user: core Jun 25 18:48:04.781839 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:48:04.889667 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:48:04.889667 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:48:04.899496 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:48:04.899496 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:48:04.899496 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:48:04.899496 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:48:04.918126 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:48:04.918126 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jun 25 18:48:05.506349 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:48:05.827987 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:48:05.827987 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: files passed Jun 25 18:48:05.840234 ignition[1087]: INFO : Ignition finished successfully Jun 25 18:48:05.836502 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:48:05.854051 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:48:05.865024 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:48:05.877158 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:48:05.901057 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:48:05.877264 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:48:05.910096 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:48:05.910096 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:48:05.887981 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:48:05.891996 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:48:05.902650 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:48:05.941219 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:48:05.941348 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
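The files stage above downloads the Helm tarball and the kubernetes sysext image, logging each request as `GET ...: attempt #N` and finishing with `GET result: OK`. The sketch below reproduces that retry pattern in Python; the URL and the log wording are taken from the lines above, while the attempt limit and exponential backoff are illustrative assumptions, not Ignition's actual policy.

```python
#!/usr/bin/env python3
"""Sketch of the "GET ...: attempt #N" retry loop seen in the files stage.

URL and messages come from the log; retry count and backoff are assumed.
"""
import time
import urllib.request

HELM_URL = "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"

def fetch_with_retries(url: str, dest: str, attempts: int = 5) -> None:
    for attempt in range(1, attempts + 1):
        print(f"GET {url}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(url, timeout=30) as resp, \
                 open(dest, "wb") as out:
                out.write(resp.read())
            print("GET result: OK")
            return
        except OSError as err:
            print(f"GET error: {err}")
            time.sleep(2 ** attempt)  # assumed exponential backoff
    raise RuntimeError(f"giving up on {url} after {attempts} attempts")

if __name__ == "__main__":
    fetch_with_retries(HELM_URL, "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz")
```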
Jun 25 18:48:05.947556 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:48:05.952948 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:48:05.955479 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:48:05.969092 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:48:05.986357 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:48:05.995026 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:48:06.006202 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:48:06.011752 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:48:06.014853 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:48:06.019950 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:48:06.020100 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:48:06.025647 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:48:06.030136 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:48:06.035358 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:48:06.040640 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:48:06.045834 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:48:06.051495 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:48:06.063520 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:48:06.069812 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:48:06.070955 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:48:06.071332 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:48:06.071737 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:48:06.071864 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:48:06.072660 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:48:06.073573 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:48:06.073960 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:48:06.087475 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:48:06.092806 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:48:06.097019 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:48:06.103050 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:48:06.103211 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:48:06.108307 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:48:06.108451 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:48:06.113566 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 18:48:06.113713 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:48:06.136013 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jun 25 18:48:06.151119 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:48:06.155710 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:48:06.156602 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:48:06.171251 ignition[1141]: INFO : Ignition 2.19.0 Jun 25 18:48:06.171251 ignition[1141]: INFO : Stage: umount Jun 25 18:48:06.171251 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:06.171251 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:06.171251 ignition[1141]: INFO : umount: umount passed Jun 25 18:48:06.171251 ignition[1141]: INFO : Ignition finished successfully Jun 25 18:48:06.159681 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:48:06.159784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:48:06.164667 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:48:06.164755 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:48:06.170509 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:48:06.170610 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:48:06.185164 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:48:06.185210 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:48:06.185914 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:48:06.185951 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:48:06.186309 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:48:06.186343 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 18:48:06.186734 systemd[1]: Stopped target network.target - Network. Jun 25 18:48:06.204793 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:48:06.204846 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:48:06.207780 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:48:06.211202 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:48:06.220918 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:48:06.225838 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:48:06.228217 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:48:06.232787 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:48:06.232846 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:48:06.265065 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:48:06.265130 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:48:06.267787 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:48:06.267852 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:48:06.270377 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:48:06.270433 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:48:06.273305 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:48:06.277354 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:48:06.298491 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jun 25 18:48:06.301002 systemd-networkd[893]: eth0: DHCPv6 lease lost Jun 25 18:48:06.303534 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:48:06.303668 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:48:06.310789 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:48:06.310907 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:48:06.314633 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:48:06.314706 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:48:06.331018 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:48:06.333315 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:48:06.335721 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:48:06.341554 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:48:06.344886 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:48:06.350047 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:48:06.352558 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:48:06.360318 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:48:06.360378 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:48:06.369055 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:48:06.393521 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:48:06.393683 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:48:06.403390 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:48:06.403501 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:48:06.408775 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:48:06.408846 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:48:06.413441 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:48:06.413480 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:48:06.418918 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:48:06.418976 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:48:06.424643 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:48:06.424686 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:48:06.434033 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:48:06.436440 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:48:06.458065 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:48:06.460866 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:48:06.460938 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:48:06.473352 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:48:06.473413 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:48:06.479268 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jun 25 18:48:06.479328 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:48:06.486278 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:48:06.488646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:48:06.494849 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:48:06.494945 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:48:08.582565 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:48:08.582733 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:48:08.588554 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:48:08.593184 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:48:08.593265 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:48:08.610047 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:48:08.617754 systemd[1]: Switching root. Jun 25 18:48:08.700003 systemd-journald[176]: Journal stopped Jun 25 18:47:40.088410 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 18:47:40.088459 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:47:40.088474 kernel: BIOS-provided physical RAM map: Jun 25 18:47:40.088485 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 25 18:47:40.088496 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jun 25 18:47:40.088506 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jun 25 18:47:40.088522 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jun 25 18:47:40.088536 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jun 25 18:47:40.088546 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jun 25 18:47:40.088557 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jun 25 18:47:40.088568 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jun 25 18:47:40.088580 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jun 25 18:47:40.088591 kernel: printk: bootconsole [earlyser0] enabled Jun 25 18:47:40.088603 kernel: NX (Execute Disable) protection: active Jun 25 18:47:40.088620 kernel: APIC: Static calls initialized Jun 25 18:47:40.088633 kernel: efi: EFI v2.7 by Microsoft Jun 25 18:47:40.088646 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Jun 25 18:47:40.088658 kernel: SMBIOS 3.1.0 present. 
Jun 25 18:47:40.088671 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jun 25 18:47:40.088684 kernel: Hypervisor detected: Microsoft Hyper-V Jun 25 18:47:40.088697 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jun 25 18:47:40.088712 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jun 25 18:47:40.088728 kernel: Hyper-V: Nested features: 0x1e0101 Jun 25 18:47:40.088741 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jun 25 18:47:40.088756 kernel: Hyper-V: Using hypercall for remote TLB flush Jun 25 18:47:40.088769 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 18:47:40.088782 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 18:47:40.088796 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jun 25 18:47:40.088809 kernel: tsc: Detected 2593.906 MHz processor Jun 25 18:47:40.088822 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 18:47:40.088836 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 18:47:40.088849 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jun 25 18:47:40.088862 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 25 18:47:40.088877 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 18:47:40.088890 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jun 25 18:47:40.088902 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jun 25 18:47:40.088916 kernel: Using GB pages for direct mapping Jun 25 18:47:40.088928 kernel: Secure boot disabled Jun 25 18:47:40.088941 kernel: ACPI: Early table checksum verification disabled Jun 25 18:47:40.088955 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jun 25 18:47:40.088973 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.088990 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089004 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 18:47:40.089017 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jun 25 18:47:40.089032 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089045 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089059 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089076 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089090 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089104 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089118 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:47:40.089132 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jun 25 18:47:40.089146 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jun 25 18:47:40.089179 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jun 25 18:47:40.089192 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jun 25 18:47:40.089208 kernel: 
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jun 25 18:47:40.089223 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jun 25 18:47:40.089236 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jun 25 18:47:40.089250 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jun 25 18:47:40.089263 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jun 25 18:47:40.089276 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jun 25 18:47:40.089291 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 18:47:40.089304 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 18:47:40.089318 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jun 25 18:47:40.089335 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jun 25 18:47:40.089349 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jun 25 18:47:40.089363 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jun 25 18:47:40.089377 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jun 25 18:47:40.089391 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jun 25 18:47:40.089404 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jun 25 18:47:40.089418 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jun 25 18:47:40.089432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jun 25 18:47:40.089446 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jun 25 18:47:40.089463 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jun 25 18:47:40.089476 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jun 25 18:47:40.089491 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jun 25 18:47:40.089505 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jun 25 18:47:40.089518 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jun 25 18:47:40.089532 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jun 25 18:47:40.089546 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jun 25 18:47:40.089561 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jun 25 18:47:40.089574 kernel: Zone ranges: Jun 25 18:47:40.089590 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 18:47:40.089604 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 25 18:47:40.089617 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 18:47:40.089631 kernel: Movable zone start for each node Jun 25 18:47:40.089645 kernel: Early memory node ranges Jun 25 18:47:40.089659 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 25 18:47:40.089672 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jun 25 18:47:40.089685 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jun 25 18:47:40.089697 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 18:47:40.089713 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jun 25 18:47:40.089727 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:47:40.089740 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 25 18:47:40.089752 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jun 25 18:47:40.089766 kernel: ACPI: PM-Timer IO Port: 0x408 Jun 25 18:47:40.089779 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jun 25 18:47:40.089792 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jun 25 18:47:40.089805 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 18:47:40.089817 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 18:47:40.089835 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jun 25 18:47:40.089849 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 18:47:40.089864 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jun 25 18:47:40.089878 kernel: Booting paravirtualized kernel on Hyper-V Jun 25 18:47:40.089893 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 18:47:40.089907 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 18:47:40.089922 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jun 25 18:47:40.089937 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jun 25 18:47:40.089951 kernel: pcpu-alloc: [0] 0 1 Jun 25 18:47:40.089968 kernel: Hyper-V: PV spinlocks enabled Jun 25 18:47:40.089983 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 18:47:40.090000 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:47:40.090015 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 18:47:40.090029 kernel: random: crng init done Jun 25 18:47:40.090044 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 25 18:47:40.090058 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:47:40.090073 kernel: Fallback order for Node 0: 0 Jun 25 18:47:40.090091 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jun 25 18:47:40.090118 kernel: Policy zone: Normal Jun 25 18:47:40.090136 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:47:40.090150 kernel: software IO TLB: area num 2. Jun 25 18:47:40.090203 kernel: Memory: 8070928K/8387460K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 316272K reserved, 0K cma-reserved) Jun 25 18:47:40.090217 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:47:40.090232 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 18:47:40.090247 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 18:47:40.090261 kernel: Dynamic Preempt: voluntary Jun 25 18:47:40.090276 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:47:40.090292 kernel: rcu: RCU event tracing is enabled. Jun 25 18:47:40.090311 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:47:40.090326 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:47:40.090341 kernel: Rude variant of Tasks RCU enabled. Jun 25 18:47:40.090356 kernel: Tracing variant of Tasks RCU enabled. 
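The "Early memory node ranges" printed above and the later "Memory: 8070928K/8387460K available" line are mutually consistent: summing the 4 KiB pages in the four usable node-0 ranges gives exactly the 8387460K total. A quick check, with the range endpoints copied from the log (printed start/end are inclusive):

```python
# Usable node-0 ranges from the "Early memory node ranges" lines above,
# as (start, inclusive end) physical addresses.
ranges = [
    (0x0000000000001000, 0x000000000009ffff),
    (0x0000000000100000, 0x000000003ff40fff),
    (0x000000003ffff000, 0x000000003fffffff),
    (0x0000000100000000, 0x00000002bfffffff),
]

PAGE = 4096
pages = sum((end + 1 - start) // PAGE for start, end in ranges)
print(pages)                  # 2096865 pages
print(pages * PAGE // 1024)   # 8387460 KiB, matching ".../8387460K available"
```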
Jun 25 18:47:40.090371 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:47:40.090389 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:47:40.090403 kernel: Using NULL legacy PIC Jun 25 18:47:40.090418 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jun 25 18:47:40.090433 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:47:40.090448 kernel: Console: colour dummy device 80x25 Jun 25 18:47:40.090462 kernel: printk: console [tty1] enabled Jun 25 18:47:40.090477 kernel: printk: console [ttyS0] enabled Jun 25 18:47:40.090492 kernel: printk: bootconsole [earlyser0] disabled Jun 25 18:47:40.090506 kernel: ACPI: Core revision 20230628 Jun 25 18:47:40.090521 kernel: Failed to register legacy timer interrupt Jun 25 18:47:40.090539 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 18:47:40.090554 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 18:47:40.090569 kernel: Hyper-V: Using IPI hypercalls Jun 25 18:47:40.090584 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jun 25 18:47:40.090598 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jun 25 18:47:40.090613 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jun 25 18:47:40.090628 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jun 25 18:47:40.090643 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jun 25 18:47:40.090658 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jun 25 18:47:40.090676 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Jun 25 18:47:40.090691 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 18:47:40.090706 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 18:47:40.090721 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 18:47:40.090736 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 18:47:40.090750 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 18:47:40.090764 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 18:47:40.090779 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
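The 5187.81 BogoMIPS figure above follows from the printed loops-per-jiffy value via the standard formula BogoMIPS = lpj * HZ / 500000; doubling it gives the 10375.62 total reported once the second CPU is brought up further down. HZ=1000 is an assumption here, though it is consistent with lpj matching the 2593.906 MHz TSC rate in kHz:

```python
lpj = 2593906   # loops per jiffy, from the calibration line above
HZ = 1000       # assumed; consistent with lpj == TSC rate in kHz (2593.906 MHz)

bogomips_per_cpu = lpj * HZ / 500000        # standard BogoMIPS formula
print(round(bogomips_per_cpu, 2))           # 5187.81
print(round(2 * bogomips_per_cpu, 2))       # 10375.62 for the 2 activated CPUs
```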
Jun 25 18:47:40.090794 kernel: RETBleed: Vulnerable Jun 25 18:47:40.090812 kernel: Speculative Store Bypass: Vulnerable Jun 25 18:47:40.090826 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:47:40.090841 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:47:40.090856 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 18:47:40.090870 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 18:47:40.090885 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 18:47:40.090900 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 18:47:40.090914 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 25 18:47:40.090928 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 25 18:47:40.090943 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 25 18:47:40.090958 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 18:47:40.090975 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jun 25 18:47:40.090990 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jun 25 18:47:40.091004 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jun 25 18:47:40.091019 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jun 25 18:47:40.091033 kernel: Freeing SMP alternatives memory: 32K Jun 25 18:47:40.091048 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:47:40.091063 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:47:40.091077 kernel: SELinux: Initializing. Jun 25 18:47:40.091092 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:47:40.091106 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:47:40.091121 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 25 18:47:40.091136 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:47:40.091188 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:47:40.091202 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:47:40.091217 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 25 18:47:40.091232 kernel: signal: max sigframe size: 3632 Jun 25 18:47:40.091246 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:47:40.091260 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:47:40.091273 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 18:47:40.091287 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:47:40.091299 kernel: smpboot: x86: Booting SMP configuration: Jun 25 18:47:40.091317 kernel: .... node #0, CPUs: #1 Jun 25 18:47:40.091331 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jun 25 18:47:40.091345 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
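The xstate offsets above follow from the compacted XSAVE layout: a 512-byte legacy FXSAVE area plus a 64-byte XSAVE header, then each enabled component packed back to back. Replaying that layout with the component sizes from the log reproduces every offset and the 2432-byte context size:

```python
# XSAVE component sizes from the x86/fpu lines above (compacted format):
# 2 = AVX, 5 = AVX-512 opmask, 6 = AVX-512 Hi256, 7 = AVX-512 ZMM_Hi256.
sizes = {2: 256, 5: 64, 6: 512, 7: 1024}

offset = 512 + 64   # legacy FXSAVE area + XSAVE header
for feature, size in sorted(sizes.items()):
    print(f"xstate_offset[{feature}]: {offset}, xstate_sizes[{feature}]: {size}")
    offset += size
print(f"context size is {offset} bytes")    # 2432, as in the log
```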
Jun 25 18:47:40.091358 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:47:40.091373 kernel: smpboot: Max logical packages: 1 Jun 25 18:47:40.091386 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jun 25 18:47:40.091400 kernel: devtmpfs: initialized Jun 25 18:47:40.091413 kernel: x86/mm: Memory block size: 128MB Jun 25 18:47:40.091430 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jun 25 18:47:40.091443 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:47:40.091457 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:47:40.091471 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:47:40.091486 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:47:40.091499 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:47:40.091513 kernel: audit: type=2000 audit(1719341259.027:1): state=initialized audit_enabled=0 res=1 Jun 25 18:47:40.091526 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:47:40.091540 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 18:47:40.091556 kernel: cpuidle: using governor menu Jun 25 18:47:40.091570 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:47:40.091583 kernel: dca service started, version 1.12.1 Jun 25 18:47:40.091599 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jun 25 18:47:40.091616 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 18:47:40.091631 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:47:40.091647 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:47:40.091662 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:47:40.091676 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:47:40.091693 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:47:40.091706 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:47:40.091719 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:47:40.091733 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:47:40.091746 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:47:40.091759 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 18:47:40.091772 kernel: ACPI: Interpreter enabled Jun 25 18:47:40.091786 kernel: ACPI: PM: (supports S0 S5) Jun 25 18:47:40.091800 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 18:47:40.091816 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 18:47:40.091830 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 25 18:47:40.091844 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jun 25 18:47:40.091857 kernel: iommu: Default domain type: Translated Jun 25 18:47:40.091872 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 18:47:40.091885 kernel: efivars: Registered efivars operations Jun 25 18:47:40.091899 kernel: PCI: Using ACPI for IRQ routing Jun 25 18:47:40.091913 kernel: PCI: System does not support PCI Jun 25 18:47:40.091926 kernel: vgaarb: loaded Jun 25 18:47:40.091942 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jun 25 18:47:40.091956 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:47:40.091970 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:47:40.091984 kernel: 
pnp: PnP ACPI init Jun 25 18:47:40.091998 kernel: pnp: PnP ACPI: found 3 devices Jun 25 18:47:40.092013 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 18:47:40.092027 kernel: NET: Registered PF_INET protocol family Jun 25 18:47:40.092040 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 18:47:40.092055 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 25 18:47:40.092072 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:47:40.092086 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:47:40.092101 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 25 18:47:40.092115 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 25 18:47:40.092129 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 18:47:40.092143 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 18:47:40.092247 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:47:40.092262 kernel: NET: Registered PF_XDP protocol family Jun 25 18:47:40.092275 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:47:40.092294 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 25 18:47:40.092309 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Jun 25 18:47:40.092324 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 18:47:40.092339 kernel: Initialise system trusted keyrings Jun 25 18:47:40.092353 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 25 18:47:40.092368 kernel: Key type asymmetric registered Jun 25 18:47:40.092383 kernel: Asymmetric key parser 'x509' registered Jun 25 18:47:40.092397 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 18:47:40.092412 kernel: io scheduler mq-deadline registered Jun 25 18:47:40.092431 kernel: io scheduler kyber registered Jun 25 18:47:40.092446 kernel: io scheduler bfq registered Jun 25 18:47:40.092460 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 18:47:40.092475 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:47:40.092490 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 18:47:40.092504 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 25 18:47:40.092519 kernel: i8042: PNP: No PS/2 controller found. 
Jun 25 18:47:40.092706 kernel: rtc_cmos 00:02: registered as rtc0 Jun 25 18:47:40.092835 kernel: rtc_cmos 00:02: setting system clock to 2024-06-25T18:47:39 UTC (1719341259) Jun 25 18:47:40.092949 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jun 25 18:47:40.092968 kernel: intel_pstate: CPU model not supported Jun 25 18:47:40.092983 kernel: efifb: probing for efifb Jun 25 18:47:40.092998 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 18:47:40.093014 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 18:47:40.093029 kernel: efifb: scrolling: redraw Jun 25 18:47:40.093044 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 18:47:40.093063 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:47:40.093077 kernel: fb0: EFI VGA frame buffer device Jun 25 18:47:40.093093 kernel: pstore: Using crash dump compression: deflate Jun 25 18:47:40.093108 kernel: pstore: Registered efi_pstore as persistent store backend Jun 25 18:47:40.093123 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:47:40.093138 kernel: Segment Routing with IPv6 Jun 25 18:47:40.093168 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:47:40.093182 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:47:40.093197 kernel: Key type dns_resolver registered Jun 25 18:47:40.093212 kernel: IPI shorthand broadcast: enabled Jun 25 18:47:40.093230 kernel: sched_clock: Marking stable (856178100, 45116200)->(1112154500, -210860200) Jun 25 18:47:40.093245 kernel: registered taskstats version 1 Jun 25 18:47:40.093260 kernel: Loading compiled-in X.509 certificates Jun 25 18:47:40.093275 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 18:47:40.093289 kernel: Key type .fscrypt registered Jun 25 18:47:40.093304 kernel: Key type fscrypt-provisioning registered Jun 25 18:47:40.093319 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 18:47:40.093334 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:47:40.093352 kernel: ima: No architecture policies found Jun 25 18:47:40.093367 kernel: clk: Disabling unused clocks Jun 25 18:47:40.093382 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 18:47:40.093396 kernel: Write protecting the kernel read-only data: 36864k Jun 25 18:47:40.093412 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 18:47:40.093427 kernel: Run /init as init process Jun 25 18:47:40.093441 kernel: with arguments: Jun 25 18:47:40.093456 kernel: /init Jun 25 18:47:40.093470 kernel: with environment: Jun 25 18:47:40.093487 kernel: HOME=/ Jun 25 18:47:40.093502 kernel: TERM=linux Jun 25 18:47:40.093516 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:47:40.093534 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:47:40.093552 systemd[1]: Detected virtualization microsoft. Jun 25 18:47:40.093568 systemd[1]: Detected architecture x86-64. Jun 25 18:47:40.093584 systemd[1]: Running in initrd. Jun 25 18:47:40.093599 systemd[1]: No hostname configured, using default hostname. Jun 25 18:47:40.093617 systemd[1]: Hostname set to . Jun 25 18:47:40.093633 systemd[1]: Initializing machine ID from random generator. 
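The rtc_cmos line above pairs the wall-clock time with its Unix timestamp, and the two agree, which is easy to confirm:

```python
from datetime import datetime, timezone

epoch = 1719341259   # value printed by rtc_cmos above
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# 2024-06-25T18:47:39+00:00, matching "2024-06-25T18:47:39 UTC"
```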
Jun 25 18:47:40.093648 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:47:40.093664 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:47:40.093680 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:47:40.093696 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:47:40.093712 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:47:40.093728 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:47:40.093747 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:47:40.093765 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:47:40.093781 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:47:40.093798 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:47:40.093814 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:47:40.093830 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:47:40.093846 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:47:40.093864 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:47:40.093880 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:47:40.093896 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:47:40.093911 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:47:40.093927 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:47:40.093943 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:47:40.093959 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:47:40.093975 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:47:40.093993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:47:40.094009 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:47:40.094025 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:47:40.094041 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:47:40.094057 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:47:40.094072 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:47:40.094088 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:47:40.094104 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:47:40.094120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:40.094171 systemd-journald[176]: Collecting audit messages is disabled. Jun 25 18:47:40.094208 systemd-journald[176]: Journal started Jun 25 18:47:40.094245 systemd-journald[176]: Runtime Journal (/run/log/journal/72145f8f3d1b4174804d9f4492e3d73a) is 8.0M, max 158.8M, 150.8M free. Jun 25 18:47:40.101180 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 25 18:47:40.104414 systemd-modules-load[177]: Inserted module 'overlay' Jun 25 18:47:40.113518 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:47:40.120138 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:47:40.126790 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:47:40.146404 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:47:40.161284 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:47:40.161321 kernel: Bridge firewalling registered Jun 25 18:47:40.161611 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:47:40.164923 systemd-modules-load[177]: Inserted module 'br_netfilter' Jun 25 18:47:40.170708 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:47:40.176752 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:40.182574 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:47:40.190400 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:47:40.201379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:47:40.211318 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:47:40.216962 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:47:40.232821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:47:40.236990 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:40.248286 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:47:40.253581 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:47:40.264857 dracut-cmdline[209]: dracut-dracut-053 Jun 25 18:47:40.265355 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:47:40.273789 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:47:40.324663 systemd-resolved[217]: Positive Trust Anchors: Jun 25 18:47:40.327040 systemd-resolved[217]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:47:40.327104 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:47:40.349736 systemd-resolved[217]: Defaulting to hostname 'linux'. Jun 25 18:47:40.353327 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:47:40.358798 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:47:40.383176 kernel: SCSI subsystem initialized Jun 25 18:47:40.395173 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:47:40.408180 kernel: iscsi: registered transport (tcp) Jun 25 18:47:40.433723 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:47:40.433806 kernel: QLogic iSCSI HBA Driver Jun 25 18:47:40.469694 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:47:40.480328 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:47:40.510063 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:47:40.510165 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:47:40.513260 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:47:40.558188 kernel: raid6: avx512x4 gen() 18455 MB/s Jun 25 18:47:40.577170 kernel: raid6: avx512x2 gen() 18603 MB/s Jun 25 18:47:40.596168 kernel: raid6: avx512x1 gen() 18563 MB/s Jun 25 18:47:40.616170 kernel: raid6: avx2x4 gen() 18606 MB/s Jun 25 18:47:40.635169 kernel: raid6: avx2x2 gen() 18109 MB/s Jun 25 18:47:40.654960 kernel: raid6: avx2x1 gen() 14124 MB/s Jun 25 18:47:40.654991 kernel: raid6: using algorithm avx2x4 gen() 18606 MB/s Jun 25 18:47:40.676743 kernel: raid6: .... xor() 7443 MB/s, rmw enabled Jun 25 18:47:40.676779 kernel: raid6: using avx512x2 recovery algorithm Jun 25 18:47:40.703182 kernel: xor: automatically using best checksumming function avx Jun 25 18:47:40.872183 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:47:40.881899 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:47:40.891309 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:47:40.904831 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jun 25 18:47:40.909216 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:47:40.927314 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:47:40.941812 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Jun 25 18:47:40.971433 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:47:40.978456 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:47:41.020529 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:47:41.035626 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
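The raid6 lines above show the kernel benchmarking each SIMD gen() implementation and keeping the fastest; with this boot's measurements that is avx2x4 at 18606 MB/s, narrowly ahead of avx512x2. The selection step is just an argmax over the measured throughputs:

```python
# gen() throughputs (MB/s) measured during this boot, from the raid6 lines above.
gen_results = {
    "avx512x4": 18455,
    "avx512x2": 18603,
    "avx512x1": 18563,
    "avx2x4":   18606,
    "avx2x2":   18109,
    "avx2x1":   14124,
}

best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
```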
Jun 25 18:47:41.068663 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:47:41.075264 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:47:41.082132 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:47:41.088301 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:47:41.100963 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:47:41.113185 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:47:41.129244 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:47:41.129490 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:41.140923 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:47:41.146757 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:47:41.155849 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 18:47:41.155874 kernel: AES CTR mode by8 optimization enabled Jun 25 18:47:41.147054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:41.161590 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:41.172378 kernel: hv_vmbus: Vmbus version:5.2 Jun 25 18:47:41.173643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:41.179939 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:47:41.195465 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:47:41.198519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:41.212293 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 18:47:41.217474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:41.230730 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 18:47:41.230755 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 18:47:41.241192 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 25 18:47:41.258088 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 18:47:41.258150 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 18:47:41.258171 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 18:47:41.264971 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 25 18:47:41.268899 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 18:47:41.278462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:41.285062 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 18:47:41.290194 kernel: scsi host0: storvsc_host_t Jun 25 18:47:41.294178 kernel: scsi host1: storvsc_host_t Jun 25 18:47:41.298890 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 18:47:41.297994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 25 18:47:41.310258 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 18:47:41.315204 kernel: PTP clock support registered Jun 25 18:47:41.338973 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 18:47:41.339024 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 18:47:41.348135 kernel: hv_vmbus: registering driver hv_utils Jun 25 18:47:41.348179 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:47:41.348197 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 18:47:41.348208 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 18:47:41.348355 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 18:47:41.353191 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 18:47:41.827111 systemd-resolved[217]: Clock change detected. Flushing caches. Jun 25 18:47:41.835210 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:41.858105 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 18:47:41.871772 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 18:47:41.872092 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 18:47:41.872259 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 18:47:41.872434 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 18:47:41.872593 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:47:41.872621 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 18:47:42.005412 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 18:47:42.017039 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (451) Jun 25 18:47:42.032442 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 18:47:42.053582 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (462) Jun 25 18:47:42.074229 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 18:47:42.082545 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 18:47:42.090011 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 18:47:42.105041 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:47:42.123951 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:47:42.135896 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:47:42.141890 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:47:43.143966 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:47:43.144313 disk-uuid[595]: The operation has completed successfully. Jun 25 18:48:01.556854 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:48:01.556998 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:48:01.573114 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:48:01.628744 sh[709]: Success Jun 25 18:48:01.651927 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 18:48:01.731975 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:48:01.750001 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:48:01.755225 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
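The storvsc attach above reports sda as "63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)". A quick worked calculation showing how both figures follow from the block count:

# How the kernel's "(32.6 GB/30.4 GiB)" figure for sda follows from the block count.
blocks = 63737856          # 512-byte logical blocks reported for sda
size_bytes = blocks * 512  # 32,633,782,272 bytes

print(round(size_bytes / 10**9, 1))   # 32.6  (decimal gigabytes, GB)
print(round(size_bytes / 2**30, 1))   # 30.4  (binary gibibytes, GiB)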
Jun 25 18:48:01.785843 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:48:01.785914 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:48:01.789681 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:48:01.792691 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:48:01.795028 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:48:01.883387 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:48:01.888778 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:48:01.904030 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:48:01.911139 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:48:01.928002 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:48:01.928063 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:48:01.928082 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:48:01.936986 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:48:01.946735 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:48:01.952006 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:48:01.960958 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:48:01.973856 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:48:02.008794 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:48:02.021116 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:48:02.050271 systemd-networkd[893]: lo: Link UP Jun 25 18:48:02.050280 systemd-networkd[893]: lo: Gained carrier Jun 25 18:48:02.051253 systemd-networkd[893]: Enumeration completed Jun 25 18:48:02.051595 systemd-networkd[893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:48:02.051599 systemd-networkd[893]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:48:02.052108 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:48:02.058444 systemd[1]: Reached target network.target - Network. Jun 25 18:48:02.060829 systemd-networkd[893]: eth0: Link UP Jun 25 18:48:02.060949 systemd-networkd[893]: eth0: Gained carrier Jun 25 18:48:02.060962 systemd-networkd[893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 25 18:48:02.307624 ignition[832]: Ignition 2.19.0 Jun 25 18:48:02.307637 ignition[832]: Stage: fetch-offline Jun 25 18:48:02.307679 ignition[832]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:02.307690 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:02.307813 ignition[832]: parsed url from cmdline: "" Jun 25 18:48:02.307818 ignition[832]: no config URL provided Jun 25 18:48:02.307825 ignition[832]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:48:02.307836 ignition[832]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:48:02.307843 ignition[832]: failed to fetch config: resource requires networking Jun 25 18:48:02.308096 ignition[832]: Ignition finished successfully Jun 25 18:48:02.327248 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:48:02.338135 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 18:48:02.351275 ignition[901]: Ignition 2.19.0 Jun 25 18:48:02.351285 ignition[901]: Stage: fetch Jun 25 18:48:02.351509 ignition[901]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:02.351523 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:02.351637 ignition[901]: parsed url from cmdline: "" Jun 25 18:48:02.351641 ignition[901]: no config URL provided Jun 25 18:48:02.351647 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:48:02.351654 ignition[901]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:48:02.351687 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 18:48:02.351833 ignition[901]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable Jun 25 18:48:02.552124 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #2 Jun 25 18:48:02.556789 ignition[901]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable Jun 25 18:48:02.666952 systemd-networkd[893]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 18:48:02.957470 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #3 Jun 25 18:48:03.356315 ignition[901]: GET result: OK Jun 25 18:48:03.356493 ignition[901]: config has been read from IMDS userdata Jun 25 18:48:03.356535 ignition[901]: parsing config with SHA512: c89f014a9be7d2e61e7ac312eb7963506eabf73151f80a606b120f1134c00d7fd7d31c4e735414df6670d7e8f287eee523f61a5a95fdeb88fccd52584238ae8a Jun 25 18:48:03.365849 unknown[901]: fetched base config from "system" Jun 25 18:48:03.365864 unknown[901]: fetched base config from "system" Jun 25 18:48:03.367548 ignition[901]: fetch: fetch complete Jun 25 18:48:03.365896 unknown[901]: fetched user config from "azure" Jun 25 18:48:03.367553 ignition[901]: fetch: fetch passed Jun 25 18:48:03.367603 ignition[901]: Ignition finished successfully Jun 25 18:48:03.379231 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:48:03.389032 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
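The fetch stage above polls the Azure instance metadata endpoint, fails with "network is unreachable" until DHCP assigns 10.200.8.15, then reads and parses the userdata config. A minimal sketch of that fetch-with-retry pattern using the standard library; the "Metadata: true" header and the base64 decoding are assumptions about Azure IMDS that the log itself does not show:

# Sketch of the retry pattern Ignition's fetch stage demonstrates above.
# Assumptions not in the log: Azure IMDS expects a "Metadata: true" header,
# and userData comes back base64-encoded; treat both as illustrative.
import base64
import time
import urllib.error
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

def fetch_userdata(max_attempts=10, delay=2.0):
    for attempt in range(1, max_attempts + 1):
        req = urllib.request.Request(URL, headers={"Metadata": "true"})
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return base64.b64decode(resp.read())
        except (urllib.error.URLError, OSError) as err:
            # Early attempts fail with "network is unreachable" until DHCP completes.
            print(f"GET error (attempt #{attempt}): {err}")
            time.sleep(delay)
    raise RuntimeError("could not reach IMDS")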
Jun 25 18:48:03.404542 ignition[909]: Ignition 2.19.0 Jun 25 18:48:03.404553 ignition[909]: Stage: kargs Jun 25 18:48:03.404782 ignition[909]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:03.406649 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:48:03.404795 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:03.405741 ignition[909]: kargs: kargs passed Jun 25 18:48:03.405785 ignition[909]: Ignition finished successfully Jun 25 18:48:03.423007 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:48:03.437537 ignition[916]: Ignition 2.19.0 Jun 25 18:48:03.437547 ignition[916]: Stage: disks Jun 25 18:48:03.439580 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:48:03.437759 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:03.443779 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:48:03.437771 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:03.448400 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:48:03.438686 ignition[916]: disks: disks passed Jun 25 18:48:03.451301 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:48:03.438731 ignition[916]: Ignition finished successfully Jun 25 18:48:03.456122 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:48:03.461434 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:48:03.478300 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:48:03.503755 systemd-fsck[925]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 18:48:03.508261 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:48:03.521982 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:48:03.624183 kernel: EXT4-fs (sda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:48:03.624750 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:48:03.627588 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:48:03.646958 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:48:03.652099 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:48:03.659898 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (936) Jun 25 18:48:03.662053 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 18:48:03.678011 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:48:03.678047 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:48:03.678068 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:48:03.664811 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:48:03.685313 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:48:03.664845 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:48:03.690406 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:48:03.697101 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
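The fsck summary above, "clean, 14/7326000 files, 477710/7359488 blocks", is a used/total pair for inodes and blocks on the ROOT filesystem. A tiny worked example of what those ratios amount to:

# Reading the systemd-fsck summary above: inode and block utilisation of ROOT.
files_used, files_total = 14, 7326000
blocks_used, blocks_total = 477710, 7359488

print(f"inodes in use: {100 * files_used / files_total:.4f}%")   # ~0.0002%
print(f"blocks in use: {100 * blocks_used / blocks_total:.1f}%") # ~6.5%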
Jun 25 18:48:03.709038 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:48:03.809102 systemd-networkd[893]: eth0: Gained IPv6LL Jun 25 18:48:03.898849 coreos-metadata[938]: Jun 25 18:48:03.898 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:48:03.904815 coreos-metadata[938]: Jun 25 18:48:03.904 INFO Fetch successful Jun 25 18:48:03.907265 coreos-metadata[938]: Jun 25 18:48:03.905 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:48:03.924095 coreos-metadata[938]: Jun 25 18:48:03.924 INFO Fetch successful Jun 25 18:48:03.928303 coreos-metadata[938]: Jun 25 18:48:03.928 INFO wrote hostname ci-4012.0.0-a-c5aaeb7e49 to /sysroot/etc/hostname Jun 25 18:48:03.932098 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:48:03.957591 initrd-setup-root[966]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:48:03.969010 initrd-setup-root[973]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:48:03.976653 initrd-setup-root[980]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:48:03.981538 initrd-setup-root[987]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:48:04.222193 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:48:04.232977 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:48:04.240165 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:48:04.251474 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:48:04.253912 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:48:04.283263 ignition[1058]: INFO : Ignition 2.19.0 Jun 25 18:48:04.283263 ignition[1058]: INFO : Stage: mount Jun 25 18:48:04.290125 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:04.290125 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:04.290125 ignition[1058]: INFO : mount: mount passed Jun 25 18:48:04.290125 ignition[1058]: INFO : Ignition finished successfully Jun 25 18:48:04.285988 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:48:04.290301 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:48:04.304458 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:48:04.630098 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:48:04.655893 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1071) Jun 25 18:48:04.662819 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:48:04.662898 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:48:04.665206 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:48:04.670888 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:48:04.672140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
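The flatcar-metadata-hostname lines above fetch the compute name from IMDS and write "ci-4012.0.0-a-c5aaeb7e49" into /sysroot/etc/hostname. A hedged sketch of that flow with the standard library; as before, the "Metadata: true" header is an assumption about Azure IMDS rather than something the log records:

# Sketch of the hostname flow logged above: fetch the instance name from IMDS
# and write it into the target root. Illustrative only.
import urllib.request

NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
            "?api-version=2017-08-01&format=text")

def write_hostname(sysroot="/sysroot"):
    req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        name = resp.read().decode().strip()   # e.g. ci-4012.0.0-a-c5aaeb7e49
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(name + "\n")
    return name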
Jun 25 18:48:04.693776 ignition[1087]: INFO : Ignition 2.19.0 Jun 25 18:48:04.693776 ignition[1087]: INFO : Stage: files Jun 25 18:48:04.698500 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:04.698500 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:04.698500 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:48:04.706977 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:48:04.706977 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:48:04.719940 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:48:04.723943 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:48:04.723943 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:48:04.723943 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:48:04.723943 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:48:04.720391 unknown[1087]: wrote ssh authorized keys file for user: core Jun 25 18:48:04.781839 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:48:04.889667 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:48:04.889667 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:48:04.899496 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:48:04.899496 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:48:04.899496 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:48:04.899496 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:48:04.918126 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:48:04.918126 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:48:04.927491 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jun 25 18:48:05.506349 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:48:05.827987 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:48:05.827987 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:48:05.840234 ignition[1087]: INFO : files: files passed Jun 25 18:48:05.840234 ignition[1087]: INFO : Ignition finished successfully Jun 25 18:48:05.836502 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:48:05.854051 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:48:05.865024 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:48:05.877158 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:48:05.901057 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:48:05.877264 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:48:05.910096 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:48:05.910096 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:48:05.887981 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:48:05.891996 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:48:05.902650 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:48:05.941219 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:48:05.941348 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
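The files stage above writes SSH keys for core, the Helm tarball under /opt, several files in /home/core, /etc/flatcar/update.conf, the link /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw, and enables prepare-helm.service. A hedged sketch of the kind of Ignition v3 config that would drive those operations; field names follow the Ignition 3.x spec as commonly documented, and the actual config read from IMDS is not shown in the log, so treat the shape as illustrative only:

# Illustrative Python that emits an Ignition-v3-style config matching the
# operations in the files stage above. Placeholder values and omitted file
# contents are marked; the real config is supplied via IMDS userdata.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {"users": [{"name": "core", "sshAuthorizedKeys": ["<key>"]}]},  # <key> is a placeholder
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
            {"path": "/home/core/install.sh"},      # contents omitted in this sketch
            {"path": "/home/core/nginx.yaml"},
            {"path": "/home/core/nfs-pod.yaml"},
            {"path": "/home/core/nfs-pvc.yaml"},
            {"path": "/etc/flatcar/update.conf"},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"},
        ],
    },
    "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
}

print(json.dumps(config, indent=2))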
Jun 25 18:48:05.947556 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:48:05.952948 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:48:05.955479 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:48:05.969092 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:48:05.986357 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:48:05.995026 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:48:06.006202 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:48:06.011752 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:48:06.014853 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:48:06.019950 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:48:06.020100 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:48:06.025647 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:48:06.030136 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:48:06.035358 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:48:06.040640 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:48:06.045834 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:48:06.051495 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:48:06.063520 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:48:06.069812 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:48:06.070955 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:48:06.071332 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:48:06.071737 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:48:06.071864 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:48:06.072660 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:48:06.073573 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:48:06.073960 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:48:06.087475 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:48:06.092806 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:48:06.097019 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:48:06.103050 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:48:06.103211 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:48:06.108307 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:48:06.108451 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:48:06.113566 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 18:48:06.113713 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:48:06.136013 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jun 25 18:48:06.151119 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:48:06.155710 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:48:06.156602 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:48:06.171251 ignition[1141]: INFO : Ignition 2.19.0 Jun 25 18:48:06.171251 ignition[1141]: INFO : Stage: umount Jun 25 18:48:06.171251 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:48:06.171251 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:48:06.171251 ignition[1141]: INFO : umount: umount passed Jun 25 18:48:06.171251 ignition[1141]: INFO : Ignition finished successfully Jun 25 18:48:06.159681 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:48:06.159784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:48:06.164667 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:48:06.164755 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:48:06.170509 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:48:06.170610 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:48:06.185164 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:48:06.185210 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:48:06.185914 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:48:06.185951 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:48:06.186309 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:48:06.186343 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 18:48:06.186734 systemd[1]: Stopped target network.target - Network. Jun 25 18:48:06.204793 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:48:06.204846 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:48:06.207780 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:48:06.211202 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:48:06.220918 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:48:06.225838 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:48:06.228217 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:48:06.232787 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:48:06.232846 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:48:06.265065 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:48:06.265130 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:48:06.267787 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:48:06.267852 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:48:06.270377 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:48:06.270433 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:48:06.273305 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:48:06.277354 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:48:06.298491 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jun 25 18:48:06.301002 systemd-networkd[893]: eth0: DHCPv6 lease lost Jun 25 18:48:06.303534 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:48:06.303668 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:48:06.310789 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:48:06.310907 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:48:06.314633 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:48:06.314706 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:48:06.331018 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:48:06.333315 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:48:06.335721 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:48:06.341554 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:48:06.344886 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:48:06.350047 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:48:06.352558 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:48:06.360318 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:48:06.360378 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:48:06.369055 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:48:06.393521 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:48:06.393683 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:48:06.403390 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:48:06.403501 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:48:06.408775 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:48:06.408846 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:48:06.413441 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:48:06.413480 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:48:06.418918 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:48:06.418976 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:48:06.424643 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:48:06.424686 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:48:06.434033 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:48:06.436440 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:48:06.458065 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:48:06.460866 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:48:06.460938 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:48:06.473352 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:48:06.473413 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:48:06.479268 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jun 25 18:48:06.479328 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:48:06.486278 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:48:06.488646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:48:06.494849 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:48:06.494945 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:48:08.582565 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:48:08.582733 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:48:08.588554 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:48:08.593184 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:48:08.593265 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:48:08.610047 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:48:08.617754 systemd[1]: Switching root. Jun 25 18:48:08.700003 systemd-journald[176]: Journal stopped Jun 25 18:48:10.887140 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Jun 25 18:48:10.887174 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:48:10.887188 kernel: SELinux: policy capability open_perms=1 Jun 25 18:48:10.887197 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:48:10.887208 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:48:10.887216 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:48:10.887228 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:48:10.887240 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:48:10.887250 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:48:10.887260 kernel: audit: type=1403 audit(1719341289.059:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:48:10.887271 systemd[1]: Successfully loaded SELinux policy in 78.162ms. Jun 25 18:48:10.887282 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.349ms. Jun 25 18:48:10.887293 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:48:10.887305 systemd[1]: Detected virtualization microsoft. Jun 25 18:48:10.887318 systemd[1]: Detected architecture x86-64. Jun 25 18:48:10.887330 systemd[1]: Detected first boot. Jun 25 18:48:10.887340 systemd[1]: Hostname set to . Jun 25 18:48:10.887352 systemd[1]: Initializing machine ID from random generator. Jun 25 18:48:10.887362 zram_generator::config[1184]: No configuration found. Jun 25 18:48:10.887377 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:48:10.887388 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:48:10.887399 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 18:48:10.887409 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:48:10.887422 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:48:10.887432 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Jun 25 18:48:10.887445 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:48:10.887461 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:48:10.887472 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:48:10.887483 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:48:10.887495 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:48:10.887507 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:48:10.887517 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:48:10.887530 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:48:10.887540 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:48:10.887555 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:48:10.887565 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 18:48:10.887578 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:48:10.887588 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 25 18:48:10.887600 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:48:10.887611 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 18:48:10.887627 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:48:10.887638 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:48:10.887653 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:48:10.887665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:48:10.887676 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:48:10.887686 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:48:10.887698 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:48:10.887711 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:48:10.887724 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:48:10.887737 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:48:10.887749 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:48:10.887762 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:48:10.887775 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:48:10.887788 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:48:10.887802 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:48:10.887815 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:48:10.887826 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:48:10.887839 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:48:10.887849 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jun 25 18:48:10.887862 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:48:10.889901 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:48:10.889923 systemd[1]: Reached target machines.target - Containers. Jun 25 18:48:10.889945 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:48:10.889961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:48:10.889973 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:48:10.889988 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:48:10.890001 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:48:10.890012 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:48:10.890026 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:48:10.890036 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:48:10.890050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:48:10.890065 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:48:10.890077 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:48:10.890089 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:48:10.890101 kernel: loop: module loaded Jun 25 18:48:10.890112 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:48:10.890124 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:48:10.890135 kernel: fuse: init (API version 7.39) Jun 25 18:48:10.890147 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:48:10.890163 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:48:10.890174 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:48:10.890188 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:48:10.890198 kernel: ACPI: bus type drm_connector registered Jun 25 18:48:10.890211 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:48:10.890222 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:48:10.890235 systemd[1]: Stopped verity-setup.service. Jun 25 18:48:10.890245 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:48:10.890259 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:48:10.890273 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:48:10.890305 systemd-journald[1286]: Collecting audit messages is disabled. Jun 25 18:48:10.890330 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:48:10.890342 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jun 25 18:48:10.890357 systemd-journald[1286]: Journal started Jun 25 18:48:10.890384 systemd-journald[1286]: Runtime Journal (/run/log/journal/2c99a8e2fb494daa9ad3fa1d4fa390d9) is 8.0M, max 158.8M, 150.8M free. Jun 25 18:48:10.244114 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:48:10.289595 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 25 18:48:10.289993 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 18:48:10.898902 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:48:10.900075 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:48:10.903458 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:48:10.906541 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:48:10.910213 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:48:10.914259 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:48:10.914541 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:48:10.918250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:48:10.918543 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:48:10.922149 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:48:10.922448 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:48:10.925981 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:48:10.926216 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:48:10.930222 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:48:10.930502 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:48:10.934109 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:48:10.934300 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:48:10.937782 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:48:10.941049 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:48:10.944441 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:48:10.948650 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:48:10.961715 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:48:10.968979 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:48:10.974098 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:48:10.977501 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:48:10.977636 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:48:10.981564 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:48:10.991027 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:48:10.995341 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:48:10.998093 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 25 18:48:11.001108 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:48:11.007108 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:48:11.010192 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:48:11.012720 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:48:11.015510 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:48:11.019003 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:48:11.026864 systemd-journald[1286]: Time spent on flushing to /var/log/journal/2c99a8e2fb494daa9ad3fa1d4fa390d9 is 40.751ms for 937 entries. Jun 25 18:48:11.026864 systemd-journald[1286]: System Journal (/var/log/journal/2c99a8e2fb494daa9ad3fa1d4fa390d9) is 8.0M, max 2.6G, 2.6G free. Jun 25 18:48:11.120896 systemd-journald[1286]: Received client request to flush runtime journal. Jun 25 18:48:11.120938 kernel: loop0: detected capacity change from 0 to 210664 Jun 25 18:48:11.120955 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:48:11.038123 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:48:11.046086 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:48:11.062130 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:48:11.068374 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:48:11.078199 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:48:11.082489 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:48:11.094939 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:48:11.102276 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:48:11.112193 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:48:11.118628 udevadm[1323]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 18:48:11.126996 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:48:11.150271 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:48:11.345936 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:48:11.347270 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Jun 25 18:48:11.347296 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Jun 25 18:48:11.354989 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:48:11.369124 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:48:11.404942 kernel: loop1: detected capacity change from 0 to 80568 Jun 25 18:48:11.412484 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:48:11.413324 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:48:11.447162 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jun 25 18:48:11.458551 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:48:11.480300 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Jun 25 18:48:11.480324 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Jun 25 18:48:11.485609 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:48:11.561947 kernel: loop2: detected capacity change from 0 to 139760 Jun 25 18:48:11.710895 kernel: loop3: detected capacity change from 0 to 62456 Jun 25 18:48:11.828959 kernel: loop4: detected capacity change from 0 to 210664 Jun 25 18:48:11.840897 kernel: loop5: detected capacity change from 0 to 80568 Jun 25 18:48:11.854962 kernel: loop6: detected capacity change from 0 to 139760 Jun 25 18:48:11.872949 kernel: loop7: detected capacity change from 0 to 62456 Jun 25 18:48:11.882217 (sd-merge)[1348]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 25 18:48:11.882789 (sd-merge)[1348]: Merged extensions into '/usr'. Jun 25 18:48:11.889077 systemd[1]: Reloading requested from client PID 1320 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:48:11.889207 systemd[1]: Reloading... Jun 25 18:48:11.956992 zram_generator::config[1372]: No configuration found. Jun 25 18:48:12.165247 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:48:12.229299 systemd[1]: Reloading finished in 339 ms. Jun 25 18:48:12.258081 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:48:12.269041 systemd[1]: Starting ensure-sysext.service... Jun 25 18:48:12.272515 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:48:12.385708 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:48:12.386362 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:48:12.388551 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:48:12.388913 systemd-tmpfiles[1431]: ACLs are not supported, ignoring. Jun 25 18:48:12.388970 systemd-tmpfiles[1431]: ACLs are not supported, ignoring. Jun 25 18:48:12.392613 systemd-tmpfiles[1431]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:48:12.392628 systemd-tmpfiles[1431]: Skipping /boot Jun 25 18:48:12.403449 systemd[1]: Reloading requested from client PID 1430 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:48:12.403466 systemd[1]: Reloading... Jun 25 18:48:12.413118 systemd-tmpfiles[1431]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:48:12.413134 systemd-tmpfiles[1431]: Skipping /boot Jun 25 18:48:12.502897 zram_generator::config[1457]: No configuration found. Jun 25 18:48:12.622829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:48:12.684574 systemd[1]: Reloading finished in 280 ms. Jun 25 18:48:12.701026 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
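The (sd-merge) lines above show systemd-sysext finding the extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-azure' and merging them into /usr. A hedged sketch of the discovery half only, listing candidate images in the directories systemd-sysext is documented to scan (the kubernetes.raw link written by the files stage lands in one of them); the actual merge is an overlay mount performed by the real tool and is not reproduced here:

# Sketch of sysext image discovery: enumerate candidate extension images.
# The real systemd-sysext also validates extension-release metadata and
# overlay-mounts the images onto /usr and /opt.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def find_extension_images():
    images = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            # Raw disk images (*.raw) and plain directories are both accepted.
            images.extend(sorted(p.iterdir()))
    return images

for img in find_extension_images():
    print(img)   # e.g. /etc/extensions/kubernetes.raw from the files stage above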
Jun 25 18:48:12.708323 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:48:12.727162 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:48:12.735854 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:48:12.744002 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 18:48:12.753153 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:48:12.768154 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:48:12.778478 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:48:12.787847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:48:12.788653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:48:12.798108 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:48:12.810974 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:48:12.827462 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:48:12.836625 systemd-udevd[1529]: Using default interface naming scheme 'v255'. Jun 25 18:48:12.837241 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:48:12.837427 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:48:12.839082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:48:12.839792 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:48:12.847788 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:48:12.851811 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:48:12.852910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:48:12.858157 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:48:12.858459 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:48:12.867825 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:48:12.869165 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:48:12.875155 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:48:12.878806 augenrules[1546]: No rules Jun 25 18:48:12.880483 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:48:12.893374 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:48:12.907557 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:48:12.908411 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:48:12.923161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jun 25 18:48:12.934199 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:48:12.943224 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:48:12.946038 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:48:12.946217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:48:12.947653 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:48:12.955567 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:48:12.956335 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:48:12.966761 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:48:12.966999 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:48:12.984198 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:48:13.009694 systemd[1]: Finished ensure-sysext.service. Jun 25 18:48:13.017233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:48:13.017809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:48:13.036156 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Jun 25 18:48:13.042207 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:48:13.042452 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:48:13.049039 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:48:13.060446 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:48:13.065264 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:48:13.069050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:48:13.087076 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:48:13.091131 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:48:13.091209 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:48:13.099918 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:48:13.100533 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:48:13.100714 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:48:13.107827 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:48:13.108490 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:48:13.113269 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:48:13.113739 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:48:13.124737 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:48:13.146790 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jun 25 18:48:13.147117 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:48:13.147153 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:48:13.217901 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1574) Jun 25 18:48:13.271583 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Jun 25 18:48:13.290207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:48:13.298956 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:48:13.299212 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:48:13.309628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:48:13.357965 kernel: hv_vmbus: registering driver hyperv_fb Jun 25 18:48:13.362895 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 18:48:13.365909 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 25 18:48:13.372900 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 25 18:48:13.379980 kernel: Console: switching to colour dummy device 80x25 Jun 25 18:48:13.383398 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:48:13.380716 systemd-resolved[1528]: Positive Trust Anchors: Jun 25 18:48:13.380734 systemd-resolved[1528]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:48:13.380795 systemd-resolved[1528]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:48:13.388516 kernel: hv_vmbus: registering driver hv_balloon Jun 25 18:48:13.388574 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 25 18:48:13.391110 systemd-resolved[1528]: Using system hostname 'ci-4012.0.0-a-c5aaeb7e49'. Jun 25 18:48:13.394562 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:48:13.399973 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:48:13.411958 systemd-networkd[1590]: lo: Link UP Jun 25 18:48:13.411968 systemd-networkd[1590]: lo: Gained carrier Jun 25 18:48:13.418092 systemd-networkd[1590]: Enumeration completed Jun 25 18:48:13.419343 systemd-networkd[1590]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:48:13.419426 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:48:13.423902 systemd-networkd[1590]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:48:13.427142 systemd[1]: Reached target network.target - Network. 
Jun 25 18:48:13.427631 systemd-networkd[1590]: eth0: Link UP Jun 25 18:48:13.430778 systemd-networkd[1590]: eth0: Gained carrier Jun 25 18:48:13.515112 systemd-networkd[1590]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:48:13.529340 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:48:13.545135 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:48:13.545371 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:48:13.563049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:48:13.610893 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1571) Jun 25 18:48:13.665943 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jun 25 18:48:13.700927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:48:13.739159 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 18:48:13.747944 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:48:13.752371 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:48:13.758051 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:48:13.771612 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:48:13.792963 lvm[1654]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:48:13.818173 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:48:13.822110 ldconfig[1315]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:48:13.822819 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:48:13.830037 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:48:13.836747 lvm[1658]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:48:13.838661 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:48:13.850061 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:48:13.863225 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:48:13.866495 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:48:13.869145 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:48:13.872343 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:48:13.875686 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:48:13.878687 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:48:13.881804 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:48:13.884939 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:48:13.884977 systemd[1]: Reached target paths.target - Path Units. 
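The ldconfig complaint earlier in this stretch ("/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start") is presumably just ldconfig noting that the path it scanned is a plain-text configuration file rather than an ELF object. A magic-byte check of this kind amounts to comparing the first four bytes against the ELF signature 0x7f 'E' 'L' 'F'; a small illustrative sketch (the paths below are examples, not taken from this boot):

# Sketch: report whether a file starts with the ELF magic bytes, the check behind
# "wrong magic bytes at the start" style messages. Illustrative only.
ELF_MAGIC = b"\x7fELF"

def is_elf(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(4) == ELF_MAGIC

if __name__ == "__main__":
    for p in ("/lib/ld.so.conf", "/bin/true"):  # example paths
        try:
            print(p, "ELF" if is_elf(p) else "not ELF")
        except OSError as err:
            print(p, "unreadable:", err)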
Jun 25 18:48:13.887371 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:48:13.894326 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:48:13.898563 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:48:13.910122 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:48:13.913480 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:48:13.916766 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:48:13.920270 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:48:13.922762 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:48:13.925320 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:48:13.925358 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:48:13.930967 systemd[1]: Starting chronyd.service - NTP client/server... Jun 25 18:48:13.937006 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:48:13.943954 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 18:48:13.958087 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:48:13.963396 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:48:13.977037 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:48:13.979671 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:48:13.980979 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:48:13.984570 (chronyd)[1665]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 25 18:48:13.995062 jq[1669]: false Jun 25 18:48:13.995018 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:48:14.000043 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:48:14.006045 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:48:14.019058 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:48:14.023335 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:48:14.023849 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:48:14.030062 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:48:14.035990 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:48:14.045270 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:48:14.045971 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:48:14.055388 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 18:48:14.055592 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jun 25 18:48:14.070892 chronyd[1694]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 25 18:48:14.081829 jq[1685]: true Jun 25 18:48:14.082598 chronyd[1694]: Timezone right/UTC failed leap second check, ignoring Jun 25 18:48:14.082816 chronyd[1694]: Loaded seccomp filter (level 2) Jun 25 18:48:14.092414 systemd[1]: Started chronyd.service - NTP client/server. Jun 25 18:48:14.117242 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:48:14.117454 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:48:14.134983 jq[1698]: true Jun 25 18:48:14.141362 (ntainerd)[1702]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:48:14.143886 extend-filesystems[1670]: Found loop4 Jun 25 18:48:14.143886 extend-filesystems[1670]: Found loop5 Jun 25 18:48:14.154513 extend-filesystems[1670]: Found loop6 Jun 25 18:48:14.158014 extend-filesystems[1670]: Found loop7 Jun 25 18:48:14.158014 extend-filesystems[1670]: Found sda Jun 25 18:48:14.173363 extend-filesystems[1670]: Found sda1 Jun 25 18:48:14.173363 extend-filesystems[1670]: Found sda2 Jun 25 18:48:14.173363 extend-filesystems[1670]: Found sda3 Jun 25 18:48:14.173363 extend-filesystems[1670]: Found usr Jun 25 18:48:14.173363 extend-filesystems[1670]: Found sda4 Jun 25 18:48:14.173363 extend-filesystems[1670]: Found sda6 Jun 25 18:48:14.173363 extend-filesystems[1670]: Found sda7 Jun 25 18:48:14.173363 extend-filesystems[1670]: Found sda9 Jun 25 18:48:14.173363 extend-filesystems[1670]: Checking size of /dev/sda9 Jun 25 18:48:14.196388 update_engine[1683]: I0625 18:48:14.163039 1683 main.cc:92] Flatcar Update Engine starting Jun 25 18:48:14.159744 systemd-networkd[1590]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 18:48:14.196765 tar[1688]: linux-amd64/helm Jun 25 18:48:14.197341 dbus-daemon[1668]: [system] SELinux support is enabled Jun 25 18:48:14.197541 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:48:14.206646 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:48:14.206678 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:48:14.211637 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:48:14.211665 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:48:14.229345 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:48:14.232157 update_engine[1683]: I0625 18:48:14.232110 1683 update_check_scheduler.cc:74] Next update check in 2m41s Jun 25 18:48:14.235480 extend-filesystems[1670]: Old size kept for /dev/sda9 Jun 25 18:48:14.235480 extend-filesystems[1670]: Found sr0 Jun 25 18:48:14.246517 systemd-logind[1680]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 18:48:14.246750 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:48:14.251590 systemd-logind[1680]: New seat seat0. Jun 25 18:48:14.257898 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jun 25 18:48:14.258227 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:48:14.262682 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:48:14.366144 coreos-metadata[1667]: Jun 25 18:48:14.365 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:48:14.395226 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1571) Jun 25 18:48:14.453931 bash[1729]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:48:14.459039 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:48:14.494337 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 18:48:14.545009 coreos-metadata[1667]: Jun 25 18:48:14.544 INFO Fetch successful Jun 25 18:48:14.545009 coreos-metadata[1667]: Jun 25 18:48:14.544 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 25 18:48:14.551790 coreos-metadata[1667]: Jun 25 18:48:14.550 INFO Fetch successful Jun 25 18:48:14.551790 coreos-metadata[1667]: Jun 25 18:48:14.551 INFO Fetching http://168.63.129.16/machine/9b8a0734-d92c-4dcd-9f78-22d0030a9461/22569333%2Ddef2%2D4629%2D9e06%2D1a3fc88301cb.%5Fci%2D4012.0.0%2Da%2Dc5aaeb7e49?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 25 18:48:14.559993 coreos-metadata[1667]: Jun 25 18:48:14.559 INFO Fetch successful Jun 25 18:48:14.559993 coreos-metadata[1667]: Jun 25 18:48:14.559 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:48:14.561011 systemd-networkd[1590]: eth0: Gained IPv6LL Jun 25 18:48:14.567241 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:48:14.573318 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:48:14.577692 coreos-metadata[1667]: Jun 25 18:48:14.577 INFO Fetch successful Jun 25 18:48:14.590077 sshd_keygen[1684]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:48:14.591098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:48:14.599370 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:48:14.668497 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 18:48:14.672534 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:48:14.700632 locksmithd[1715]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:48:14.701591 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:48:14.714090 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:48:14.720104 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 25 18:48:14.732730 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:48:14.733194 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:48:14.748438 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:48:14.783072 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 25 18:48:14.940109 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:48:14.952611 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:48:14.969285 systemd[1]: Started getty@tty1.service - Getty on tty1. 
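The coreos-metadata fetches above pull the goal state from the WireServer (168.63.129.16) and the VM size from the Azure Instance Metadata Service URL shown in the log. A minimal sketch of the IMDS half of that exchange, relying on the documented requirement that IMDS requests carry a "Metadata: true" header and bypass any proxy (illustrative, not the agent's actual code):

# Sketch: fetch the VM size from the Azure IMDS endpoint seen in the log.
# Assumes the documented "Metadata: true" header requirement; illustrative only.
import urllib.request

IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
            "?api-version=2017-08-01&format=text")

def fetch_vm_size(timeout: float = 5.0) -> str:
    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    # IMDS is a link-local service; it must be reached directly, never via a proxy.
    opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
    with opener.open(req, timeout=timeout) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(fetch_vm_size())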
Jun 25 18:48:14.984970 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 18:48:14.988541 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:48:15.059495 containerd[1702]: time="2024-06-25T18:48:15.058809700Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:48:15.109665 containerd[1702]: time="2024-06-25T18:48:15.109374700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:48:15.109665 containerd[1702]: time="2024-06-25T18:48:15.109435600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:48:15.114809 containerd[1702]: time="2024-06-25T18:48:15.114254700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:48:15.114809 containerd[1702]: time="2024-06-25T18:48:15.114306800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:48:15.114809 containerd[1702]: time="2024-06-25T18:48:15.114579600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:48:15.114809 containerd[1702]: time="2024-06-25T18:48:15.114607700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:48:15.114809 containerd[1702]: time="2024-06-25T18:48:15.114704800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:48:15.117190 containerd[1702]: time="2024-06-25T18:48:15.115938800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:48:15.117190 containerd[1702]: time="2024-06-25T18:48:15.115977200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:48:15.117190 containerd[1702]: time="2024-06-25T18:48:15.116069700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:48:15.117190 containerd[1702]: time="2024-06-25T18:48:15.116311900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:48:15.117190 containerd[1702]: time="2024-06-25T18:48:15.116352100Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:48:15.117190 containerd[1702]: time="2024-06-25T18:48:15.116370900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:48:15.117190 containerd[1702]: time="2024-06-25T18:48:15.116579900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:48:15.117190 containerd[1702]: time="2024-06-25T18:48:15.116608600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:48:15.117190 containerd[1702]: time="2024-06-25T18:48:15.116682700Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:48:15.117190 containerd[1702]: time="2024-06-25T18:48:15.116701800Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:48:15.135807 containerd[1702]: time="2024-06-25T18:48:15.135761600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:48:15.135986 containerd[1702]: time="2024-06-25T18:48:15.135967800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:48:15.136102 containerd[1702]: time="2024-06-25T18:48:15.136057600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:48:15.136200 containerd[1702]: time="2024-06-25T18:48:15.136182800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:48:15.137766 containerd[1702]: time="2024-06-25T18:48:15.137737700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:48:15.137886 containerd[1702]: time="2024-06-25T18:48:15.137861100Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:48:15.137960 containerd[1702]: time="2024-06-25T18:48:15.137946200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:48:15.138893 containerd[1702]: time="2024-06-25T18:48:15.138163900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:48:15.138893 containerd[1702]: time="2024-06-25T18:48:15.138189700Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:48:15.138893 containerd[1702]: time="2024-06-25T18:48:15.138212000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:48:15.138893 containerd[1702]: time="2024-06-25T18:48:15.138232300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:48:15.138893 containerd[1702]: time="2024-06-25T18:48:15.138252500Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:48:15.138893 containerd[1702]: time="2024-06-25T18:48:15.138275800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:48:15.138893 containerd[1702]: time="2024-06-25T18:48:15.138293600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:48:15.138893 containerd[1702]: time="2024-06-25T18:48:15.138310700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:48:15.138893 containerd[1702]: time="2024-06-25T18:48:15.138330000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jun 25 18:48:15.138893 containerd[1702]: time="2024-06-25T18:48:15.138349400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:48:15.138893 containerd[1702]: time="2024-06-25T18:48:15.138366400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:48:15.138893 containerd[1702]: time="2024-06-25T18:48:15.138385600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:48:15.138893 containerd[1702]: time="2024-06-25T18:48:15.138515300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:48:15.139425 containerd[1702]: time="2024-06-25T18:48:15.138843200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:48:15.140661 containerd[1702]: time="2024-06-25T18:48:15.140632600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.140795 containerd[1702]: time="2024-06-25T18:48:15.140778700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:48:15.140904 containerd[1702]: time="2024-06-25T18:48:15.140886500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:48:15.142218 containerd[1702]: time="2024-06-25T18:48:15.142190300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142305900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142329900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142347800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142366800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142385300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142402200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142419900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142438800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142586800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142606900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142623900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142642200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142663500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142687200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.143908 containerd[1702]: time="2024-06-25T18:48:15.142706400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.144471 containerd[1702]: time="2024-06-25T18:48:15.142723600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 18:48:15.144516 containerd[1702]: time="2024-06-25T18:48:15.143119900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:48:15.144516 containerd[1702]: time="2024-06-25T18:48:15.143199300Z" level=info msg="Connect containerd service" Jun 25 18:48:15.144516 containerd[1702]: time="2024-06-25T18:48:15.143241700Z" level=info msg="using legacy CRI server" Jun 25 18:48:15.144516 containerd[1702]: time="2024-06-25T18:48:15.143252300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:48:15.144516 containerd[1702]: time="2024-06-25T18:48:15.143369900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:48:15.146733 containerd[1702]: time="2024-06-25T18:48:15.146690300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:48:15.148626 containerd[1702]: time="2024-06-25T18:48:15.148048500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:48:15.148626 containerd[1702]: time="2024-06-25T18:48:15.148113200Z" level=info msg="Start subscribing containerd event" Jun 25 18:48:15.148626 containerd[1702]: time="2024-06-25T18:48:15.148223000Z" level=info msg="Start recovering state" Jun 25 18:48:15.148626 containerd[1702]: time="2024-06-25T18:48:15.148300900Z" level=info msg="Start event monitor" Jun 25 18:48:15.148626 containerd[1702]: time="2024-06-25T18:48:15.148323200Z" level=info msg="Start snapshots syncer" Jun 25 18:48:15.148626 containerd[1702]: time="2024-06-25T18:48:15.148335000Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:48:15.148626 containerd[1702]: time="2024-06-25T18:48:15.148346500Z" level=info msg="Start streaming server" Jun 25 18:48:15.149029 containerd[1702]: time="2024-06-25T18:48:15.149002500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:48:15.149144 containerd[1702]: time="2024-06-25T18:48:15.149126500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:48:15.149291 containerd[1702]: time="2024-06-25T18:48:15.149225700Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:48:15.150831 containerd[1702]: time="2024-06-25T18:48:15.150758400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:48:15.151015 containerd[1702]: time="2024-06-25T18:48:15.150960900Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:48:15.152370 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 18:48:15.154955 containerd[1702]: time="2024-06-25T18:48:15.154823400Z" level=info msg="containerd successfully booted in 0.098892s" Jun 25 18:48:15.200148 tar[1688]: linux-amd64/LICENSE Jun 25 18:48:15.202392 tar[1688]: linux-amd64/README.md Jun 25 18:48:15.213412 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:48:15.704068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
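Once containerd reports serving on /run/containerd/containerd.sock, that socket is the endpoint CRI clients such as kubelet and the ctr tool talk to. A quick liveness check along those lines, assuming the ctr client shipped with containerd is on PATH (a sketch, not how the boot itself verifies the daemon):

# Sketch: confirm the containerd socket from the log exists and answers a version
# query via the bundled ctr client. Assumes ctr is installed; illustrative only.
import os
import stat
import subprocess

SOCK = "/run/containerd/containerd.sock"

def containerd_alive() -> bool:
    st = os.stat(SOCK)                 # raises FileNotFoundError if the socket is absent
    if not stat.S_ISSOCK(st.st_mode):  # must be a UNIX domain socket
        return False
    out = subprocess.run(["ctr", "--address", SOCK, "version"],
                         capture_output=True, text=True)
    return out.returncode == 0

if __name__ == "__main__":
    print("containerd responding:", containerd_alive())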
Jun 25 18:48:15.707662 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:48:15.709885 (kubelet)[1816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:48:15.711971 systemd[1]: Startup finished in 689ms (firmware) + 8.951s (loader) + 999ms (kernel) + 28.807s (initrd) + 6.729s (userspace) = 46.177s. Jun 25 18:48:15.903723 login[1803]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:48:15.903946 login[1802]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:48:15.916607 systemd-logind[1680]: New session 2 of user core. Jun 25 18:48:15.917389 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:48:15.923153 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:48:15.927537 systemd-logind[1680]: New session 1 of user core. Jun 25 18:48:15.945666 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:48:15.958359 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:48:15.961903 (systemd)[1829]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:16.131121 systemd[1829]: Queued start job for default target default.target. Jun 25 18:48:16.136450 systemd[1829]: Created slice app.slice - User Application Slice. Jun 25 18:48:16.136927 systemd[1829]: Reached target paths.target - Paths. Jun 25 18:48:16.136948 systemd[1829]: Reached target timers.target - Timers. Jun 25 18:48:16.141020 systemd[1829]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:48:16.163392 systemd[1829]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:48:16.163553 systemd[1829]: Reached target sockets.target - Sockets. Jun 25 18:48:16.163573 systemd[1829]: Reached target basic.target - Basic System. Jun 25 18:48:16.163620 systemd[1829]: Reached target default.target - Main User Target. Jun 25 18:48:16.163658 systemd[1829]: Startup finished in 194ms. Jun 25 18:48:16.163789 systemd[1]: Started user@500.service - User Manager for UID 500. 
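The system "Startup finished" line above breaks the 46.177s total into firmware, loader, kernel, initrd and userspace phases; the displayed per-phase figures sum to almost exactly that total, with the last couple of milliseconds accounted for by rounding of the individual components. As a quick check:

# Quick check: the per-phase figures from the "Startup finished" line add up to
# ~46.18s; the reported 46.177s differs only by rounding of the displayed components.
phases = {"firmware": 0.689, "loader": 8.951, "kernel": 0.999,
          "initrd": 28.807, "userspace": 6.729}
total = sum(phases.values())
print(f"sum of displayed phases: {total:.3f}s (log reports 46.177s)")
# -> sum of displayed phases: 46.175s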
Jun 25 18:48:16.166257 waagent[1795]: 2024-06-25T18:48:16.165246Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 25 18:48:16.166729 waagent[1795]: 2024-06-25T18:48:16.166667Z INFO Daemon Daemon OS: flatcar 4012.0.0 Jun 25 18:48:16.167548 waagent[1795]: 2024-06-25T18:48:16.167503Z INFO Daemon Daemon Python: 3.11.9 Jun 25 18:48:16.168600 waagent[1795]: 2024-06-25T18:48:16.168554Z INFO Daemon Daemon Run daemon Jun 25 18:48:16.169381 waagent[1795]: 2024-06-25T18:48:16.169343Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4012.0.0' Jun 25 18:48:16.170134 waagent[1795]: 2024-06-25T18:48:16.170096Z INFO Daemon Daemon Using waagent for provisioning Jun 25 18:48:16.172187 waagent[1795]: 2024-06-25T18:48:16.172146Z INFO Daemon Daemon Activate resource disk Jun 25 18:48:16.175313 waagent[1795]: 2024-06-25T18:48:16.174787Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 25 18:48:16.179677 waagent[1795]: 2024-06-25T18:48:16.179632Z INFO Daemon Daemon Found device: None Jun 25 18:48:16.180747 waagent[1795]: 2024-06-25T18:48:16.180709Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 25 18:48:16.181792 waagent[1795]: 2024-06-25T18:48:16.181759Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 25 18:48:16.183971 waagent[1795]: 2024-06-25T18:48:16.183924Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 18:48:16.184450 waagent[1795]: 2024-06-25T18:48:16.184415Z INFO Daemon Daemon Running default provisioning handler Jun 25 18:48:16.205099 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:48:16.206163 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:48:16.215885 waagent[1795]: 2024-06-25T18:48:16.215754Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 25 18:48:16.220465 waagent[1795]: 2024-06-25T18:48:16.219628Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 25 18:48:16.221971 waagent[1795]: 2024-06-25T18:48:16.221559Z INFO Daemon Daemon cloud-init is enabled: False Jun 25 18:48:16.223358 waagent[1795]: 2024-06-25T18:48:16.223296Z INFO Daemon Daemon Copying ovf-env.xml Jun 25 18:48:16.294206 waagent[1795]: 2024-06-25T18:48:16.294095Z INFO Daemon Daemon Successfully mounted dvd Jun 25 18:48:16.313629 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 25 18:48:16.316760 waagent[1795]: 2024-06-25T18:48:16.316686Z INFO Daemon Daemon Detect protocol endpoint Jun 25 18:48:16.319056 waagent[1795]: 2024-06-25T18:48:16.318093Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 18:48:16.319555 waagent[1795]: 2024-06-25T18:48:16.319516Z INFO Daemon Daemon WireServer endpoint is not found. 
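The "Unable to get cloud-init enabled status" lines above come from waagent probing systemctl with the exact command shown in the log; the non-zero exit status (4 here) and the failed fallback to the legacy 'service' tool lead it to report "cloud-init is enabled: False" and continue with its own provisioning handler. A sketch of that probe (not waagent's actual implementation):

# Sketch of the probe behind the log lines: ask systemctl whether
# cloud-init-local.service is enabled; any non-zero exit (4 in this boot) is
# treated as "cloud-init is not enabled". Not waagent's actual code.
import subprocess

def cloud_init_enabled(unit: str = "cloud-init-local.service") -> bool:
    proc = subprocess.run(["systemctl", "is-enabled", unit],
                          capture_output=True, text=True)
    return proc.returncode == 0 and proc.stdout.strip() == "enabled"

if __name__ == "__main__":
    print("cloud-init is enabled:", cloud_init_enabled())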
Rerun dhcp handler Jun 25 18:48:16.320473 waagent[1795]: 2024-06-25T18:48:16.320438Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 25 18:48:16.321029 waagent[1795]: 2024-06-25T18:48:16.320992Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 25 18:48:16.321746 waagent[1795]: 2024-06-25T18:48:16.321711Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 25 18:48:16.346128 waagent[1795]: 2024-06-25T18:48:16.346056Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 25 18:48:16.347498 waagent[1795]: 2024-06-25T18:48:16.347473Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 25 18:48:16.347734 waagent[1795]: 2024-06-25T18:48:16.347704Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 25 18:48:16.475296 waagent[1795]: 2024-06-25T18:48:16.475168Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 25 18:48:16.476686 waagent[1795]: 2024-06-25T18:48:16.476618Z INFO Daemon Daemon Forcing an update of the goal state. Jun 25 18:48:16.481424 waagent[1795]: 2024-06-25T18:48:16.481364Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 18:48:16.491889 waagent[1795]: 2024-06-25T18:48:16.490452Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jun 25 18:48:16.492009 waagent[1795]: 2024-06-25T18:48:16.491866Z INFO Daemon Jun 25 18:48:16.493780 waagent[1795]: 2024-06-25T18:48:16.493733Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2f4ba7a8-213b-43bf-ba50-3a814e16769e eTag: 4536531500977462399 source: Fabric] Jun 25 18:48:16.495154 waagent[1795]: 2024-06-25T18:48:16.495114Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 25 18:48:16.495828 waagent[1795]: 2024-06-25T18:48:16.495788Z INFO Daemon Jun 25 18:48:16.496748 waagent[1795]: 2024-06-25T18:48:16.496712Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 25 18:48:16.508416 waagent[1795]: 2024-06-25T18:48:16.508371Z INFO Daemon Daemon Downloading artifacts profile blob Jun 25 18:48:16.592736 waagent[1795]: 2024-06-25T18:48:16.591734Z INFO Daemon Downloaded certificate {'thumbprint': 'EBD7A3C66507AD7F10DD78A065AFC1DED86EE27A', 'hasPrivateKey': False} Jun 25 18:48:16.597816 waagent[1795]: 2024-06-25T18:48:16.597073Z INFO Daemon Downloaded certificate {'thumbprint': '776FC81E093F54D2F712CD04FD87488E7419BB00', 'hasPrivateKey': True} Jun 25 18:48:16.602645 waagent[1795]: 2024-06-25T18:48:16.601759Z INFO Daemon Fetch goal state completed Jun 25 18:48:16.612927 waagent[1795]: 2024-06-25T18:48:16.611228Z INFO Daemon Daemon Starting provisioning Jun 25 18:48:16.612927 waagent[1795]: 2024-06-25T18:48:16.612507Z INFO Daemon Daemon Handle ovf-env.xml. Jun 25 18:48:16.613338 waagent[1795]: 2024-06-25T18:48:16.613301Z INFO Daemon Daemon Set hostname [ci-4012.0.0-a-c5aaeb7e49] Jun 25 18:48:16.641615 kubelet[1816]: E0625 18:48:16.641548 1816 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:48:16.644166 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:48:16.644351 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:48:16.644737 systemd[1]: kubelet.service: Consumed 1.002s CPU time. 
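The goal-state certificates above are identified only by thumbprint (e.g. EBD7A3C66507AD7F10DD78A065AFC1DED86EE27A). Assuming these are conventional X.509 thumbprints, i.e. the SHA-1 digest of the certificate's DER encoding in upper-case hex, a downloaded PEM could be matched against the log like this (a sketch under that assumption; the path is a placeholder):

# Sketch: compute an X.509 thumbprint (SHA-1 of the DER encoding, upper-case hex)
# for comparison with the thumbprints waagent logs. Assumes the conventional
# thumbprint definition; the PEM path below is a placeholder, not from this boot.
import hashlib
import ssl

def thumbprint(pem_path: str) -> str:
    with open(pem_path) as f:
        der = ssl.PEM_cert_to_DER_cert(f.read())
    return hashlib.sha1(der).hexdigest().upper()

if __name__ == "__main__":
    print(thumbprint("/path/to/certificate.pem"))  # placeholder path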
Jun 25 18:48:16.680090 waagent[1795]: 2024-06-25T18:48:16.680001Z INFO Daemon Daemon Publish hostname [ci-4012.0.0-a-c5aaeb7e49] Jun 25 18:48:16.688002 waagent[1795]: 2024-06-25T18:48:16.681461Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 25 18:48:16.688002 waagent[1795]: 2024-06-25T18:48:16.682265Z INFO Daemon Daemon Primary interface is [eth0] Jun 25 18:48:16.701715 systemd-networkd[1590]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:48:16.701724 systemd-networkd[1590]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:48:16.701757 systemd-networkd[1590]: eth0: DHCP lease lost Jun 25 18:48:16.702888 waagent[1795]: 2024-06-25T18:48:16.702804Z INFO Daemon Daemon Create user account if not exists Jun 25 18:48:16.705686 waagent[1795]: 2024-06-25T18:48:16.705630Z INFO Daemon Daemon User core already exists, skip useradd Jun 25 18:48:16.719220 waagent[1795]: 2024-06-25T18:48:16.706631Z INFO Daemon Daemon Configure sudoer Jun 25 18:48:16.719220 waagent[1795]: 2024-06-25T18:48:16.707720Z INFO Daemon Daemon Configure sshd Jun 25 18:48:16.719220 waagent[1795]: 2024-06-25T18:48:16.708624Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 25 18:48:16.719220 waagent[1795]: 2024-06-25T18:48:16.709305Z INFO Daemon Daemon Deploy ssh public key. Jun 25 18:48:16.723948 systemd-networkd[1590]: eth0: DHCPv6 lease lost Jun 25 18:48:17.342944 systemd-networkd[1590]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 18:48:18.752211 kernel: hv_netvsc 6045bddf-8d4f-6045-bddf-8d4f6045bddf eth0: VF slot 1 added Jun 25 18:48:18.823897 kernel: hv_vmbus: registering driver hv_pci Jun 25 18:48:18.826888 kernel: hv_pci 8349d78b-a2ed-4be3-b8a1-87c23b757405: PCI VMBus probing: Using version 0x10004 Jun 25 18:48:18.861347 kernel: hv_pci 8349d78b-a2ed-4be3-b8a1-87c23b757405: PCI host bridge to bus a2ed:00 Jun 25 18:48:18.861564 kernel: pci_bus a2ed:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jun 25 18:48:18.861801 kernel: pci_bus a2ed:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 18:48:18.862020 kernel: pci a2ed:00:02.0: [15b3:1016] type 00 class 0x020000 Jun 25 18:48:18.862230 kernel: pci a2ed:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 18:48:18.862425 kernel: pci a2ed:00:02.0: enabling Extended Tags Jun 25 18:48:18.862611 kernel: pci a2ed:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a2ed:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jun 25 18:48:18.862793 kernel: pci_bus a2ed:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 18:48:18.863020 kernel: pci a2ed:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 18:48:19.276658 kernel: mlx5_core a2ed:00:02.0: enabling device (0000 -> 0002) Jun 25 18:48:19.502051 kernel: mlx5_core a2ed:00:02.0: firmware version: 14.30.1284 Jun 25 18:48:19.502287 kernel: hv_netvsc 6045bddf-8d4f-6045-bddf-8d4f6045bddf eth0: VF registering: eth1 Jun 25 18:48:19.502479 kernel: mlx5_core a2ed:00:02.0 eth1: joined to eth0 Jun 25 18:48:19.502689 kernel: mlx5_core a2ed:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jun 25 18:48:19.512894 kernel: mlx5_core a2ed:00:02.0 enP41709s1: renamed from eth1 Jun 25 18:48:19.518013 systemd-networkd[1590]: 
eth1: Interface name change detected, renamed to enP41709s1. Jun 25 18:48:19.651633 systemd-networkd[1590]: enP41709s1: Link UP Jun 25 18:48:19.651898 kernel: mlx5_core a2ed:00:02.0 enP41709s1: Link up Jun 25 18:48:19.681365 systemd-networkd[1590]: enP41709s1: Gained carrier Jun 25 18:48:19.681900 kernel: hv_netvsc 6045bddf-8d4f-6045-bddf-8d4f6045bddf eth0: Data path switched to VF: enP41709s1 Jun 25 18:48:21.473192 systemd-networkd[1590]: enP41709s1: Gained IPv6LL Jun 25 18:48:26.842011 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:48:26.851097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:48:27.567585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:48:27.572512 (kubelet)[1904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:48:28.164385 kubelet[1904]: E0625 18:48:28.164330 1904 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:48:28.168234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:48:28.168433 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:48:37.887583 chronyd[1694]: Selected source PHC0 Jun 25 18:48:38.341755 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:48:38.347433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:48:38.472919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:48:38.477531 (kubelet)[1920]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:48:39.024487 kubelet[1920]: E0625 18:48:39.024390 1920 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:48:39.027039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:48:39.027242 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:48:47.019793 waagent[1795]: 2024-06-25T18:48:47.019659Z INFO Daemon Daemon Provisioning complete Jun 25 18:48:47.034308 waagent[1795]: 2024-06-25T18:48:47.034248Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 25 18:48:47.040636 waagent[1795]: 2024-06-25T18:48:47.035738Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jun 25 18:48:47.040636 waagent[1795]: 2024-06-25T18:48:47.036523Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 25 18:48:47.162562 waagent[1930]: 2024-06-25T18:48:47.162471Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 25 18:48:47.163015 waagent[1930]: 2024-06-25T18:48:47.162632Z INFO ExtHandler ExtHandler OS: flatcar 4012.0.0 Jun 25 18:48:47.163015 waagent[1930]: 2024-06-25T18:48:47.162714Z INFO ExtHandler ExtHandler Python: 3.11.9 Jun 25 18:48:47.187052 waagent[1930]: 2024-06-25T18:48:47.186967Z INFO ExtHandler ExtHandler Distro: flatcar-4012.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 25 18:48:47.187274 waagent[1930]: 2024-06-25T18:48:47.187224Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:48:47.187375 waagent[1930]: 2024-06-25T18:48:47.187332Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:48:47.194963 waagent[1930]: 2024-06-25T18:48:47.194892Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 18:48:47.199793 waagent[1930]: 2024-06-25T18:48:47.199741Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jun 25 18:48:47.200258 waagent[1930]: 2024-06-25T18:48:47.200201Z INFO ExtHandler Jun 25 18:48:47.200341 waagent[1930]: 2024-06-25T18:48:47.200297Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3e76f529-8e71-4e9c-af51-d48803331d7c eTag: 4536531500977462399 source: Fabric] Jun 25 18:48:47.200652 waagent[1930]: 2024-06-25T18:48:47.200601Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jun 25 18:48:47.201245 waagent[1930]: 2024-06-25T18:48:47.201189Z INFO ExtHandler Jun 25 18:48:47.201320 waagent[1930]: 2024-06-25T18:48:47.201274Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 25 18:48:47.204898 waagent[1930]: 2024-06-25T18:48:47.204849Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 25 18:48:47.283628 waagent[1930]: 2024-06-25T18:48:47.283493Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EBD7A3C66507AD7F10DD78A065AFC1DED86EE27A', 'hasPrivateKey': False} Jun 25 18:48:47.284031 waagent[1930]: 2024-06-25T18:48:47.283976Z INFO ExtHandler Downloaded certificate {'thumbprint': '776FC81E093F54D2F712CD04FD87488E7419BB00', 'hasPrivateKey': True} Jun 25 18:48:47.284475 waagent[1930]: 2024-06-25T18:48:47.284425Z INFO ExtHandler Fetch goal state completed Jun 25 18:48:47.299412 waagent[1930]: 2024-06-25T18:48:47.299351Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1930 Jun 25 18:48:47.299569 waagent[1930]: 2024-06-25T18:48:47.299521Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 25 18:48:47.301139 waagent[1930]: 2024-06-25T18:48:47.301081Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4012.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 25 18:48:47.301516 waagent[1930]: 2024-06-25T18:48:47.301465Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 25 18:48:47.309142 waagent[1930]: 2024-06-25T18:48:47.309103Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 25 18:48:47.309317 waagent[1930]: 2024-06-25T18:48:47.309273Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jun 25 18:48:47.315971 waagent[1930]: 2024-06-25T18:48:47.315927Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 25 18:48:47.322817 systemd[1]: Reloading requested from client PID 1945 ('systemctl') (unit waagent.service)... Jun 25 18:48:47.322835 systemd[1]: Reloading... Jun 25 18:48:47.410892 zram_generator::config[1979]: No configuration found. Jun 25 18:48:47.524563 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:48:47.604390 systemd[1]: Reloading finished in 281 ms. Jun 25 18:48:47.629930 waagent[1930]: 2024-06-25T18:48:47.629422Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 25 18:48:47.638378 systemd[1]: Reloading requested from client PID 2033 ('systemctl') (unit waagent.service)... Jun 25 18:48:47.638483 systemd[1]: Reloading... Jun 25 18:48:47.724988 zram_generator::config[2067]: No configuration found. Jun 25 18:48:47.839638 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:48:47.919482 systemd[1]: Reloading finished in 280 ms. Jun 25 18:48:47.945922 waagent[1930]: 2024-06-25T18:48:47.944045Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 25 18:48:47.945922 waagent[1930]: 2024-06-25T18:48:47.944267Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 25 18:48:48.072085 waagent[1930]: 2024-06-25T18:48:48.071996Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 25 18:48:48.074782 waagent[1930]: 2024-06-25T18:48:48.074712Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 25 18:48:48.075610 waagent[1930]: 2024-06-25T18:48:48.075542Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 25 18:48:48.075756 waagent[1930]: 2024-06-25T18:48:48.075697Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:48:48.075855 waagent[1930]: 2024-06-25T18:48:48.075809Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:48:48.076207 waagent[1930]: 2024-06-25T18:48:48.076141Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 25 18:48:48.076648 waagent[1930]: 2024-06-25T18:48:48.076593Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 25 18:48:48.077130 waagent[1930]: 2024-06-25T18:48:48.077079Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 25 18:48:48.077210 waagent[1930]: 2024-06-25T18:48:48.077162Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:48:48.077322 waagent[1930]: 2024-06-25T18:48:48.077253Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:48:48.077526 waagent[1930]: 2024-06-25T18:48:48.077483Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
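The ExtHandler sequence above is the WireServer protocol in action: the agent reads the endpoint 168.63.129.16 from disk, fetches the goal state for incarnation 1, and downloads the referenced certificates. A rough sketch of the same fetch; the /machine/?comp=goalstate path and the x-ms-version header are assumptions taken from public WALinuxAgent sources, not from this log:

# Sketch: fetch the Azure WireServer goal state roughly the way waagent does.
# Only the endpoint address is taken from the log; the URL path and header
# values are assumptions based on public WALinuxAgent sources.
import urllib.request

WIRESERVER = "168.63.129.16"                                     # "Wire server endpoint" above
GOAL_STATE_URL = f"http://{WIRESERVER}/machine/?comp=goalstate"  # assumed path

def fetch_goal_state(timeout: float = 10.0) -> str:
    req = urllib.request.Request(
        GOAL_STATE_URL,
        headers={"x-ms-version": "2012-11-30"},  # assumed protocol version header
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode()  # XML with <Incarnation>, container and cert references

if __name__ == "__main__":
    # Only works from inside an Azure VM that can reach the WireServer.
    print(fetch_goal_state()[:400])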
Jun 25 18:48:48.077937 waagent[1930]: 2024-06-25T18:48:48.077886Z INFO EnvHandler ExtHandler Configure routes Jun 25 18:48:48.078011 waagent[1930]: 2024-06-25T18:48:48.077944Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 25 18:48:48.078011 waagent[1930]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 25 18:48:48.078011 waagent[1930]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jun 25 18:48:48.078011 waagent[1930]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 25 18:48:48.078011 waagent[1930]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:48:48.078011 waagent[1930]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:48:48.078011 waagent[1930]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:48:48.078684 waagent[1930]: 2024-06-25T18:48:48.078385Z INFO EnvHandler ExtHandler Gateway:None Jun 25 18:48:48.078684 waagent[1930]: 2024-06-25T18:48:48.078478Z INFO EnvHandler ExtHandler Routes:None Jun 25 18:48:48.078850 waagent[1930]: 2024-06-25T18:48:48.078791Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 25 18:48:48.079134 waagent[1930]: 2024-06-25T18:48:48.079073Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jun 25 18:48:48.079594 waagent[1930]: 2024-06-25T18:48:48.079552Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 25 18:48:48.085354 waagent[1930]: 2024-06-25T18:48:48.084750Z INFO ExtHandler ExtHandler Jun 25 18:48:48.085354 waagent[1930]: 2024-06-25T18:48:48.084848Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8b02d637-a9ee-48ce-b938-6eb666547aea correlation 5779d6e5-d7ce-4da0-81bc-03aeddd08742 created: 2024-06-25T18:47:17.766373Z] Jun 25 18:48:48.085354 waagent[1930]: 2024-06-25T18:48:48.085314Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
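The routing table the MonitorHandler prints is the raw /proc/net/route format, where Destination, Gateway and Mask are 32-bit values rendered as hex in host (little-endian) byte order: 0108C80A is 10.200.8.1 (the default gateway), 0008C80A with mask 00FFFFFF is 10.200.8.0/24, 10813FA8 is 168.63.129.16 (the WireServer) and FEA9FEA9 is 169.254.169.254 (the instance metadata service). A short sketch of that decoding:

# Sketch: decode the hex fields from the /proc/net/route dump shown above.
import socket
import struct

def hex_to_ip(h: str) -> str:
    """Convert a little-endian hex field from /proc/net/route to dotted-quad."""
    return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

# (Destination, Gateway, Mask) rows copied from the dump
rows = [
    ("00000000", "0108C80A", "00000000"),  # default route via 10.200.8.1
    ("0008C80A", "00000000", "00FFFFFF"),  # 10.200.8.0/24, on-link
    ("10813FA8", "0108C80A", "FFFFFFFF"),  # 168.63.129.16/32 (WireServer)
    ("FEA9FEA9", "0108C80A", "FFFFFFFF"),  # 169.254.169.254/32 (IMDS)
]

for dst, gw, mask in rows:
    print(f"dst={hex_to_ip(dst):15} gw={hex_to_ip(gw):15} mask={hex_to_ip(mask)}")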
Jun 25 18:48:48.088912 waagent[1930]: 2024-06-25T18:48:48.088724Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Jun 25 18:48:48.105114 waagent[1930]: 2024-06-25T18:48:48.105050Z INFO MonitorHandler ExtHandler Network interfaces: Jun 25 18:48:48.105114 waagent[1930]: Executing ['ip', '-a', '-o', 'link']: Jun 25 18:48:48.105114 waagent[1930]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 25 18:48:48.105114 waagent[1930]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:df:8d:4f brd ff:ff:ff:ff:ff:ff Jun 25 18:48:48.105114 waagent[1930]: 3: enP41709s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:df:8d:4f brd ff:ff:ff:ff:ff:ff\ altname enP41709p0s2 Jun 25 18:48:48.105114 waagent[1930]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 25 18:48:48.105114 waagent[1930]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 25 18:48:48.105114 waagent[1930]: 2: eth0 inet 10.200.8.15/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 25 18:48:48.105114 waagent[1930]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 25 18:48:48.105114 waagent[1930]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 25 18:48:48.105114 waagent[1930]: 2: eth0 inet6 fe80::6245:bdff:fedf:8d4f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 18:48:48.105114 waagent[1930]: 3: enP41709s1 inet6 fe80::6245:bdff:fedf:8d4f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 18:48:48.122442 waagent[1930]: 2024-06-25T18:48:48.122385Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 96F49D7E-1582-4312-959C-D64BEEDA345F;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 25 18:48:48.141170 waagent[1930]: 2024-06-25T18:48:48.141109Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jun 25 18:48:48.141170 waagent[1930]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:48:48.141170 waagent[1930]: pkts bytes target prot opt in out source destination Jun 25 18:48:48.141170 waagent[1930]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:48:48.141170 waagent[1930]: pkts bytes target prot opt in out source destination Jun 25 18:48:48.141170 waagent[1930]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:48:48.141170 waagent[1930]: pkts bytes target prot opt in out source destination Jun 25 18:48:48.141170 waagent[1930]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 18:48:48.141170 waagent[1930]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 18:48:48.141170 waagent[1930]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 18:48:48.144925 waagent[1930]: 2024-06-25T18:48:48.144834Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 25 18:48:48.144925 waagent[1930]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:48:48.144925 waagent[1930]: pkts bytes target prot opt in out source destination Jun 25 18:48:48.144925 waagent[1930]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:48:48.144925 waagent[1930]: pkts bytes target prot opt in out source destination Jun 25 18:48:48.144925 waagent[1930]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:48:48.144925 waagent[1930]: pkts bytes target prot opt in out source destination Jun 25 18:48:48.144925 waagent[1930]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 18:48:48.144925 waagent[1930]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 18:48:48.144925 waagent[1930]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 18:48:48.145343 waagent[1930]: 2024-06-25T18:48:48.145192Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 25 18:48:49.091676 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 18:48:49.097118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:48:49.190250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:48:49.195031 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:48:49.742950 kubelet[2161]: E0625 18:48:49.742895 2161 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:48:49.745227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:48:49.745442 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:48:59.196096 update_engine[1683]: I0625 18:48:59.196014 1683 update_attempter.cc:509] Updating boot flags... Jun 25 18:48:59.237024 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2169) Jun 25 18:48:59.841695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 18:48:59.848155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:48:59.949807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
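The "Azure fabric firewall rules" listed twice above amount to three OUTPUT-chain rules scoped to the WireServer: accept TCP to 168.63.129.16 port 53, accept TCP from processes owned by UID 0, and drop any other new or invalid connection to that address. A sketch that renders equivalent iptables commands; the rule intent is taken from the listing, the exact flag spelling is an assumption:

# Sketch: iptables commands equivalent to the OUTPUT-chain rules shown in the log.
import shlex
import subprocess

WIRESERVER = "168.63.129.16"

RULES = [
    # ACCEPT tcp dpt:53 to the WireServer (DNS)
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    # ACCEPT tcp owned by UID 0 (the agent itself runs as root)
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # DROP any other INVALID/NEW tcp connection to the WireServer
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

def apply(dry_run: bool = True) -> None:
    for rule in RULES:
        cmd = ["iptables", "-w"] + rule
        print(shlex.join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)  # requires root

if __name__ == "__main__":
    apply(dry_run=True)  # print only; set dry_run=False to install the rules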
Jun 25 18:48:59.959219 (kubelet)[2216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:48:59.996758 kubelet[2216]: E0625 18:48:59.996704 2216 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:48:59.999071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:48:59.999285 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:49:01.513646 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jun 25 18:49:10.091788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 18:49:10.097085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:49:10.190951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:49:10.197240 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:49:10.792547 kubelet[2232]: E0625 18:49:10.792475 2232 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:49:10.794981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:49:10.795175 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:49:20.841677 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jun 25 18:49:20.847117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:49:20.940013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:49:20.944979 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:49:21.520096 kubelet[2247]: E0625 18:49:21.520001 2247 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:49:21.522399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:49:21.522705 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:49:24.780209 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:49:24.786161 systemd[1]: Started sshd@0-10.200.8.15:22-10.200.16.10:38680.service - OpenSSH per-connection server daemon (10.200.16.10:38680). Jun 25 18:49:25.443763 sshd[2257]: Accepted publickey for core from 10.200.16.10 port 38680 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:49:25.445485 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:25.450960 systemd-logind[1680]: New session 3 of user core. 
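The kubelet restart attempts above land roughly 10 to 11.5 seconds apart, which is what a fixed restart delay of about 10 seconds (an assumption; the unit file is not shown here) plus the second or so the kubelet needs to start and fail would produce. Checking that against the timestamps copied from the log:

# Sketch: spacing between the "Scheduled restart job" events for kubelet.service.
from datetime import datetime

# timestamps of restart counters 1 through 6, copied from the log above
stamps = [
    "18:48:26.842011", "18:48:38.341755", "18:48:49.091676",
    "18:48:59.841695", "18:49:10.091788", "18:49:20.841677",
]

times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
gaps = [round((b - a).total_seconds(), 1) for a, b in zip(times, times[1:])]
print(gaps)  # -> [11.5, 10.7, 10.8, 10.3, 10.7]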
Jun 25 18:49:25.460025 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:49:26.010636 systemd[1]: Started sshd@1-10.200.8.15:22-10.200.16.10:38694.service - OpenSSH per-connection server daemon (10.200.16.10:38694). Jun 25 18:49:26.663811 sshd[2262]: Accepted publickey for core from 10.200.16.10 port 38694 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:49:26.665517 sshd[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:26.669782 systemd-logind[1680]: New session 4 of user core. Jun 25 18:49:26.677026 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:49:27.128392 sshd[2262]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:27.132759 systemd[1]: sshd@1-10.200.8.15:22-10.200.16.10:38694.service: Deactivated successfully. Jun 25 18:49:27.135048 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:49:27.135897 systemd-logind[1680]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:49:27.136953 systemd-logind[1680]: Removed session 4. Jun 25 18:49:27.240858 systemd[1]: Started sshd@2-10.200.8.15:22-10.200.16.10:38702.service - OpenSSH per-connection server daemon (10.200.16.10:38702). Jun 25 18:49:27.908518 sshd[2269]: Accepted publickey for core from 10.200.16.10 port 38702 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:49:27.910281 sshd[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:27.915830 systemd-logind[1680]: New session 5 of user core. Jun 25 18:49:27.922032 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:49:28.360018 sshd[2269]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:28.363537 systemd[1]: sshd@2-10.200.8.15:22-10.200.16.10:38702.service: Deactivated successfully. Jun 25 18:49:28.365922 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:49:28.367657 systemd-logind[1680]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:49:28.368896 systemd-logind[1680]: Removed session 5. Jun 25 18:49:28.473991 systemd[1]: Started sshd@3-10.200.8.15:22-10.200.16.10:38706.service - OpenSSH per-connection server daemon (10.200.16.10:38706). Jun 25 18:49:29.116222 sshd[2276]: Accepted publickey for core from 10.200.16.10 port 38706 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:49:29.117952 sshd[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:29.123441 systemd-logind[1680]: New session 6 of user core. Jun 25 18:49:29.130045 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:49:29.572046 sshd[2276]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:29.575495 systemd[1]: sshd@3-10.200.8.15:22-10.200.16.10:38706.service: Deactivated successfully. Jun 25 18:49:29.577672 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:49:29.579307 systemd-logind[1680]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:49:29.580245 systemd-logind[1680]: Removed session 6. Jun 25 18:49:29.691035 systemd[1]: Started sshd@4-10.200.8.15:22-10.200.16.10:38716.service - OpenSSH per-connection server daemon (10.200.16.10:38716). 
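Each accepted connection above logs the same key fingerprint, RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ. That string is the unpadded base64 encoding of the SHA-256 digest of the public key blob, so it can be recomputed from an authorized_keys entry; the key material itself is not in this log, so the sketch below works on whatever key you feed it:

# Sketch: recompute an OpenSSH "SHA256:..." fingerprint of the kind sshd logs above.
import base64
import hashlib
import sys

def fingerprint(authorized_keys_line: str) -> str:
    # line format: "<type> <base64-blob> [comment]", e.g. "ssh-rsa AAAAB3... core@host"
    blob_b64 = authorized_keys_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    # e.g.: ssh-keygen -y -f <private-key> | python3 this_script.py
    print(fingerprint(sys.stdin.readline()))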
Jun 25 18:49:30.336473 sshd[2283]: Accepted publickey for core from 10.200.16.10 port 38716 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:49:30.338270 sshd[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:30.342945 systemd-logind[1680]: New session 7 of user core. Jun 25 18:49:30.347015 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:49:30.730689 sudo[2286]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:49:30.731038 sudo[2286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:49:30.742084 sudo[2286]: pam_unix(sudo:session): session closed for user root Jun 25 18:49:30.846149 sshd[2283]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:30.849816 systemd[1]: sshd@4-10.200.8.15:22-10.200.16.10:38716.service: Deactivated successfully. Jun 25 18:49:30.852302 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:49:30.854003 systemd-logind[1680]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:49:30.855110 systemd-logind[1680]: Removed session 7. Jun 25 18:49:30.964843 systemd[1]: Started sshd@5-10.200.8.15:22-10.200.16.10:38728.service - OpenSSH per-connection server daemon (10.200.16.10:38728). Jun 25 18:49:31.591632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jun 25 18:49:31.597097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:49:31.616912 sshd[2291]: Accepted publickey for core from 10.200.16.10 port 38728 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:49:31.617750 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:31.624216 systemd-logind[1680]: New session 8 of user core. Jun 25 18:49:31.634057 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:49:31.705418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:49:31.716176 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:49:31.975006 sudo[2309]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:49:31.975333 sudo[2309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:49:31.978755 sudo[2309]: pam_unix(sudo:session): session closed for user root Jun 25 18:49:31.983397 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:49:31.983713 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:49:31.997216 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:49:31.998861 auditctl[2312]: No rules Jun 25 18:49:32.000046 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:49:32.000304 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:49:32.002069 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:49:32.252484 augenrules[2331]: No rules Jun 25 18:49:32.252424 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
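The sudo entries above follow sudo's default log format, "user : PWD=... ; USER=... ; COMMAND=...", which makes it straightforward to pull out who ran what as which user when auditing a boot like this one. A small sketch against one of the lines above:

# Sketch: parse the sudo log lines shown above into (user, cwd, run-as user, command).
import re

SUDO_RE = re.compile(
    r"sudo\[\d+\]:\s+(?P<user>\S+) : PWD=(?P<pwd>\S+) ; "
    r"USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.+)$"
)

line = ("Jun 25 18:49:31.975006 sudo[2309]: core : PWD=/home/core ; USER=root ; "
        "COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules "
        "/etc/audit/rules.d/99-default.rules")

m = SUDO_RE.search(line)
if m:
    print(f"{m['user']} -> {m['runas']} in {m['pwd']}: {m['cmd']}")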
Jun 25 18:49:32.255900 sudo[2308]: pam_unix(sudo:session): session closed for user root Jun 25 18:49:32.265371 kubelet[2302]: E0625 18:49:32.265330 2302 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:49:32.266762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:49:32.266948 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:49:32.363834 sshd[2291]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:32.368277 systemd[1]: sshd@5-10.200.8.15:22-10.200.16.10:38728.service: Deactivated successfully. Jun 25 18:49:32.370422 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:49:32.371284 systemd-logind[1680]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:49:32.372251 systemd-logind[1680]: Removed session 8. Jun 25 18:49:32.478865 systemd[1]: Started sshd@6-10.200.8.15:22-10.200.16.10:38732.service - OpenSSH per-connection server daemon (10.200.16.10:38732). Jun 25 18:49:33.135806 sshd[2340]: Accepted publickey for core from 10.200.16.10 port 38732 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:49:33.137489 sshd[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:33.141429 systemd-logind[1680]: New session 9 of user core. Jun 25 18:49:33.153249 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:49:33.490774 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:49:33.491128 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:49:33.664183 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:49:33.664269 (dockerd)[2352]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:49:34.038865 dockerd[2352]: time="2024-06-25T18:49:34.038802490Z" level=info msg="Starting up" Jun 25 18:49:34.173418 dockerd[2352]: time="2024-06-25T18:49:34.173372445Z" level=info msg="Loading containers: start." Jun 25 18:49:34.297900 kernel: Initializing XFRM netlink socket Jun 25 18:49:34.371836 systemd-networkd[1590]: docker0: Link UP Jun 25 18:49:34.400097 dockerd[2352]: time="2024-06-25T18:49:34.400053244Z" level=info msg="Loading containers: done." Jun 25 18:49:34.510105 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4285152405-merged.mount: Deactivated successfully. 
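dockerd is now up, and a few entries later it reports "API listen on /run/docker.sock"; the earlier systemd warning about /var/run/docker.sock only concerns the legacy path alias for the same socket. A minimal liveness check against that socket using the Engine API's /_ping endpoint, with nothing but the standard library:

# Sketch: ping the Docker Engine API over /run/docker.sock (the path from the log).
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that connects to a unix-domain socket instead of TCP."""

    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

if __name__ == "__main__":
    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/_ping")              # Engine API liveness endpoint
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())   # expect: 200 OK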
Jun 25 18:49:34.519403 dockerd[2352]: time="2024-06-25T18:49:34.519357643Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:49:34.519617 dockerd[2352]: time="2024-06-25T18:49:34.519581447Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:49:34.519728 dockerd[2352]: time="2024-06-25T18:49:34.519704749Z" level=info msg="Daemon has completed initialization" Jun 25 18:49:34.585168 dockerd[2352]: time="2024-06-25T18:49:34.585058844Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:49:34.585413 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:49:35.473708 containerd[1702]: time="2024-06-25T18:49:35.473665136Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jun 25 18:49:36.174252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3406862929.mount: Deactivated successfully. Jun 25 18:49:38.035158 containerd[1702]: time="2024-06-25T18:49:38.035093262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:38.039531 containerd[1702]: time="2024-06-25T18:49:38.039387833Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771809" Jun 25 18:49:38.042797 containerd[1702]: time="2024-06-25T18:49:38.042739690Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:38.049227 containerd[1702]: time="2024-06-25T18:49:38.049161297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:38.050450 containerd[1702]: time="2024-06-25T18:49:38.050263716Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 2.576553779s" Jun 25 18:49:38.050450 containerd[1702]: time="2024-06-25T18:49:38.050306916Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jun 25 18:49:38.071664 containerd[1702]: time="2024-06-25T18:49:38.071618874Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jun 25 18:49:40.094449 containerd[1702]: time="2024-06-25T18:49:40.094389872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:40.101614 containerd[1702]: time="2024-06-25T18:49:40.101542392Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588682" Jun 25 18:49:40.110033 containerd[1702]: time="2024-06-25T18:49:40.109993034Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:40.121795 containerd[1702]: time="2024-06-25T18:49:40.121730230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:40.122896 containerd[1702]: time="2024-06-25T18:49:40.122744347Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 2.051079173s" Jun 25 18:49:40.122896 containerd[1702]: time="2024-06-25T18:49:40.122788048Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jun 25 18:49:40.144946 containerd[1702]: time="2024-06-25T18:49:40.144905819Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jun 25 18:49:41.435222 containerd[1702]: time="2024-06-25T18:49:41.435095540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:41.439004 containerd[1702]: time="2024-06-25T18:49:41.438739401Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778128" Jun 25 18:49:41.444692 containerd[1702]: time="2024-06-25T18:49:41.444625900Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:41.450271 containerd[1702]: time="2024-06-25T18:49:41.450209393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:41.451331 containerd[1702]: time="2024-06-25T18:49:41.451190510Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 1.306241691s" Jun 25 18:49:41.451331 containerd[1702]: time="2024-06-25T18:49:41.451229511Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jun 25 18:49:41.472297 containerd[1702]: time="2024-06-25T18:49:41.472254863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jun 25 18:49:42.341551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jun 25 18:49:42.347096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:49:42.913254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
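Each pull above is reported three ways: an image id (the digest of the image configuration), a repo tag (name plus version) and a repo digest (name pinned to the manifest digest). A small sketch that splits references like the ones in these entries into their parts; it only handles fully qualified names of the form used here:

# Sketch: split the image references containerd logs into registry/repository/tag/digest.
from typing import NamedTuple, Optional

class ImageRef(NamedTuple):
    registry: str
    repository: str
    tag: Optional[str]
    digest: Optional[str]

def parse(ref: str) -> ImageRef:
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)        # "...@sha256:..." form (repo digest)
    tag = None
    if ":" in ref.rsplit("/", 1)[-1]:          # a tag only if the colon follows the last "/"
        ref, tag = ref.rsplit(":", 1)
    registry, _, repository = ref.partition("/")
    return ImageRef(registry, repository, tag, digest)

print(parse("registry.k8s.io/kube-scheduler:v1.30.2"))
print(parse("registry.k8s.io/kube-scheduler"
            "@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc"))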
Jun 25 18:49:42.917817 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:49:42.955073 kubelet[2562]: E0625 18:49:42.955020 2562 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:49:42.957475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:49:42.957694 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:49:43.437820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618986073.mount: Deactivated successfully. Jun 25 18:49:43.922824 containerd[1702]: time="2024-06-25T18:49:43.922760047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:43.925074 containerd[1702]: time="2024-06-25T18:49:43.924998084Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035446" Jun 25 18:49:43.928799 containerd[1702]: time="2024-06-25T18:49:43.928736445Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:43.934423 containerd[1702]: time="2024-06-25T18:49:43.934368838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:43.935447 containerd[1702]: time="2024-06-25T18:49:43.934993148Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 2.462691685s" Jun 25 18:49:43.935447 containerd[1702]: time="2024-06-25T18:49:43.935032949Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jun 25 18:49:43.955972 containerd[1702]: time="2024-06-25T18:49:43.955937293Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 18:49:44.614448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3288815404.mount: Deactivated successfully. 
Jun 25 18:49:45.940572 containerd[1702]: time="2024-06-25T18:49:45.940510054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:45.944210 containerd[1702]: time="2024-06-25T18:49:45.944154314Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jun 25 18:49:45.948235 containerd[1702]: time="2024-06-25T18:49:45.948179380Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:45.953179 containerd[1702]: time="2024-06-25T18:49:45.953098261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:45.955110 containerd[1702]: time="2024-06-25T18:49:45.954149178Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.998171985s" Jun 25 18:49:45.955110 containerd[1702]: time="2024-06-25T18:49:45.954188079Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 25 18:49:45.974722 containerd[1702]: time="2024-06-25T18:49:45.974677316Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:49:46.970667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3245200237.mount: Deactivated successfully. 
Jun 25 18:49:46.999556 containerd[1702]: time="2024-06-25T18:49:46.999503882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:47.003232 containerd[1702]: time="2024-06-25T18:49:47.003172942Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jun 25 18:49:47.018925 containerd[1702]: time="2024-06-25T18:49:47.018842800Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:47.024725 containerd[1702]: time="2024-06-25T18:49:47.024659396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:47.026472 containerd[1702]: time="2024-06-25T18:49:47.025715513Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.050981096s" Jun 25 18:49:47.026472 containerd[1702]: time="2024-06-25T18:49:47.025759714Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 18:49:47.046590 containerd[1702]: time="2024-06-25T18:49:47.046557756Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jun 25 18:49:47.694321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3728849255.mount: Deactivated successfully. Jun 25 18:49:50.184842 containerd[1702]: time="2024-06-25T18:49:50.184774385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:50.187582 containerd[1702]: time="2024-06-25T18:49:50.187507835Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jun 25 18:49:50.191169 containerd[1702]: time="2024-06-25T18:49:50.191127800Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:50.198210 containerd[1702]: time="2024-06-25T18:49:50.198148626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:49:50.199404 containerd[1702]: time="2024-06-25T18:49:50.199258746Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.152660588s" Jun 25 18:49:50.199404 containerd[1702]: time="2024-06-25T18:49:50.199298646Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jun 25 18:49:52.831727 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
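containerd reports every pull with a size and an elapsed time; dividing the two gives effective rates of roughly 9 to 18 MB/s for the Kubernetes images on this VM, while the tiny pause image is dominated by request latency rather than bandwidth. A quick tabulation using the figures copied from the log:

# Sketch: effective pull throughput from the (size, duration) pairs containerd logged.
pulls = {
    # image: (reported size in bytes, pull duration in seconds), copied from the log
    "kube-apiserver:v1.30.2":          (32768601, 2.576553779),
    "kube-controller-manager:v1.30.2": (31138657, 2.051079173),
    "kube-scheduler:v1.30.2":          (19328121, 1.306241691),
    "kube-proxy:v1.30.2":              (29034457, 2.462691685),
    "coredns:v1.11.1":                 (18182961, 1.998171985),
    "pause:3.9":                       (321520,   1.050981096),
    "etcd:3.5.12-0":                   (57236178, 3.152660588),
}

for image, (size, seconds) in pulls.items():
    print(f"{image:35} {size / seconds / 1e6:6.1f} MB/s")  # e.g. etcd ~18.2 MB/s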
Jun 25 18:49:52.840148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:49:52.868089 systemd[1]: Reloading requested from client PID 2752 ('systemctl') (unit session-9.scope)... Jun 25 18:49:52.868103 systemd[1]: Reloading... Jun 25 18:49:52.950931 zram_generator::config[2787]: No configuration found. Jun 25 18:49:53.079964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:49:53.161002 systemd[1]: Reloading finished in 292 ms. Jun 25 18:49:53.297745 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:49:53.297916 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:49:53.298477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:49:53.310192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:49:54.237792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:49:54.244360 (kubelet)[2856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:49:54.857178 kubelet[2856]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:49:54.857178 kubelet[2856]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:49:54.857178 kubelet[2856]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:49:54.857646 kubelet[2856]: I0625 18:49:54.857236 2856 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:49:55.279515 kubelet[2856]: I0625 18:49:55.277853 2856 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 18:49:55.279515 kubelet[2856]: I0625 18:49:55.277897 2856 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:49:55.279515 kubelet[2856]: I0625 18:49:55.278121 2856 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 18:49:55.293816 kubelet[2856]: I0625 18:49:55.293786 2856 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:49:55.294470 kubelet[2856]: E0625 18:49:55.294376 2856 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:55.307235 kubelet[2856]: I0625 18:49:55.306850 2856 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:49:55.308251 kubelet[2856]: I0625 18:49:55.308197 2856 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:49:55.308438 kubelet[2856]: I0625 18:49:55.308247 2856 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4012.0.0-a-c5aaeb7e49","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:49:55.309040 kubelet[2856]: I0625 18:49:55.309019 2856 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:49:55.309119 kubelet[2856]: I0625 18:49:55.309045 2856 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:49:55.309193 kubelet[2856]: I0625 18:49:55.309176 2856 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:49:55.310060 kubelet[2856]: I0625 18:49:55.310042 2856 kubelet.go:400] "Attempting to sync node with API server" Jun 25 18:49:55.310060 kubelet[2856]: I0625 18:49:55.310063 2856 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:49:55.310322 kubelet[2856]: I0625 18:49:55.310088 2856 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:49:55.310322 kubelet[2856]: I0625 18:49:55.310110 2856 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:49:55.314511 kubelet[2856]: W0625 18:49:55.314362 2856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:55.314511 kubelet[2856]: E0625 18:49:55.314421 2856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:55.316561 kubelet[2856]: W0625 18:49:55.316177 2856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-c5aaeb7e49&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:55.316561 kubelet[2856]: E0625 18:49:55.316236 2856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-c5aaeb7e49&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:55.316561 kubelet[2856]: I0625 18:49:55.316341 2856 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:49:55.318678 kubelet[2856]: I0625 18:49:55.317985 2856 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:49:55.318678 kubelet[2856]: W0625 18:49:55.318046 2856 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 18:49:55.318789 kubelet[2856]: I0625 18:49:55.318711 2856 server.go:1264] "Started kubelet" Jun 25 18:49:55.319709 kubelet[2856]: I0625 18:49:55.319668 2856 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:49:55.320819 kubelet[2856]: I0625 18:49:55.320795 2856 server.go:455] "Adding debug handlers to kubelet server" Jun 25 18:49:55.324525 kubelet[2856]: I0625 18:49:55.323961 2856 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:49:55.324525 kubelet[2856]: I0625 18:49:55.324254 2856 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:49:55.324525 kubelet[2856]: E0625 18:49:55.324402 2856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4012.0.0-a-c5aaeb7e49.17dc53e3bb983513 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012.0.0-a-c5aaeb7e49,UID:ci-4012.0.0-a-c5aaeb7e49,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012.0.0-a-c5aaeb7e49,},FirstTimestamp:2024-06-25 18:49:55.318682899 +0000 UTC m=+1.069892718,LastTimestamp:2024-06-25 18:49:55.318682899 +0000 UTC m=+1.069892718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.0.0-a-c5aaeb7e49,}" Jun 25 18:49:55.324757 kubelet[2856]: I0625 18:49:55.324727 2856 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:49:55.328945 kubelet[2856]: E0625 18:49:55.328925 2856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-c5aaeb7e49\" not found" Jun 25 18:49:55.329475 kubelet[2856]: I0625 18:49:55.329459 2856 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:49:55.329733 kubelet[2856]: I0625 18:49:55.329718 2856 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 18:49:55.329865 kubelet[2856]: I0625 18:49:55.329853 2856 reconciler.go:26] "Reconciler: start to sync state" Jun 25 18:49:55.330318 kubelet[2856]: W0625 18:49:55.330274 2856 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:55.330434 kubelet[2856]: E0625 18:49:55.330421 2856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:55.331764 kubelet[2856]: E0625 18:49:55.331182 2856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-c5aaeb7e49?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="200ms" Jun 25 18:49:55.332268 kubelet[2856]: I0625 18:49:55.332249 2856 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:49:55.332462 kubelet[2856]: I0625 18:49:55.332443 2856 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:49:55.334226 kubelet[2856]: I0625 18:49:55.334208 2856 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:49:55.340306 kubelet[2856]: E0625 18:49:55.340284 2856 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:49:55.358511 kubelet[2856]: I0625 18:49:55.358343 2856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:49:55.360517 kubelet[2856]: I0625 18:49:55.360397 2856 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:49:55.360517 kubelet[2856]: I0625 18:49:55.360440 2856 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:49:55.360517 kubelet[2856]: I0625 18:49:55.360464 2856 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 18:49:55.360810 kubelet[2856]: E0625 18:49:55.360711 2856 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:49:55.362088 kubelet[2856]: W0625 18:49:55.362002 2856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:55.362088 kubelet[2856]: E0625 18:49:55.362064 2856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:55.380632 kubelet[2856]: I0625 18:49:55.380607 2856 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:49:55.380632 kubelet[2856]: I0625 18:49:55.380623 2856 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:49:55.380784 kubelet[2856]: I0625 18:49:55.380643 2856 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:49:55.385711 kubelet[2856]: I0625 18:49:55.385684 2856 policy_none.go:49] "None policy: Start" Jun 25 18:49:55.386289 kubelet[2856]: I0625 18:49:55.386213 2856 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:49:55.386388 kubelet[2856]: I0625 18:49:55.386313 2856 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:49:55.395849 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:49:55.410843 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:49:55.414113 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
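With the systemd cgroup driver and cgroups-per-qos enabled (see the NodeConfig dump above), the kubelet creates kubepods.slice plus one child slice per QoS class, and every pod then gets a slice named after its QoS class and UID, exactly as the kube-system static pods do a few entries below. A sketch of that naming scheme; the dash-to-underscore escaping for UIDs containing '-' is an assumption, and the UIDs in this log contain none:

# Sketch: systemd slice names the kubelet creates for pods under kubepods.slice,
# matching "kubepods-burstable-pod<uid>.slice" later in the log.
def pod_slice(qos_class: str, pod_uid: str) -> str:
    uid = pod_uid.replace("-", "_")            # assumed escaping; not exercised here
    if qos_class == "guaranteed":              # guaranteed pods sit directly under kubepods
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos_class}-pod{uid}.slice"

print(pod_slice("burstable", "c111604c4abfb69ba73eae50b3553fb4"))
# -> kubepods-burstable-podc111604c4abfb69ba73eae50b3553fb4.slice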
Jun 25 18:49:55.424570 kubelet[2856]: I0625 18:49:55.424547 2856 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:49:55.424813 kubelet[2856]: I0625 18:49:55.424771 2856 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 18:49:55.424949 kubelet[2856]: I0625 18:49:55.424934 2856 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:49:55.427168 kubelet[2856]: E0625 18:49:55.427138 2856 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.0.0-a-c5aaeb7e49\" not found" Jun 25 18:49:55.431303 kubelet[2856]: I0625 18:49:55.431280 2856 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.431626 kubelet[2856]: E0625 18:49:55.431603 2856 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.461017 kubelet[2856]: I0625 18:49:55.460946 2856 topology_manager.go:215] "Topology Admit Handler" podUID="c111604c4abfb69ba73eae50b3553fb4" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.462889 kubelet[2856]: I0625 18:49:55.462843 2856 topology_manager.go:215] "Topology Admit Handler" podUID="57499cf3a89df435571a938e1489f479" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.468750 kubelet[2856]: I0625 18:49:55.468400 2856 topology_manager.go:215] "Topology Admit Handler" podUID="9ec7a8f2b953c18e7890d61185164baa" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.476408 systemd[1]: Created slice kubepods-burstable-podc111604c4abfb69ba73eae50b3553fb4.slice - libcontainer container kubepods-burstable-podc111604c4abfb69ba73eae50b3553fb4.slice. Jun 25 18:49:55.489569 systemd[1]: Created slice kubepods-burstable-pod57499cf3a89df435571a938e1489f479.slice - libcontainer container kubepods-burstable-pod57499cf3a89df435571a938e1489f479.slice. Jun 25 18:49:55.500664 systemd[1]: Created slice kubepods-burstable-pod9ec7a8f2b953c18e7890d61185164baa.slice - libcontainer container kubepods-burstable-pod9ec7a8f2b953c18e7890d61185164baa.slice. 
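All of the "connection refused" errors against https://10.200.8.15:6443 above are expected at this stage: the kubelet itself is about to launch kube-apiserver as a static pod from /etc/kubernetes/manifests, so nothing is listening on 6443 yet, and the reflector, lease and node-registration calls keep failing until that pod comes up. A quick probe that distinguishes a closed port from a filtered or unreachable one, using only the address from the log:

# Sketch: probe the API server endpoint from the log and classify the failure mode.
import socket

APISERVER = ("10.200.8.15", 6443)   # address the kubelet is dialing above

def probe(addr=APISERVER, timeout: float = 3.0) -> str:
    try:
        with socket.create_connection(addr, timeout=timeout):
            return "open: something is listening"
    except ConnectionRefusedError:
        return "connection refused: port closed, apiserver not up yet"
    except (socket.timeout, TimeoutError):
        return "timeout: filtered or host unreachable"
    except OSError as exc:
        return f"error: {exc}"

if __name__ == "__main__":
    print(probe())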
Jun 25 18:49:55.532034 kubelet[2856]: E0625 18:49:55.531906 2856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-c5aaeb7e49?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="400ms" Jun 25 18:49:55.631440 kubelet[2856]: I0625 18:49:55.631332 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c111604c4abfb69ba73eae50b3553fb4-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"c111604c4abfb69ba73eae50b3553fb4\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.631440 kubelet[2856]: I0625 18:49:55.631412 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/57499cf3a89df435571a938e1489f479-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"57499cf3a89df435571a938e1489f479\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.631440 kubelet[2856]: I0625 18:49:55.631446 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57499cf3a89df435571a938e1489f479-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"57499cf3a89df435571a938e1489f479\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.631798 kubelet[2856]: I0625 18:49:55.631482 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57499cf3a89df435571a938e1489f479-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"57499cf3a89df435571a938e1489f479\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.631798 kubelet[2856]: I0625 18:49:55.631511 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c111604c4abfb69ba73eae50b3553fb4-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"c111604c4abfb69ba73eae50b3553fb4\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.631798 kubelet[2856]: I0625 18:49:55.631544 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c111604c4abfb69ba73eae50b3553fb4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"c111604c4abfb69ba73eae50b3553fb4\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.631798 kubelet[2856]: I0625 18:49:55.631568 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57499cf3a89df435571a938e1489f479-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"57499cf3a89df435571a938e1489f479\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.631798 kubelet[2856]: I0625 18:49:55.631594 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/57499cf3a89df435571a938e1489f479-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"57499cf3a89df435571a938e1489f479\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.632074 kubelet[2856]: I0625 18:49:55.631619 2856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ec7a8f2b953c18e7890d61185164baa-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"9ec7a8f2b953c18e7890d61185164baa\") " pod="kube-system/kube-scheduler-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.633840 kubelet[2856]: I0625 18:49:55.633779 2856 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.634203 kubelet[2856]: E0625 18:49:55.634172 2856 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:55.788924 containerd[1702]: time="2024-06-25T18:49:55.788776442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-a-c5aaeb7e49,Uid:c111604c4abfb69ba73eae50b3553fb4,Namespace:kube-system,Attempt:0,}" Jun 25 18:49:55.800546 containerd[1702]: time="2024-06-25T18:49:55.800469752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49,Uid:57499cf3a89df435571a938e1489f479,Namespace:kube-system,Attempt:0,}" Jun 25 18:49:55.803124 containerd[1702]: time="2024-06-25T18:49:55.803092699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-a-c5aaeb7e49,Uid:9ec7a8f2b953c18e7890d61185164baa,Namespace:kube-system,Attempt:0,}" Jun 25 18:49:55.932648 kubelet[2856]: E0625 18:49:55.932589 2856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-c5aaeb7e49?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="800ms" Jun 25 18:49:56.036602 kubelet[2856]: I0625 18:49:56.036572 2856 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:56.036947 kubelet[2856]: E0625 18:49:56.036917 2856 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:56.147897 kubelet[2856]: W0625 18:49:56.147775 2856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:56.147897 kubelet[2856]: E0625 18:49:56.147846 2856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:56.302400 kubelet[2856]: W0625 18:49:56.302326 2856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-c5aaeb7e49&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 
18:49:56.302400 kubelet[2856]: E0625 18:49:56.302404 2856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-c5aaeb7e49&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:56.336401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633584726.mount: Deactivated successfully. Jun 25 18:49:56.342670 kubelet[2856]: W0625 18:49:56.342611 2856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:56.342952 kubelet[2856]: E0625 18:49:56.342678 2856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:56.375115 containerd[1702]: time="2024-06-25T18:49:56.375067973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:49:56.379341 containerd[1702]: time="2024-06-25T18:49:56.379285549Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jun 25 18:49:56.384282 containerd[1702]: time="2024-06-25T18:49:56.384244638Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:49:56.388594 containerd[1702]: time="2024-06-25T18:49:56.388022006Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:49:56.393462 containerd[1702]: time="2024-06-25T18:49:56.393404002Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:49:56.398033 containerd[1702]: time="2024-06-25T18:49:56.397933384Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:49:56.401074 containerd[1702]: time="2024-06-25T18:49:56.400776135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:49:56.405915 containerd[1702]: time="2024-06-25T18:49:56.405862126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:49:56.406681 containerd[1702]: time="2024-06-25T18:49:56.406647240Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 606.040485ms" Jun 25 18:49:56.408234 containerd[1702]: 
time="2024-06-25T18:49:56.408198068Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 619.270023ms" Jun 25 18:49:56.412380 containerd[1702]: time="2024-06-25T18:49:56.412345843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.174641ms" Jun 25 18:49:56.691915 containerd[1702]: time="2024-06-25T18:49:56.691490256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:49:56.692686 containerd[1702]: time="2024-06-25T18:49:56.691862163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:49:56.692686 containerd[1702]: time="2024-06-25T18:49:56.692591876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:49:56.692954 containerd[1702]: time="2024-06-25T18:49:56.692797080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:49:56.695625 containerd[1702]: time="2024-06-25T18:49:56.695276324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:49:56.695625 containerd[1702]: time="2024-06-25T18:49:56.695334025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:49:56.695625 containerd[1702]: time="2024-06-25T18:49:56.695370826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:49:56.695625 containerd[1702]: time="2024-06-25T18:49:56.695393127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:49:56.698682 containerd[1702]: time="2024-06-25T18:49:56.698386380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:49:56.698682 containerd[1702]: time="2024-06-25T18:49:56.698446381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:49:56.698682 containerd[1702]: time="2024-06-25T18:49:56.698473982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:49:56.698682 containerd[1702]: time="2024-06-25T18:49:56.698493582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:49:56.722236 kubelet[2856]: W0625 18:49:56.720283 2856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:56.722236 kubelet[2856]: E0625 18:49:56.720371 2856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Jun 25 18:49:56.727083 systemd[1]: Started cri-containerd-d64a827134b9c31d9184b27d0882bc902b61e72c538cc4f606a8e6f11a61f304.scope - libcontainer container d64a827134b9c31d9184b27d0882bc902b61e72c538cc4f606a8e6f11a61f304. Jun 25 18:49:56.740785 kubelet[2856]: E0625 18:49:56.740741 2856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-c5aaeb7e49?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="1.6s" Jun 25 18:49:56.752060 systemd[1]: Started cri-containerd-8fc219d50369f64b6154d3f9e69b288e505fe223c8a216fe3c8272a7f40cb097.scope - libcontainer container 8fc219d50369f64b6154d3f9e69b288e505fe223c8a216fe3c8272a7f40cb097. Jun 25 18:49:56.754206 systemd[1]: Started cri-containerd-d8d540124f37bcb38373c672c339547ebc68e46db8d5fc219dfc15fb82f1689c.scope - libcontainer container d8d540124f37bcb38373c672c339547ebc68e46db8d5fc219dfc15fb82f1689c. Jun 25 18:49:56.827158 containerd[1702]: time="2024-06-25T18:49:56.826999790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-a-c5aaeb7e49,Uid:9ec7a8f2b953c18e7890d61185164baa,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8d540124f37bcb38373c672c339547ebc68e46db8d5fc219dfc15fb82f1689c\"" Jun 25 18:49:56.837788 containerd[1702]: time="2024-06-25T18:49:56.836932569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49,Uid:57499cf3a89df435571a938e1489f479,Namespace:kube-system,Attempt:0,} returns sandbox id \"d64a827134b9c31d9184b27d0882bc902b61e72c538cc4f606a8e6f11a61f304\"" Jun 25 18:49:56.839770 containerd[1702]: time="2024-06-25T18:49:56.839314912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-a-c5aaeb7e49,Uid:c111604c4abfb69ba73eae50b3553fb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fc219d50369f64b6154d3f9e69b288e505fe223c8a216fe3c8272a7f40cb097\"" Jun 25 18:49:56.840648 kubelet[2856]: I0625 18:49:56.840272 2856 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:56.840648 kubelet[2856]: E0625 18:49:56.840614 2856 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:56.843181 containerd[1702]: time="2024-06-25T18:49:56.843150481Z" level=info msg="CreateContainer within sandbox \"d64a827134b9c31d9184b27d0882bc902b61e72c538cc4f606a8e6f11a61f304\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:49:56.843352 containerd[1702]: time="2024-06-25T18:49:56.843320084Z" level=info msg="CreateContainer within 
sandbox \"8fc219d50369f64b6154d3f9e69b288e505fe223c8a216fe3c8272a7f40cb097\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:49:56.843617 containerd[1702]: time="2024-06-25T18:49:56.843592688Z" level=info msg="CreateContainer within sandbox \"d8d540124f37bcb38373c672c339547ebc68e46db8d5fc219dfc15fb82f1689c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:49:56.904806 containerd[1702]: time="2024-06-25T18:49:56.904755587Z" level=info msg="CreateContainer within sandbox \"d64a827134b9c31d9184b27d0882bc902b61e72c538cc4f606a8e6f11a61f304\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c41debbf45013c86eb27febc5c41255a6657bc99631fc6107409868405b04cf3\"" Jun 25 18:49:56.905447 containerd[1702]: time="2024-06-25T18:49:56.905416499Z" level=info msg="StartContainer for \"c41debbf45013c86eb27febc5c41255a6657bc99631fc6107409868405b04cf3\"" Jun 25 18:49:56.930046 systemd[1]: Started cri-containerd-c41debbf45013c86eb27febc5c41255a6657bc99631fc6107409868405b04cf3.scope - libcontainer container c41debbf45013c86eb27febc5c41255a6657bc99631fc6107409868405b04cf3. Jun 25 18:49:56.943584 containerd[1702]: time="2024-06-25T18:49:56.943389381Z" level=info msg="CreateContainer within sandbox \"8fc219d50369f64b6154d3f9e69b288e505fe223c8a216fe3c8272a7f40cb097\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"778507357f48ff01110002eaf5e49f6d2d768fe21ddf50be0045c9aefe0d3db0\"" Jun 25 18:49:56.945610 containerd[1702]: time="2024-06-25T18:49:56.945068011Z" level=info msg="StartContainer for \"778507357f48ff01110002eaf5e49f6d2d768fe21ddf50be0045c9aefe0d3db0\"" Jun 25 18:49:56.961891 containerd[1702]: time="2024-06-25T18:49:56.960081981Z" level=info msg="CreateContainer within sandbox \"d8d540124f37bcb38373c672c339547ebc68e46db8d5fc219dfc15fb82f1689c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0fd381d20334d268c783fe43760be40c4ed6d56289a679bfa31091033b73b4cb\"" Jun 25 18:49:56.963704 containerd[1702]: time="2024-06-25T18:49:56.962528625Z" level=info msg="StartContainer for \"0fd381d20334d268c783fe43760be40c4ed6d56289a679bfa31091033b73b4cb\"" Jun 25 18:49:56.982246 systemd[1]: Started cri-containerd-778507357f48ff01110002eaf5e49f6d2d768fe21ddf50be0045c9aefe0d3db0.scope - libcontainer container 778507357f48ff01110002eaf5e49f6d2d768fe21ddf50be0045c9aefe0d3db0. Jun 25 18:49:57.008683 systemd[1]: Started cri-containerd-0fd381d20334d268c783fe43760be40c4ed6d56289a679bfa31091033b73b4cb.scope - libcontainer container 0fd381d20334d268c783fe43760be40c4ed6d56289a679bfa31091033b73b4cb. 
Jun 25 18:49:57.016420 containerd[1702]: time="2024-06-25T18:49:57.016292990Z" level=info msg="StartContainer for \"c41debbf45013c86eb27febc5c41255a6657bc99631fc6107409868405b04cf3\" returns successfully" Jun 25 18:49:57.096192 containerd[1702]: time="2024-06-25T18:49:57.096136225Z" level=info msg="StartContainer for \"778507357f48ff01110002eaf5e49f6d2d768fe21ddf50be0045c9aefe0d3db0\" returns successfully" Jun 25 18:49:57.128320 containerd[1702]: time="2024-06-25T18:49:57.128265502Z" level=info msg="StartContainer for \"0fd381d20334d268c783fe43760be40c4ed6d56289a679bfa31091033b73b4cb\" returns successfully" Jun 25 18:49:58.444629 kubelet[2856]: I0625 18:49:58.444093 2856 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:59.144898 kubelet[2856]: E0625 18:49:59.144422 2856 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012.0.0-a-c5aaeb7e49\" not found" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:59.230564 kubelet[2856]: I0625 18:49:59.229789 2856 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:49:59.296997 kubelet[2856]: E0625 18:49:59.296818 2856 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4012.0.0-a-c5aaeb7e49.17dc53e3bb983513 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012.0.0-a-c5aaeb7e49,UID:ci-4012.0.0-a-c5aaeb7e49,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012.0.0-a-c5aaeb7e49,},FirstTimestamp:2024-06-25 18:49:55.318682899 +0000 UTC m=+1.069892718,LastTimestamp:2024-06-25 18:49:55.318682899 +0000 UTC m=+1.069892718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.0.0-a-c5aaeb7e49,}" Jun 25 18:49:59.315386 kubelet[2856]: I0625 18:49:59.315341 2856 apiserver.go:52] "Watching apiserver" Jun 25 18:49:59.330685 kubelet[2856]: I0625 18:49:59.330614 2856 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 18:49:59.404588 kubelet[2856]: E0625 18:49:59.404363 2856 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4012.0.0-a-c5aaeb7e49.17dc53e3bce1acc6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012.0.0-a-c5aaeb7e49,UID:ci-4012.0.0-a-c5aaeb7e49,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4012.0.0-a-c5aaeb7e49,},FirstTimestamp:2024-06-25 18:49:55.340274886 +0000 UTC m=+1.091484705,LastTimestamp:2024-06-25 18:49:55.340274886 +0000 UTC m=+1.091484705,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.0.0-a-c5aaeb7e49,}" Jun 25 18:49:59.488601 kubelet[2856]: E0625 18:49:59.488449 2856 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4012.0.0-a-c5aaeb7e49.17dc53e3bf3f02d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012.0.0-a-c5aaeb7e49,UID:ci-4012.0.0-a-c5aaeb7e49,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4012.0.0-a-c5aaeb7e49 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4012.0.0-a-c5aaeb7e49,},FirstTimestamp:2024-06-25 18:49:55.379946199 +0000 UTC m=+1.131155918,LastTimestamp:2024-06-25 18:49:55.379946199 +0000 UTC m=+1.131155918,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.0.0-a-c5aaeb7e49,}" Jun 25 18:50:00.741747 kubelet[2856]: W0625 18:50:00.741647 2856 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:50:01.276641 systemd[1]: Reloading requested from client PID 3131 ('systemctl') (unit session-9.scope)... Jun 25 18:50:01.276661 systemd[1]: Reloading... Jun 25 18:50:01.373011 zram_generator::config[3168]: No configuration found. Jun 25 18:50:01.497014 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:50:01.590831 systemd[1]: Reloading finished in 313 ms. Jun 25 18:50:01.633087 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:50:01.645124 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:50:01.645375 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:50:01.650163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:50:01.745326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:50:01.753146 (kubelet)[3235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:50:02.320263 kubelet[3235]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:50:02.320263 kubelet[3235]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:50:02.320263 kubelet[3235]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:50:02.320263 kubelet[3235]: I0625 18:50:02.319264 3235 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:50:02.328565 kubelet[3235]: I0625 18:50:02.328537 3235 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 18:50:02.328773 kubelet[3235]: I0625 18:50:02.328761 3235 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:50:02.329274 kubelet[3235]: I0625 18:50:02.329224 3235 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 18:50:02.331242 kubelet[3235]: I0625 18:50:02.331209 3235 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
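The restarted kubelet (PID 3235) reports that client certificate rotation is on and loads its cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem. A small sketch for checking when that rotated client certificate expires on a node like this one; it assumes the third-party `cryptography` package is installed and that the file is readable.

```python
from cryptography import x509

CERT_PATH = "/var/lib/kubelet/pki/kubelet-client-current.pem"

def kubelet_cert_expiry(path=CERT_PATH):
    """Return (subject, notAfter) of the kubelet client certificate."""
    with open(path) as f:
        pem = f.read()
    # The file holds both the certificate and the key; isolate the certificate block.
    start = pem.index("-----BEGIN CERTIFICATE-----")
    end = pem.index("-----END CERTIFICATE-----") + len("-----END CERTIFICATE-----")
    cert = x509.load_pem_x509_certificate(pem[start:end].encode())
    return cert.subject.rfc4514_string(), cert.not_valid_after

if __name__ == "__main__":
    subject, expires = kubelet_cert_expiry()
    print(f"{subject} expires {expires}")
```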
Jun 25 18:50:02.333091 kubelet[3235]: I0625 18:50:02.333066 3235 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:50:02.342551 kubelet[3235]: I0625 18:50:02.342481 3235 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:50:02.342811 kubelet[3235]: I0625 18:50:02.342775 3235 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:50:02.343006 kubelet[3235]: I0625 18:50:02.342809 3235 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4012.0.0-a-c5aaeb7e49","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:50:02.343142 kubelet[3235]: I0625 18:50:02.343030 3235 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:50:02.343142 kubelet[3235]: I0625 18:50:02.343045 3235 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:50:02.343142 kubelet[3235]: I0625 18:50:02.343097 3235 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:50:02.343264 kubelet[3235]: I0625 18:50:02.343201 3235 kubelet.go:400] "Attempting to sync node with API server" Jun 25 18:50:02.343264 kubelet[3235]: I0625 18:50:02.343215 3235 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:50:02.343264 kubelet[3235]: I0625 18:50:02.343241 3235 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:50:02.343264 kubelet[3235]: I0625 18:50:02.343261 3235 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:50:02.345914 kubelet[3235]: I0625 18:50:02.345028 3235 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:50:02.345914 kubelet[3235]: I0625 18:50:02.345218 3235 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:50:02.345914 kubelet[3235]: I0625 18:50:02.345642 3235 server.go:1264] "Started kubelet" Jun 25 18:50:02.352437 
kubelet[3235]: I0625 18:50:02.352393 3235 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:50:02.352980 kubelet[3235]: I0625 18:50:02.352847 3235 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:50:02.353221 kubelet[3235]: I0625 18:50:02.353199 3235 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:50:02.353553 kubelet[3235]: I0625 18:50:02.353533 3235 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:50:02.354426 kubelet[3235]: I0625 18:50:02.354410 3235 server.go:455] "Adding debug handlers to kubelet server" Jun 25 18:50:02.365254 kubelet[3235]: I0625 18:50:02.365231 3235 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:50:02.365918 kubelet[3235]: I0625 18:50:02.365671 3235 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 18:50:02.365918 kubelet[3235]: I0625 18:50:02.365822 3235 reconciler.go:26] "Reconciler: start to sync state" Jun 25 18:50:02.388951 kubelet[3235]: I0625 18:50:02.388920 3235 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:50:02.390014 kubelet[3235]: I0625 18:50:02.389262 3235 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:50:02.390476 kubelet[3235]: E0625 18:50:02.390454 3235 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:50:02.392549 kubelet[3235]: I0625 18:50:02.392523 3235 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:50:02.394957 kubelet[3235]: I0625 18:50:02.394933 3235 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:50:02.395038 kubelet[3235]: I0625 18:50:02.394972 3235 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:50:02.395038 kubelet[3235]: I0625 18:50:02.394996 3235 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 18:50:02.395110 kubelet[3235]: E0625 18:50:02.395038 3235 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:50:02.397896 kubelet[3235]: I0625 18:50:02.395944 3235 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:50:02.472828 kubelet[3235]: I0625 18:50:02.472799 3235 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.477822 kubelet[3235]: I0625 18:50:02.477798 3235 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:50:02.478024 kubelet[3235]: I0625 18:50:02.478008 3235 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:50:02.478115 kubelet[3235]: I0625 18:50:02.478105 3235 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:50:02.478564 kubelet[3235]: I0625 18:50:02.478543 3235 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:50:02.479203 kubelet[3235]: I0625 18:50:02.479158 3235 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:50:02.479294 kubelet[3235]: I0625 18:50:02.479286 3235 policy_none.go:49] "None policy: Start" Jun 25 18:50:02.481217 kubelet[3235]: I0625 18:50:02.481202 3235 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:50:02.481333 kubelet[3235]: I0625 18:50:02.481323 3235 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:50:02.481626 kubelet[3235]: I0625 18:50:02.481608 3235 state_mem.go:75] "Updated machine memory state" Jun 25 18:50:02.489847 kubelet[3235]: I0625 18:50:02.489825 3235 kubelet_node_status.go:112] "Node was previously registered" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.490097 kubelet[3235]: I0625 18:50:02.490082 3235 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.496864 kubelet[3235]: E0625 18:50:02.496441 3235 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:50:02.501975 kubelet[3235]: I0625 18:50:02.501790 3235 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:50:02.503200 kubelet[3235]: I0625 18:50:02.503152 3235 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 18:50:02.503291 kubelet[3235]: I0625 18:50:02.503276 3235 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:50:02.697635 kubelet[3235]: I0625 18:50:02.697213 3235 topology_manager.go:215] "Topology Admit Handler" podUID="c111604c4abfb69ba73eae50b3553fb4" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.697635 kubelet[3235]: I0625 18:50:02.697379 3235 topology_manager.go:215] "Topology Admit Handler" podUID="57499cf3a89df435571a938e1489f479" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.697635 kubelet[3235]: I0625 18:50:02.697462 3235 topology_manager.go:215] "Topology Admit Handler" podUID="9ec7a8f2b953c18e7890d61185164baa" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-a-c5aaeb7e49" Jun 
25 18:50:02.705159 kubelet[3235]: W0625 18:50:02.704687 3235 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:50:02.705159 kubelet[3235]: W0625 18:50:02.704755 3235 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:50:02.705159 kubelet[3235]: W0625 18:50:02.704979 3235 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:50:02.705159 kubelet[3235]: E0625 18:50:02.705039 3235 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4012.0.0-a-c5aaeb7e49\" already exists" pod="kube-system/kube-scheduler-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.866892 kubelet[3235]: I0625 18:50:02.866841 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57499cf3a89df435571a938e1489f479-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"57499cf3a89df435571a938e1489f479\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.867059 kubelet[3235]: I0625 18:50:02.866941 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57499cf3a89df435571a938e1489f479-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"57499cf3a89df435571a938e1489f479\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.867059 kubelet[3235]: I0625 18:50:02.866975 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c111604c4abfb69ba73eae50b3553fb4-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"c111604c4abfb69ba73eae50b3553fb4\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.867059 kubelet[3235]: I0625 18:50:02.867003 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c111604c4abfb69ba73eae50b3553fb4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"c111604c4abfb69ba73eae50b3553fb4\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.867059 kubelet[3235]: I0625 18:50:02.867026 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57499cf3a89df435571a938e1489f479-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"57499cf3a89df435571a938e1489f479\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.867059 kubelet[3235]: I0625 18:50:02.867048 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ec7a8f2b953c18e7890d61185164baa-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"9ec7a8f2b953c18e7890d61185164baa\") " pod="kube-system/kube-scheduler-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.867250 kubelet[3235]: I0625 18:50:02.867069 3235 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c111604c4abfb69ba73eae50b3553fb4-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"c111604c4abfb69ba73eae50b3553fb4\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.867250 kubelet[3235]: I0625 18:50:02.867092 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/57499cf3a89df435571a938e1489f479-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"57499cf3a89df435571a938e1489f479\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:02.867250 kubelet[3235]: I0625 18:50:02.867116 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57499cf3a89df435571a938e1489f479-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49\" (UID: \"57499cf3a89df435571a938e1489f479\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:03.344630 kubelet[3235]: I0625 18:50:03.344579 3235 apiserver.go:52] "Watching apiserver" Jun 25 18:50:03.365995 kubelet[3235]: I0625 18:50:03.365961 3235 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 18:50:03.500887 kubelet[3235]: I0625 18:50:03.500768 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012.0.0-a-c5aaeb7e49" podStartSLOduration=3.500743413 podStartE2EDuration="3.500743413s" podCreationTimestamp="2024-06-25 18:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:50:03.488651484 +0000 UTC m=+1.731019616" watchObservedRunningTime="2024-06-25 18:50:03.500743413 +0000 UTC m=+1.743111645" Jun 25 18:50:03.544662 kubelet[3235]: I0625 18:50:03.544593 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.0.0-a-c5aaeb7e49" podStartSLOduration=1.544561441 podStartE2EDuration="1.544561441s" podCreationTimestamp="2024-06-25 18:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:50:03.501605929 +0000 UTC m=+1.743974161" watchObservedRunningTime="2024-06-25 18:50:03.544561441 +0000 UTC m=+1.786929673" Jun 25 18:50:03.595477 kubelet[3235]: I0625 18:50:03.595113 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012.0.0-a-c5aaeb7e49" podStartSLOduration=1.595089396 podStartE2EDuration="1.595089396s" podCreationTimestamp="2024-06-25 18:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:50:03.546060769 +0000 UTC m=+1.788428901" watchObservedRunningTime="2024-06-25 18:50:03.595089396 +0000 UTC m=+1.837457528" Jun 25 18:50:07.405840 sudo[2343]: pam_unix(sudo:session): session closed for user root Jun 25 18:50:07.510620 sshd[2340]: pam_unix(sshd:session): session closed for user core Jun 25 18:50:07.515387 systemd[1]: sshd@6-10.200.8.15:22-10.200.16.10:38732.service: Deactivated successfully. 
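The node config dumped during the kubelet restart above (container_manager_linux.go:270) lists HardEvictionThresholds of memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15% and imagefs.inodesFree < 5%. A simplified sketch of how such signals compare against capacity; this only illustrates the threshold semantics and is not the kubelet's actual eviction manager, and the sample numbers are hypothetical.

```python
# Thresholds as logged in the kubelet's node config (absolute quantity or fraction of capacity).
HARD_EVICTION = {
    "memory.available":   {"quantity": 100 * 1024**2},  # 100Mi
    "nodefs.available":   {"percentage": 0.10},
    "nodefs.inodesFree":  {"percentage": 0.05},
    "imagefs.available":  {"percentage": 0.15},
    "imagefs.inodesFree": {"percentage": 0.05},
}

def signals_under_pressure(observed, capacity):
    """Return the signals whose observed available value falls below the hard threshold."""
    pressured = []
    for signal, threshold in HARD_EVICTION.items():
        if signal not in observed:
            continue
        if "quantity" in threshold:
            limit = threshold["quantity"]
        else:
            limit = threshold["percentage"] * capacity[signal]
        if observed[signal] < limit:
            pressured.append(signal)
    return pressured

if __name__ == "__main__":
    # Hypothetical numbers purely for illustration.
    capacity = {"memory.available": 16 * 1024**3, "nodefs.available": 100 * 1024**3}
    observed = {"memory.available": 80 * 1024**2, "nodefs.available": 30 * 1024**3}
    print(signals_under_pressure(observed, capacity))  # ['memory.available']
```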
Jun 25 18:50:07.517441 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:50:07.517661 systemd[1]: session-9.scope: Consumed 4.409s CPU time, 137.8M memory peak, 0B memory swap peak. Jun 25 18:50:07.518383 systemd-logind[1680]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:50:07.519644 systemd-logind[1680]: Removed session 9. Jun 25 18:50:15.860600 kubelet[3235]: I0625 18:50:15.860542 3235 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:50:15.861773 kubelet[3235]: I0625 18:50:15.861376 3235 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:50:15.861819 containerd[1702]: time="2024-06-25T18:50:15.861044641Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:50:16.230837 kubelet[3235]: I0625 18:50:16.230637 3235 topology_manager.go:215] "Topology Admit Handler" podUID="bb7208d6-af30-4521-8aaa-5a544ac8331e" podNamespace="kube-system" podName="kube-proxy-qcsg9" Jun 25 18:50:16.242565 systemd[1]: Created slice kubepods-besteffort-podbb7208d6_af30_4521_8aaa_5a544ac8331e.slice - libcontainer container kubepods-besteffort-podbb7208d6_af30_4521_8aaa_5a544ac8331e.slice. Jun 25 18:50:16.258568 kubelet[3235]: I0625 18:50:16.258529 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bb7208d6-af30-4521-8aaa-5a544ac8331e-kube-proxy\") pod \"kube-proxy-qcsg9\" (UID: \"bb7208d6-af30-4521-8aaa-5a544ac8331e\") " pod="kube-system/kube-proxy-qcsg9" Jun 25 18:50:16.258711 kubelet[3235]: I0625 18:50:16.258580 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb7208d6-af30-4521-8aaa-5a544ac8331e-xtables-lock\") pod \"kube-proxy-qcsg9\" (UID: \"bb7208d6-af30-4521-8aaa-5a544ac8331e\") " pod="kube-system/kube-proxy-qcsg9" Jun 25 18:50:16.258711 kubelet[3235]: I0625 18:50:16.258634 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb7208d6-af30-4521-8aaa-5a544ac8331e-lib-modules\") pod \"kube-proxy-qcsg9\" (UID: \"bb7208d6-af30-4521-8aaa-5a544ac8331e\") " pod="kube-system/kube-proxy-qcsg9" Jun 25 18:50:16.258711 kubelet[3235]: I0625 18:50:16.258676 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq46n\" (UniqueName: \"kubernetes.io/projected/bb7208d6-af30-4521-8aaa-5a544ac8331e-kube-api-access-rq46n\") pod \"kube-proxy-qcsg9\" (UID: \"bb7208d6-af30-4521-8aaa-5a544ac8331e\") " pod="kube-system/kube-proxy-qcsg9" Jun 25 18:50:16.365289 kubelet[3235]: E0625 18:50:16.365219 3235 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 25 18:50:16.365709 kubelet[3235]: E0625 18:50:16.365486 3235 projected.go:200] Error preparing data for projected volume kube-api-access-rq46n for pod kube-system/kube-proxy-qcsg9: configmap "kube-root-ca.crt" not found Jun 25 18:50:16.365709 kubelet[3235]: E0625 18:50:16.365605 3235 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bb7208d6-af30-4521-8aaa-5a544ac8331e-kube-api-access-rq46n podName:bb7208d6-af30-4521-8aaa-5a544ac8331e nodeName:}" failed. 
No retries permitted until 2024-06-25 18:50:16.865578461 +0000 UTC m=+15.107946593 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rq46n" (UniqueName: "kubernetes.io/projected/bb7208d6-af30-4521-8aaa-5a544ac8331e-kube-api-access-rq46n") pod "kube-proxy-qcsg9" (UID: "bb7208d6-af30-4521-8aaa-5a544ac8331e") : configmap "kube-root-ca.crt" not found Jun 25 18:50:16.957898 kubelet[3235]: I0625 18:50:16.955754 3235 topology_manager.go:215] "Topology Admit Handler" podUID="6b69e6a5-f799-4091-aa2d-8e70d90a421e" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-qnssj" Jun 25 18:50:16.964287 kubelet[3235]: I0625 18:50:16.964241 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9vp2\" (UniqueName: \"kubernetes.io/projected/6b69e6a5-f799-4091-aa2d-8e70d90a421e-kube-api-access-q9vp2\") pod \"tigera-operator-76ff79f7fd-qnssj\" (UID: \"6b69e6a5-f799-4091-aa2d-8e70d90a421e\") " pod="tigera-operator/tigera-operator-76ff79f7fd-qnssj" Jun 25 18:50:16.964428 kubelet[3235]: I0625 18:50:16.964312 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6b69e6a5-f799-4091-aa2d-8e70d90a421e-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-qnssj\" (UID: \"6b69e6a5-f799-4091-aa2d-8e70d90a421e\") " pod="tigera-operator/tigera-operator-76ff79f7fd-qnssj" Jun 25 18:50:16.969719 systemd[1]: Created slice kubepods-besteffort-pod6b69e6a5_f799_4091_aa2d_8e70d90a421e.slice - libcontainer container kubepods-besteffort-pod6b69e6a5_f799_4091_aa2d_8e70d90a421e.slice. Jun 25 18:50:17.150393 containerd[1702]: time="2024-06-25T18:50:17.150341512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qcsg9,Uid:bb7208d6-af30-4521-8aaa-5a544ac8331e,Namespace:kube-system,Attempt:0,}" Jun 25 18:50:17.194152 containerd[1702]: time="2024-06-25T18:50:17.193999627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:50:17.194152 containerd[1702]: time="2024-06-25T18:50:17.194078928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:17.194152 containerd[1702]: time="2024-06-25T18:50:17.194107129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:50:17.194465 containerd[1702]: time="2024-06-25T18:50:17.194171130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:17.223041 systemd[1]: Started cri-containerd-7a9fe1654c055250d6a371ee47b5e1aa512300b003e6e886168dd644c91c7871.scope - libcontainer container 7a9fe1654c055250d6a371ee47b5e1aa512300b003e6e886168dd644c91c7871. 
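Just before the kube-proxy pod is admitted, the kubelet above picks up the node's Pod CIDR (192.168.0.0/24) and notes that no CNI config is present yet. A hedged sketch for reading that assigned CIDR back from the API with the official Kubernetes Python client; the node name is the one in these logs, and it assumes a kubeconfig is reachable by the client.

```python
from kubernetes import client, config

NODE = "ci-4012.0.0-a-c5aaeb7e49"

def node_pod_cidr(node_name=NODE):
    """Return the Pod CIDR the control plane assigned to the node."""
    config.load_kube_config()            # or config.load_incluster_config() inside a pod
    node = client.CoreV1Api().read_node(node_name)
    return node.spec.pod_cidr            # e.g. "192.168.0.0/24", as logged above

if __name__ == "__main__":
    print(node_pod_cidr())
```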
Jun 25 18:50:17.245435 containerd[1702]: time="2024-06-25T18:50:17.245388886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qcsg9,Uid:bb7208d6-af30-4521-8aaa-5a544ac8331e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a9fe1654c055250d6a371ee47b5e1aa512300b003e6e886168dd644c91c7871\"" Jun 25 18:50:17.248755 containerd[1702]: time="2024-06-25T18:50:17.248556946Z" level=info msg="CreateContainer within sandbox \"7a9fe1654c055250d6a371ee47b5e1aa512300b003e6e886168dd644c91c7871\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:50:17.280038 containerd[1702]: time="2024-06-25T18:50:17.279985132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-qnssj,Uid:6b69e6a5-f799-4091-aa2d-8e70d90a421e,Namespace:tigera-operator,Attempt:0,}" Jun 25 18:50:17.315974 containerd[1702]: time="2024-06-25T18:50:17.315927503Z" level=info msg="CreateContainer within sandbox \"7a9fe1654c055250d6a371ee47b5e1aa512300b003e6e886168dd644c91c7871\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9ab68eecdc6fd241faf84855b712f2377fab69d59753f1891fc1e4645944be7a\"" Jun 25 18:50:17.318050 containerd[1702]: time="2024-06-25T18:50:17.316642417Z" level=info msg="StartContainer for \"9ab68eecdc6fd241faf84855b712f2377fab69d59753f1891fc1e4645944be7a\"" Jun 25 18:50:17.346065 systemd[1]: Started cri-containerd-9ab68eecdc6fd241faf84855b712f2377fab69d59753f1891fc1e4645944be7a.scope - libcontainer container 9ab68eecdc6fd241faf84855b712f2377fab69d59753f1891fc1e4645944be7a. Jun 25 18:50:17.387913 containerd[1702]: time="2024-06-25T18:50:17.386113714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:50:17.387913 containerd[1702]: time="2024-06-25T18:50:17.386172315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:17.387913 containerd[1702]: time="2024-06-25T18:50:17.386192315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:50:17.387913 containerd[1702]: time="2024-06-25T18:50:17.386207315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:17.388481 containerd[1702]: time="2024-06-25T18:50:17.388444057Z" level=info msg="StartContainer for \"9ab68eecdc6fd241faf84855b712f2377fab69d59753f1891fc1e4645944be7a\" returns successfully" Jun 25 18:50:17.413431 systemd[1]: Started cri-containerd-01f625643f181f7bfa035d0b050cd616464e04a45559b63f6c04b8be74c0b8cf.scope - libcontainer container 01f625643f181f7bfa035d0b050cd616464e04a45559b63f6c04b8be74c0b8cf. 
Jun 25 18:50:17.488830 kubelet[3235]: I0625 18:50:17.488616 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qcsg9" podStartSLOduration=1.488396723 podStartE2EDuration="1.488396723s" podCreationTimestamp="2024-06-25 18:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:50:17.486758493 +0000 UTC m=+15.729126625" watchObservedRunningTime="2024-06-25 18:50:17.488396723 +0000 UTC m=+15.730764955" Jun 25 18:50:17.490671 containerd[1702]: time="2024-06-25T18:50:17.490527363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-qnssj,Uid:6b69e6a5-f799-4091-aa2d-8e70d90a421e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"01f625643f181f7bfa035d0b050cd616464e04a45559b63f6c04b8be74c0b8cf\"" Jun 25 18:50:17.494511 containerd[1702]: time="2024-06-25T18:50:17.494412936Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 18:50:19.217563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657437558.mount: Deactivated successfully. Jun 25 18:50:19.986251 containerd[1702]: time="2024-06-25T18:50:19.986196857Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:19.991447 containerd[1702]: time="2024-06-25T18:50:19.991383953Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076076" Jun 25 18:50:19.996926 containerd[1702]: time="2024-06-25T18:50:19.996856456Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:20.002304 containerd[1702]: time="2024-06-25T18:50:20.002247856Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:20.003502 containerd[1702]: time="2024-06-25T18:50:20.002967670Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.508514333s" Jun 25 18:50:20.003502 containerd[1702]: time="2024-06-25T18:50:20.003006170Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 18:50:20.005751 containerd[1702]: time="2024-06-25T18:50:20.005287213Z" level=info msg="CreateContainer within sandbox \"01f625643f181f7bfa035d0b050cd616464e04a45559b63f6c04b8be74c0b8cf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 18:50:20.048031 containerd[1702]: time="2024-06-25T18:50:20.047985410Z" level=info msg="CreateContainer within sandbox \"01f625643f181f7bfa035d0b050cd616464e04a45559b63f6c04b8be74c0b8cf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"188601a25da20b6c280307513c7ee437bd8376dd654ef9df40dd72f2df29afe1\"" Jun 25 18:50:20.048689 containerd[1702]: time="2024-06-25T18:50:20.048626922Z" level=info msg="StartContainer for \"188601a25da20b6c280307513c7ee437bd8376dd654ef9df40dd72f2df29afe1\"" 
Jun 25 18:50:20.077054 systemd[1]: Started cri-containerd-188601a25da20b6c280307513c7ee437bd8376dd654ef9df40dd72f2df29afe1.scope - libcontainer container 188601a25da20b6c280307513c7ee437bd8376dd654ef9df40dd72f2df29afe1. Jun 25 18:50:20.109569 containerd[1702]: time="2024-06-25T18:50:20.109449958Z" level=info msg="StartContainer for \"188601a25da20b6c280307513c7ee437bd8376dd654ef9df40dd72f2df29afe1\" returns successfully" Jun 25 18:50:23.167693 kubelet[3235]: I0625 18:50:23.164134 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-qnssj" podStartSLOduration=4.65348733 podStartE2EDuration="7.164110503s" podCreationTimestamp="2024-06-25 18:50:16 +0000 UTC" firstStartedPulling="2024-06-25 18:50:17.493277314 +0000 UTC m=+15.735645446" lastFinishedPulling="2024-06-25 18:50:20.003900487 +0000 UTC m=+18.246268619" observedRunningTime="2024-06-25 18:50:20.492255305 +0000 UTC m=+18.734623437" watchObservedRunningTime="2024-06-25 18:50:23.164110503 +0000 UTC m=+21.406478735" Jun 25 18:50:23.167693 kubelet[3235]: I0625 18:50:23.164411 3235 topology_manager.go:215] "Topology Admit Handler" podUID="75358c6b-8a1e-4f35-b1bb-5719efbeaed8" podNamespace="calico-system" podName="calico-typha-6c4b8445b7-s64zf" Jun 25 18:50:23.181049 systemd[1]: Created slice kubepods-besteffort-pod75358c6b_8a1e_4f35_b1bb_5719efbeaed8.slice - libcontainer container kubepods-besteffort-pod75358c6b_8a1e_4f35_b1bb_5719efbeaed8.slice. Jun 25 18:50:23.210700 kubelet[3235]: I0625 18:50:23.210652 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/75358c6b-8a1e-4f35-b1bb-5719efbeaed8-typha-certs\") pod \"calico-typha-6c4b8445b7-s64zf\" (UID: \"75358c6b-8a1e-4f35-b1bb-5719efbeaed8\") " pod="calico-system/calico-typha-6c4b8445b7-s64zf" Jun 25 18:50:23.210700 kubelet[3235]: I0625 18:50:23.210709 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75358c6b-8a1e-4f35-b1bb-5719efbeaed8-tigera-ca-bundle\") pod \"calico-typha-6c4b8445b7-s64zf\" (UID: \"75358c6b-8a1e-4f35-b1bb-5719efbeaed8\") " pod="calico-system/calico-typha-6c4b8445b7-s64zf" Jun 25 18:50:23.210957 kubelet[3235]: I0625 18:50:23.210734 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4bj6\" (UniqueName: \"kubernetes.io/projected/75358c6b-8a1e-4f35-b1bb-5719efbeaed8-kube-api-access-q4bj6\") pod \"calico-typha-6c4b8445b7-s64zf\" (UID: \"75358c6b-8a1e-4f35-b1bb-5719efbeaed8\") " pod="calico-system/calico-typha-6c4b8445b7-s64zf" Jun 25 18:50:23.251774 kubelet[3235]: I0625 18:50:23.251720 3235 topology_manager.go:215] "Topology Admit Handler" podUID="40af61f3-39ee-49e8-9983-7645d139a77e" podNamespace="calico-system" podName="calico-node-vpdqr" Jun 25 18:50:23.261250 systemd[1]: Created slice kubepods-besteffort-pod40af61f3_39ee_49e8_9983_7645d139a77e.slice - libcontainer container kubepods-besteffort-pod40af61f3_39ee_49e8_9983_7645d139a77e.slice. 
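The pod_startup_latency_tracker entry for tigera-operator above reports both an end-to-end startup duration and a shorter SLO duration that excludes image-pull time. The relationship can be checked directly from the logged timestamps; the small sketch below reads those numbers back, so treat it as a reading of this log rather than a statement about the tracker's implementation.

```python
from datetime import datetime, timezone

def ts(stamp: str) -> datetime:
    """Parse the 'YYYY-MM-DD HH:MM:SS[.frac] +0000 UTC' stamps used in these entries."""
    base = stamp.replace(" +0000 UTC", "")
    if "." in base:
        whole, frac = base.split(".")
        base = f"{whole}.{frac[:6]}"          # datetime keeps microseconds, so trim nanoseconds
        fmt = "%Y-%m-%d %H:%M:%S.%f"
    else:
        fmt = "%Y-%m-%d %H:%M:%S"
    return datetime.strptime(base, fmt).replace(tzinfo=timezone.utc)

created   = ts("2024-06-25 18:50:16 +0000 UTC")            # podCreationTimestamp
pull_from = ts("2024-06-25 18:50:17.493277314 +0000 UTC")  # firstStartedPulling
pull_to   = ts("2024-06-25 18:50:20.003900487 +0000 UTC")  # lastFinishedPulling
running   = ts("2024-06-25 18:50:23.164110503 +0000 UTC")  # observedRunningTime

e2e = (running - created).total_seconds()            # ~7.164s, the logged podStartE2EDuration
slo = e2e - (pull_to - pull_from).total_seconds()    # ~4.653s, the logged podStartSLOduration
print(f"e2e={e2e:.3f}s slo={slo:.3f}s")
```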
Jun 25 18:50:23.311346 kubelet[3235]: I0625 18:50:23.311295 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-flexvol-driver-host\") pod \"calico-node-vpdqr\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " pod="calico-system/calico-node-vpdqr" Jun 25 18:50:23.311346 kubelet[3235]: I0625 18:50:23.311358 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-cni-bin-dir\") pod \"calico-node-vpdqr\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " pod="calico-system/calico-node-vpdqr" Jun 25 18:50:23.311608 kubelet[3235]: I0625 18:50:23.311391 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-cni-net-dir\") pod \"calico-node-vpdqr\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " pod="calico-system/calico-node-vpdqr" Jun 25 18:50:23.311608 kubelet[3235]: I0625 18:50:23.311415 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-cni-log-dir\") pod \"calico-node-vpdqr\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " pod="calico-system/calico-node-vpdqr" Jun 25 18:50:23.311608 kubelet[3235]: I0625 18:50:23.311439 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-xtables-lock\") pod \"calico-node-vpdqr\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " pod="calico-system/calico-node-vpdqr" Jun 25 18:50:23.311608 kubelet[3235]: I0625 18:50:23.311463 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-policysync\") pod \"calico-node-vpdqr\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " pod="calico-system/calico-node-vpdqr" Jun 25 18:50:23.311608 kubelet[3235]: I0625 18:50:23.311507 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-lib-modules\") pod \"calico-node-vpdqr\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " pod="calico-system/calico-node-vpdqr" Jun 25 18:50:23.311893 kubelet[3235]: I0625 18:50:23.311537 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c26np\" (UniqueName: \"kubernetes.io/projected/40af61f3-39ee-49e8-9983-7645d139a77e-kube-api-access-c26np\") pod \"calico-node-vpdqr\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " pod="calico-system/calico-node-vpdqr" Jun 25 18:50:23.311893 kubelet[3235]: I0625 18:50:23.311585 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/40af61f3-39ee-49e8-9983-7645d139a77e-node-certs\") pod \"calico-node-vpdqr\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " pod="calico-system/calico-node-vpdqr" Jun 25 18:50:23.311893 kubelet[3235]: I0625 18:50:23.311616 3235 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-var-run-calico\") pod \"calico-node-vpdqr\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " pod="calico-system/calico-node-vpdqr" Jun 25 18:50:23.311893 kubelet[3235]: I0625 18:50:23.311645 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-var-lib-calico\") pod \"calico-node-vpdqr\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " pod="calico-system/calico-node-vpdqr" Jun 25 18:50:23.311893 kubelet[3235]: I0625 18:50:23.311693 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40af61f3-39ee-49e8-9983-7645d139a77e-tigera-ca-bundle\") pod \"calico-node-vpdqr\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " pod="calico-system/calico-node-vpdqr" Jun 25 18:50:23.366228 kubelet[3235]: I0625 18:50:23.365071 3235 topology_manager.go:215] "Topology Admit Handler" podUID="0e18d57e-4106-4db0-b549-0f6c4c21f68a" podNamespace="calico-system" podName="csi-node-driver-7hj8p" Jun 25 18:50:23.366228 kubelet[3235]: E0625 18:50:23.365446 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hj8p" podUID="0e18d57e-4106-4db0-b549-0f6c4c21f68a" Jun 25 18:50:23.413671 kubelet[3235]: I0625 18:50:23.412471 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0e18d57e-4106-4db0-b549-0f6c4c21f68a-varrun\") pod \"csi-node-driver-7hj8p\" (UID: \"0e18d57e-4106-4db0-b549-0f6c4c21f68a\") " pod="calico-system/csi-node-driver-7hj8p" Jun 25 18:50:23.413671 kubelet[3235]: I0625 18:50:23.412526 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0e18d57e-4106-4db0-b549-0f6c4c21f68a-socket-dir\") pod \"csi-node-driver-7hj8p\" (UID: \"0e18d57e-4106-4db0-b549-0f6c4c21f68a\") " pod="calico-system/csi-node-driver-7hj8p" Jun 25 18:50:23.413671 kubelet[3235]: I0625 18:50:23.412612 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e18d57e-4106-4db0-b549-0f6c4c21f68a-kubelet-dir\") pod \"csi-node-driver-7hj8p\" (UID: \"0e18d57e-4106-4db0-b549-0f6c4c21f68a\") " pod="calico-system/csi-node-driver-7hj8p" Jun 25 18:50:23.413671 kubelet[3235]: I0625 18:50:23.412651 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0e18d57e-4106-4db0-b549-0f6c4c21f68a-registration-dir\") pod \"csi-node-driver-7hj8p\" (UID: \"0e18d57e-4106-4db0-b549-0f6c4c21f68a\") " pod="calico-system/csi-node-driver-7hj8p" Jun 25 18:50:23.413671 kubelet[3235]: I0625 18:50:23.412678 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgrv8\" (UniqueName: \"kubernetes.io/projected/0e18d57e-4106-4db0-b549-0f6c4c21f68a-kube-api-access-cgrv8\") pod \"csi-node-driver-7hj8p\" (UID: 
\"0e18d57e-4106-4db0-b549-0f6c4c21f68a\") " pod="calico-system/csi-node-driver-7hj8p" Jun 25 18:50:23.415802 kubelet[3235]: E0625 18:50:23.415756 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.415802 kubelet[3235]: W0625 18:50:23.415795 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.415994 kubelet[3235]: E0625 18:50:23.415817 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.419073 kubelet[3235]: E0625 18:50:23.418990 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.419073 kubelet[3235]: W0625 18:50:23.419012 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.419073 kubelet[3235]: E0625 18:50:23.419032 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.422761 kubelet[3235]: E0625 18:50:23.420794 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.422761 kubelet[3235]: W0625 18:50:23.420810 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.422761 kubelet[3235]: E0625 18:50:23.420826 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.422761 kubelet[3235]: E0625 18:50:23.422447 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.422761 kubelet[3235]: W0625 18:50:23.422460 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.422761 kubelet[3235]: E0625 18:50:23.422475 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.424428 kubelet[3235]: E0625 18:50:23.424412 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.424529 kubelet[3235]: W0625 18:50:23.424517 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.424614 kubelet[3235]: E0625 18:50:23.424602 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:23.441070 kubelet[3235]: E0625 18:50:23.441042 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.441070 kubelet[3235]: W0625 18:50:23.441065 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.441240 kubelet[3235]: E0625 18:50:23.441085 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.484891 containerd[1702]: time="2024-06-25T18:50:23.484836058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c4b8445b7-s64zf,Uid:75358c6b-8a1e-4f35-b1bb-5719efbeaed8,Namespace:calico-system,Attempt:0,}" Jun 25 18:50:23.513904 kubelet[3235]: E0625 18:50:23.513434 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.513904 kubelet[3235]: W0625 18:50:23.513468 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.513904 kubelet[3235]: E0625 18:50:23.513492 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.513904 kubelet[3235]: E0625 18:50:23.513789 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.513904 kubelet[3235]: W0625 18:50:23.513798 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.513904 kubelet[3235]: E0625 18:50:23.513820 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.514272 kubelet[3235]: E0625 18:50:23.514220 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.514272 kubelet[3235]: W0625 18:50:23.514233 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.514385 kubelet[3235]: E0625 18:50:23.514367 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:23.514765 kubelet[3235]: E0625 18:50:23.514729 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.514765 kubelet[3235]: W0625 18:50:23.514762 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.514940 kubelet[3235]: E0625 18:50:23.514783 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.515608 kubelet[3235]: E0625 18:50:23.515065 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.515608 kubelet[3235]: W0625 18:50:23.515078 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.515608 kubelet[3235]: E0625 18:50:23.515091 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.515608 kubelet[3235]: E0625 18:50:23.515372 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.515608 kubelet[3235]: W0625 18:50:23.515383 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.515853 kubelet[3235]: E0625 18:50:23.515619 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.515853 kubelet[3235]: E0625 18:50:23.515707 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.515853 kubelet[3235]: W0625 18:50:23.515721 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.515853 kubelet[3235]: E0625 18:50:23.515779 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.516088 kubelet[3235]: E0625 18:50:23.516067 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.516138 kubelet[3235]: W0625 18:50:23.516104 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.516138 kubelet[3235]: E0625 18:50:23.516122 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:23.516470 kubelet[3235]: E0625 18:50:23.516454 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.516470 kubelet[3235]: W0625 18:50:23.516471 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.516572 kubelet[3235]: E0625 18:50:23.516489 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.516899 kubelet[3235]: E0625 18:50:23.516763 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.516899 kubelet[3235]: W0625 18:50:23.516776 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.516899 kubelet[3235]: E0625 18:50:23.516795 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.517904 kubelet[3235]: E0625 18:50:23.517238 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.517904 kubelet[3235]: W0625 18:50:23.517251 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.517904 kubelet[3235]: E0625 18:50:23.517271 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.517904 kubelet[3235]: E0625 18:50:23.517520 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.517904 kubelet[3235]: W0625 18:50:23.517528 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.517904 kubelet[3235]: E0625 18:50:23.517565 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.520766 kubelet[3235]: E0625 18:50:23.520742 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.520766 kubelet[3235]: W0625 18:50:23.520766 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.520926 kubelet[3235]: E0625 18:50:23.520792 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:23.521066 kubelet[3235]: E0625 18:50:23.521041 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.521066 kubelet[3235]: W0625 18:50:23.521058 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.521946 kubelet[3235]: E0625 18:50:23.521142 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.521946 kubelet[3235]: E0625 18:50:23.521342 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.521946 kubelet[3235]: W0625 18:50:23.521352 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.521946 kubelet[3235]: E0625 18:50:23.521475 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.522181 kubelet[3235]: E0625 18:50:23.521983 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.522181 kubelet[3235]: W0625 18:50:23.521995 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.522181 kubelet[3235]: E0625 18:50:23.522080 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.525178 kubelet[3235]: E0625 18:50:23.524180 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.525178 kubelet[3235]: W0625 18:50:23.524192 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.525178 kubelet[3235]: E0625 18:50:23.524416 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.525178 kubelet[3235]: E0625 18:50:23.524572 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.525178 kubelet[3235]: W0625 18:50:23.524582 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.525178 kubelet[3235]: E0625 18:50:23.524630 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:23.525178 kubelet[3235]: E0625 18:50:23.524776 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.525178 kubelet[3235]: W0625 18:50:23.524783 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.525178 kubelet[3235]: E0625 18:50:23.525042 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.527247 kubelet[3235]: E0625 18:50:23.526336 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.527247 kubelet[3235]: W0625 18:50:23.526351 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.527247 kubelet[3235]: E0625 18:50:23.526369 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.527770 kubelet[3235]: E0625 18:50:23.527591 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.527770 kubelet[3235]: W0625 18:50:23.527606 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.527770 kubelet[3235]: E0625 18:50:23.527693 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.527979 kubelet[3235]: E0625 18:50:23.527858 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.527979 kubelet[3235]: W0625 18:50:23.527876 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.527979 kubelet[3235]: E0625 18:50:23.527903 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.530299 kubelet[3235]: E0625 18:50:23.529934 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.530299 kubelet[3235]: W0625 18:50:23.529948 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.530299 kubelet[3235]: E0625 18:50:23.529966 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:23.530299 kubelet[3235]: E0625 18:50:23.530205 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.530299 kubelet[3235]: W0625 18:50:23.530215 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.530841 kubelet[3235]: E0625 18:50:23.530530 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.531236 kubelet[3235]: E0625 18:50:23.531164 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.531236 kubelet[3235]: W0625 18:50:23.531179 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.531236 kubelet[3235]: E0625 18:50:23.531193 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.549051 kubelet[3235]: E0625 18:50:23.548828 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:23.549051 kubelet[3235]: W0625 18:50:23.548847 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:23.549051 kubelet[3235]: E0625 18:50:23.548865 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:23.556674 containerd[1702]: time="2024-06-25T18:50:23.556440442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:50:23.556674 containerd[1702]: time="2024-06-25T18:50:23.556508543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:23.556947 containerd[1702]: time="2024-06-25T18:50:23.556536644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:50:23.556947 containerd[1702]: time="2024-06-25T18:50:23.556559544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:23.567941 containerd[1702]: time="2024-06-25T18:50:23.567587411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vpdqr,Uid:40af61f3-39ee-49e8-9983-7645d139a77e,Namespace:calico-system,Attempt:0,}" Jun 25 18:50:23.583565 systemd[1]: Started cri-containerd-9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5.scope - libcontainer container 9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5. Jun 25 18:50:23.630720 containerd[1702]: time="2024-06-25T18:50:23.629775252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:50:23.630720 containerd[1702]: time="2024-06-25T18:50:23.629828353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:23.630720 containerd[1702]: time="2024-06-25T18:50:23.629853053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:50:23.630720 containerd[1702]: time="2024-06-25T18:50:23.629883054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:23.671706 systemd[1]: Started cri-containerd-a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681.scope - libcontainer container a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681. Jun 25 18:50:23.678743 containerd[1702]: time="2024-06-25T18:50:23.678661792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c4b8445b7-s64zf,Uid:75358c6b-8a1e-4f35-b1bb-5719efbeaed8,Namespace:calico-system,Attempt:0,} returns sandbox id \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\"" Jun 25 18:50:23.682331 containerd[1702]: time="2024-06-25T18:50:23.681558536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 18:50:23.707297 containerd[1702]: time="2024-06-25T18:50:23.707256925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vpdqr,Uid:40af61f3-39ee-49e8-9983-7645d139a77e,Namespace:calico-system,Attempt:0,} returns sandbox id \"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\"" Jun 25 18:50:24.398349 kubelet[3235]: E0625 18:50:24.396850 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hj8p" podUID="0e18d57e-4106-4db0-b549-0f6c4c21f68a" Jun 25 18:50:26.396107 kubelet[3235]: E0625 18:50:26.395684 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hj8p" podUID="0e18d57e-4106-4db0-b549-0f6c4c21f68a" Jun 25 18:50:28.032145 containerd[1702]: time="2024-06-25T18:50:28.032095295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:28.035377 containerd[1702]: time="2024-06-25T18:50:28.035274543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 18:50:28.045064 containerd[1702]: time="2024-06-25T18:50:28.044774287Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:28.055888 containerd[1702]: time="2024-06-25T18:50:28.054501234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:28.055888 containerd[1702]: time="2024-06-25T18:50:28.055513549Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 4.373912312s" Jun 25 18:50:28.055888 containerd[1702]: time="2024-06-25T18:50:28.055548050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 18:50:28.058396 containerd[1702]: time="2024-06-25T18:50:28.058361392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:50:28.075849 containerd[1702]: time="2024-06-25T18:50:28.075808757Z" level=info msg="CreateContainer within sandbox \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:50:28.137154 containerd[1702]: time="2024-06-25T18:50:28.137103384Z" level=info msg="CreateContainer within sandbox \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\"" Jun 25 18:50:28.138512 containerd[1702]: time="2024-06-25T18:50:28.137530091Z" level=info msg="StartContainer for \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\"" Jun 25 18:50:28.169453 systemd[1]: Started cri-containerd-0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf.scope - libcontainer container 0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf. Jun 25 18:50:28.222761 containerd[1702]: time="2024-06-25T18:50:28.222649279Z" level=info msg="StartContainer for \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\" returns successfully" Jun 25 18:50:28.395621 kubelet[3235]: E0625 18:50:28.395567 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hj8p" podUID="0e18d57e-4106-4db0-b549-0f6c4c21f68a" Jun 25 18:50:28.505516 containerd[1702]: time="2024-06-25T18:50:28.504726450Z" level=info msg="StopContainer for \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\" with timeout 300 (s)" Jun 25 18:50:28.505723 containerd[1702]: time="2024-06-25T18:50:28.505695464Z" level=info msg="Stop container \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\" with signal terminated" Jun 25 18:50:28.518540 systemd[1]: cri-containerd-0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf.scope: Deactivated successfully. Jun 25 18:50:29.064280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf-rootfs.mount: Deactivated successfully. 
Jun 25 18:50:29.702275 containerd[1702]: time="2024-06-25T18:50:29.702204643Z" level=info msg="shim disconnected" id=0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf namespace=k8s.io Jun 25 18:50:29.702275 containerd[1702]: time="2024-06-25T18:50:29.702266344Z" level=warning msg="cleaning up after shim disconnected" id=0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf namespace=k8s.io Jun 25 18:50:29.702275 containerd[1702]: time="2024-06-25T18:50:29.702277244Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:50:29.721253 containerd[1702]: time="2024-06-25T18:50:29.720778874Z" level=info msg="StopContainer for \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\" returns successfully" Jun 25 18:50:29.721629 containerd[1702]: time="2024-06-25T18:50:29.721598089Z" level=info msg="StopPodSandbox for \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\"" Jun 25 18:50:29.721908 containerd[1702]: time="2024-06-25T18:50:29.721647790Z" level=info msg="Container to stop \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:50:29.724583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5-shm.mount: Deactivated successfully. Jun 25 18:50:29.732146 systemd[1]: cri-containerd-9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5.scope: Deactivated successfully. Jun 25 18:50:29.753550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5-rootfs.mount: Deactivated successfully. Jun 25 18:50:29.766753 containerd[1702]: time="2024-06-25T18:50:29.766689293Z" level=info msg="shim disconnected" id=9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5 namespace=k8s.io Jun 25 18:50:29.767572 containerd[1702]: time="2024-06-25T18:50:29.766949498Z" level=warning msg="cleaning up after shim disconnected" id=9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5 namespace=k8s.io Jun 25 18:50:29.767572 containerd[1702]: time="2024-06-25T18:50:29.766973699Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:50:29.780150 containerd[1702]: time="2024-06-25T18:50:29.780112333Z" level=info msg="TearDown network for sandbox \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\" successfully" Jun 25 18:50:29.780150 containerd[1702]: time="2024-06-25T18:50:29.780142833Z" level=info msg="StopPodSandbox for \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\" returns successfully" Jun 25 18:50:29.803375 kubelet[3235]: I0625 18:50:29.802090 3235 topology_manager.go:215] "Topology Admit Handler" podUID="0f7e032d-6f0a-4649-a125-f0c286ed7947" podNamespace="calico-system" podName="calico-typha-5b878f84d8-s52qn" Jun 25 18:50:29.803375 kubelet[3235]: E0625 18:50:29.802161 3235 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75358c6b-8a1e-4f35-b1bb-5719efbeaed8" containerName="calico-typha" Jun 25 18:50:29.803375 kubelet[3235]: I0625 18:50:29.802198 3235 memory_manager.go:354] "RemoveStaleState removing state" podUID="75358c6b-8a1e-4f35-b1bb-5719efbeaed8" containerName="calico-typha" Jun 25 18:50:29.815923 systemd[1]: Created slice kubepods-besteffort-pod0f7e032d_6f0a_4649_a125_f0c286ed7947.slice - libcontainer container kubepods-besteffort-pod0f7e032d_6f0a_4649_a125_f0c286ed7947.slice. 
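The "Created slice kubepods-besteffort-pod....slice" entries above and at 18:50:23 reflect the kubelet's systemd cgroup naming, where the pod UID is embedded with its dashes rewritten as underscores because systemd uses "-" to express slice hierarchy. A small illustrative Go helper (hypothetical, not a kubelet API) reproduces the names seen in this log:

package main

import (
	"fmt"
	"strings"
)

// besteffortSliceName is an illustrative helper: build the slice name for a
// BestEffort pod by embedding the UID with "-" rewritten to "_".
func besteffortSliceName(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID of calico-typha-5b878f84d8-s52qn as reported above; prints
	// kubepods-besteffort-pod0f7e032d_6f0a_4649_a125_f0c286ed7947.slice
	fmt.Println(besteffortSliceName("0f7e032d-6f0a-4649-a125-f0c286ed7947"))
}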
Jun 25 18:50:29.853081 kubelet[3235]: E0625 18:50:29.853045 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.853081 kubelet[3235]: W0625 18:50:29.853070 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.853317 kubelet[3235]: E0625 18:50:29.853098 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.853366 kubelet[3235]: E0625 18:50:29.853341 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.853366 kubelet[3235]: W0625 18:50:29.853353 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.853470 kubelet[3235]: E0625 18:50:29.853368 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.853575 kubelet[3235]: E0625 18:50:29.853555 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.853575 kubelet[3235]: W0625 18:50:29.853570 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.853689 kubelet[3235]: E0625 18:50:29.853583 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.853857 kubelet[3235]: E0625 18:50:29.853818 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.853857 kubelet[3235]: W0625 18:50:29.853854 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.854044 kubelet[3235]: E0625 18:50:29.853886 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.854149 kubelet[3235]: E0625 18:50:29.854133 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.854149 kubelet[3235]: W0625 18:50:29.854147 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.854251 kubelet[3235]: E0625 18:50:29.854161 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:29.854373 kubelet[3235]: E0625 18:50:29.854359 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.854373 kubelet[3235]: W0625 18:50:29.854371 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.854482 kubelet[3235]: E0625 18:50:29.854384 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.854596 kubelet[3235]: E0625 18:50:29.854573 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.854596 kubelet[3235]: W0625 18:50:29.854592 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.854704 kubelet[3235]: E0625 18:50:29.854606 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.854823 kubelet[3235]: E0625 18:50:29.854788 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.854823 kubelet[3235]: W0625 18:50:29.854798 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.854823 kubelet[3235]: E0625 18:50:29.854811 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.855085 kubelet[3235]: E0625 18:50:29.855065 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.855085 kubelet[3235]: W0625 18:50:29.855080 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.855212 kubelet[3235]: E0625 18:50:29.855095 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.855298 kubelet[3235]: E0625 18:50:29.855279 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.855298 kubelet[3235]: W0625 18:50:29.855293 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.855402 kubelet[3235]: E0625 18:50:29.855305 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:29.855519 kubelet[3235]: E0625 18:50:29.855499 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.855519 kubelet[3235]: W0625 18:50:29.855511 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.855617 kubelet[3235]: E0625 18:50:29.855523 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.855850 kubelet[3235]: E0625 18:50:29.855737 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.855850 kubelet[3235]: W0625 18:50:29.855750 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.855850 kubelet[3235]: E0625 18:50:29.855763 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.864164 kubelet[3235]: E0625 18:50:29.864143 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.864164 kubelet[3235]: W0625 18:50:29.864159 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.864299 kubelet[3235]: E0625 18:50:29.864173 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.864299 kubelet[3235]: I0625 18:50:29.864212 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4bj6\" (UniqueName: \"kubernetes.io/projected/75358c6b-8a1e-4f35-b1bb-5719efbeaed8-kube-api-access-q4bj6\") pod \"75358c6b-8a1e-4f35-b1bb-5719efbeaed8\" (UID: \"75358c6b-8a1e-4f35-b1bb-5719efbeaed8\") " Jun 25 18:50:29.864508 kubelet[3235]: E0625 18:50:29.864487 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.864508 kubelet[3235]: W0625 18:50:29.864504 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.864618 kubelet[3235]: E0625 18:50:29.864523 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:29.864618 kubelet[3235]: I0625 18:50:29.864564 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75358c6b-8a1e-4f35-b1bb-5719efbeaed8-tigera-ca-bundle\") pod \"75358c6b-8a1e-4f35-b1bb-5719efbeaed8\" (UID: \"75358c6b-8a1e-4f35-b1bb-5719efbeaed8\") " Jun 25 18:50:29.865079 kubelet[3235]: E0625 18:50:29.864797 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.865079 kubelet[3235]: W0625 18:50:29.864813 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.865079 kubelet[3235]: E0625 18:50:29.864826 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.865079 kubelet[3235]: I0625 18:50:29.864854 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/75358c6b-8a1e-4f35-b1bb-5719efbeaed8-typha-certs\") pod \"75358c6b-8a1e-4f35-b1bb-5719efbeaed8\" (UID: \"75358c6b-8a1e-4f35-b1bb-5719efbeaed8\") " Jun 25 18:50:29.865429 kubelet[3235]: E0625 18:50:29.865410 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.865429 kubelet[3235]: W0625 18:50:29.865429 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.865539 kubelet[3235]: E0625 18:50:29.865443 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.865539 kubelet[3235]: I0625 18:50:29.865487 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0f7e032d-6f0a-4649-a125-f0c286ed7947-typha-certs\") pod \"calico-typha-5b878f84d8-s52qn\" (UID: \"0f7e032d-6f0a-4649-a125-f0c286ed7947\") " pod="calico-system/calico-typha-5b878f84d8-s52qn" Jun 25 18:50:29.867889 kubelet[3235]: E0625 18:50:29.865753 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.867889 kubelet[3235]: W0625 18:50:29.865770 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.867889 kubelet[3235]: E0625 18:50:29.865785 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:29.867889 kubelet[3235]: I0625 18:50:29.865828 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f7e032d-6f0a-4649-a125-f0c286ed7947-tigera-ca-bundle\") pod \"calico-typha-5b878f84d8-s52qn\" (UID: \"0f7e032d-6f0a-4649-a125-f0c286ed7947\") " pod="calico-system/calico-typha-5b878f84d8-s52qn" Jun 25 18:50:29.867889 kubelet[3235]: E0625 18:50:29.866152 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.867889 kubelet[3235]: W0625 18:50:29.866166 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.867889 kubelet[3235]: E0625 18:50:29.866178 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.867889 kubelet[3235]: I0625 18:50:29.866203 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csgxd\" (UniqueName: \"kubernetes.io/projected/0f7e032d-6f0a-4649-a125-f0c286ed7947-kube-api-access-csgxd\") pod \"calico-typha-5b878f84d8-s52qn\" (UID: \"0f7e032d-6f0a-4649-a125-f0c286ed7947\") " pod="calico-system/calico-typha-5b878f84d8-s52qn" Jun 25 18:50:29.868260 kubelet[3235]: E0625 18:50:29.866775 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.868260 kubelet[3235]: W0625 18:50:29.866911 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.868260 kubelet[3235]: E0625 18:50:29.866927 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.868260 kubelet[3235]: E0625 18:50:29.867325 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.868260 kubelet[3235]: W0625 18:50:29.867338 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.868260 kubelet[3235]: E0625 18:50:29.867353 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:29.868260 kubelet[3235]: E0625 18:50:29.867569 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.868260 kubelet[3235]: W0625 18:50:29.867579 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.868260 kubelet[3235]: E0625 18:50:29.867591 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.870942 kubelet[3235]: I0625 18:50:29.870904 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75358c6b-8a1e-4f35-b1bb-5719efbeaed8-kube-api-access-q4bj6" (OuterVolumeSpecName: "kube-api-access-q4bj6") pod "75358c6b-8a1e-4f35-b1bb-5719efbeaed8" (UID: "75358c6b-8a1e-4f35-b1bb-5719efbeaed8"). InnerVolumeSpecName "kube-api-access-q4bj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:50:29.873859 kubelet[3235]: E0625 18:50:29.871195 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.873859 kubelet[3235]: W0625 18:50:29.871214 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.873859 kubelet[3235]: E0625 18:50:29.871229 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.875603 systemd[1]: var-lib-kubelet-pods-75358c6b\x2d8a1e\x2d4f35\x2db1bb\x2d5719efbeaed8-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jun 25 18:50:29.876095 kubelet[3235]: E0625 18:50:29.875921 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.876095 kubelet[3235]: W0625 18:50:29.875939 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.876522 systemd[1]: var-lib-kubelet-pods-75358c6b\x2d8a1e\x2d4f35\x2db1bb\x2d5719efbeaed8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq4bj6.mount: Deactivated successfully. Jun 25 18:50:29.878541 kubelet[3235]: E0625 18:50:29.876966 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:29.878541 kubelet[3235]: E0625 18:50:29.877962 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.878541 kubelet[3235]: W0625 18:50:29.877975 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.878541 kubelet[3235]: E0625 18:50:29.878135 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.878541 kubelet[3235]: E0625 18:50:29.878428 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.878915 kubelet[3235]: W0625 18:50:29.878440 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.878915 kubelet[3235]: E0625 18:50:29.878774 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.879299 kubelet[3235]: I0625 18:50:29.879145 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75358c6b-8a1e-4f35-b1bb-5719efbeaed8-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "75358c6b-8a1e-4f35-b1bb-5719efbeaed8" (UID: "75358c6b-8a1e-4f35-b1bb-5719efbeaed8"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:50:29.880341 kubelet[3235]: E0625 18:50:29.879367 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.880341 kubelet[3235]: W0625 18:50:29.879400 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.880341 kubelet[3235]: E0625 18:50:29.879424 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.880341 kubelet[3235]: E0625 18:50:29.879739 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.880341 kubelet[3235]: W0625 18:50:29.879752 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.880341 kubelet[3235]: E0625 18:50:29.879766 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.880341 kubelet[3235]: I0625 18:50:29.880266 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75358c6b-8a1e-4f35-b1bb-5719efbeaed8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "75358c6b-8a1e-4f35-b1bb-5719efbeaed8" (UID: "75358c6b-8a1e-4f35-b1bb-5719efbeaed8"). 
InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:50:29.881319 systemd[1]: var-lib-kubelet-pods-75358c6b\x2d8a1e\x2d4f35\x2db1bb\x2d5719efbeaed8-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jun 25 18:50:29.967863 kubelet[3235]: E0625 18:50:29.967714 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.967863 kubelet[3235]: W0625 18:50:29.967743 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.967863 kubelet[3235]: E0625 18:50:29.967773 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.968758 kubelet[3235]: E0625 18:50:29.968721 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.968758 kubelet[3235]: W0625 18:50:29.968743 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.968930 kubelet[3235]: E0625 18:50:29.968768 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.970898 kubelet[3235]: E0625 18:50:29.969168 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.970898 kubelet[3235]: W0625 18:50:29.969185 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.970898 kubelet[3235]: E0625 18:50:29.969215 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:29.970898 kubelet[3235]: I0625 18:50:29.969450 3235 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/75358c6b-8a1e-4f35-b1bb-5719efbeaed8-typha-certs\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:29.970898 kubelet[3235]: I0625 18:50:29.969472 3235 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q4bj6\" (UniqueName: \"kubernetes.io/projected/75358c6b-8a1e-4f35-b1bb-5719efbeaed8-kube-api-access-q4bj6\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:29.970898 kubelet[3235]: I0625 18:50:29.969488 3235 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75358c6b-8a1e-4f35-b1bb-5719efbeaed8-tigera-ca-bundle\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:29.970898 kubelet[3235]: E0625 18:50:29.969555 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.970898 kubelet[3235]: W0625 18:50:29.969566 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.970898 kubelet[3235]: E0625 18:50:29.969592 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.970898 kubelet[3235]: E0625 18:50:29.969824 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.971540 kubelet[3235]: W0625 18:50:29.969835 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.971540 kubelet[3235]: E0625 18:50:29.969857 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.971540 kubelet[3235]: E0625 18:50:29.970081 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.971540 kubelet[3235]: W0625 18:50:29.970092 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.971540 kubelet[3235]: E0625 18:50:29.970116 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:29.971540 kubelet[3235]: E0625 18:50:29.970332 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.971540 kubelet[3235]: W0625 18:50:29.970343 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.971540 kubelet[3235]: E0625 18:50:29.970365 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.971540 kubelet[3235]: E0625 18:50:29.970574 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.971540 kubelet[3235]: W0625 18:50:29.970584 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.972184 kubelet[3235]: E0625 18:50:29.970608 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.972184 kubelet[3235]: E0625 18:50:29.970864 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.972184 kubelet[3235]: W0625 18:50:29.970885 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.972184 kubelet[3235]: E0625 18:50:29.970970 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.972184 kubelet[3235]: E0625 18:50:29.971909 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.972184 kubelet[3235]: W0625 18:50:29.971922 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.973490 kubelet[3235]: E0625 18:50:29.972292 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.973490 kubelet[3235]: E0625 18:50:29.972373 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.973490 kubelet[3235]: W0625 18:50:29.972385 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.973490 kubelet[3235]: E0625 18:50:29.972400 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:29.973490 kubelet[3235]: E0625 18:50:29.972630 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.973490 kubelet[3235]: W0625 18:50:29.972642 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.973490 kubelet[3235]: E0625 18:50:29.972657 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.973490 kubelet[3235]: E0625 18:50:29.973092 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.973490 kubelet[3235]: W0625 18:50:29.973105 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.973490 kubelet[3235]: E0625 18:50:29.973120 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.975313 kubelet[3235]: E0625 18:50:29.974027 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.975313 kubelet[3235]: W0625 18:50:29.974039 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.975313 kubelet[3235]: E0625 18:50:29.974055 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.975313 kubelet[3235]: E0625 18:50:29.974319 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.975313 kubelet[3235]: W0625 18:50:29.974332 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.975313 kubelet[3235]: E0625 18:50:29.974346 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.975313 kubelet[3235]: E0625 18:50:29.975204 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.975313 kubelet[3235]: W0625 18:50:29.975217 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.975313 kubelet[3235]: E0625 18:50:29.975232 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:29.979431 kubelet[3235]: E0625 18:50:29.979314 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.979431 kubelet[3235]: W0625 18:50:29.979331 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.979431 kubelet[3235]: E0625 18:50:29.979345 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:29.985022 kubelet[3235]: E0625 18:50:29.985003 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:29.985022 kubelet[3235]: W0625 18:50:29.985017 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:29.985131 kubelet[3235]: E0625 18:50:29.985032 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.126728 containerd[1702]: time="2024-06-25T18:50:30.126672214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b878f84d8-s52qn,Uid:0f7e032d-6f0a-4649-a125-f0c286ed7947,Namespace:calico-system,Attempt:0,}" Jun 25 18:50:30.210128 containerd[1702]: time="2024-06-25T18:50:30.210015197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:50:30.210128 containerd[1702]: time="2024-06-25T18:50:30.210066798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:30.210128 containerd[1702]: time="2024-06-25T18:50:30.210083998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:50:30.210128 containerd[1702]: time="2024-06-25T18:50:30.210097798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:30.236055 systemd[1]: Started cri-containerd-1ced311a1a8e9a14b901c98f67f57eaefe3e63e5c4efe94a9eefde35bc3c867c.scope - libcontainer container 1ced311a1a8e9a14b901c98f67f57eaefe3e63e5c4efe94a9eefde35bc3c867c. 
Jun 25 18:50:30.273642 containerd[1702]: time="2024-06-25T18:50:30.273577751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b878f84d8-s52qn,Uid:0f7e032d-6f0a-4649-a125-f0c286ed7947,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ced311a1a8e9a14b901c98f67f57eaefe3e63e5c4efe94a9eefde35bc3c867c\"" Jun 25 18:50:30.289769 containerd[1702]: time="2024-06-25T18:50:30.289730419Z" level=info msg="CreateContainer within sandbox \"1ced311a1a8e9a14b901c98f67f57eaefe3e63e5c4efe94a9eefde35bc3c867c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:50:30.331631 containerd[1702]: time="2024-06-25T18:50:30.331579713Z" level=info msg="CreateContainer within sandbox \"1ced311a1a8e9a14b901c98f67f57eaefe3e63e5c4efe94a9eefde35bc3c867c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"06a1fe33ad872afe7630d86cd2d400ac5b9802a9da1164f4e624c03fcd7ad074\"" Jun 25 18:50:30.332284 containerd[1702]: time="2024-06-25T18:50:30.332204224Z" level=info msg="StartContainer for \"06a1fe33ad872afe7630d86cd2d400ac5b9802a9da1164f4e624c03fcd7ad074\"" Jun 25 18:50:30.357037 systemd[1]: Started cri-containerd-06a1fe33ad872afe7630d86cd2d400ac5b9802a9da1164f4e624c03fcd7ad074.scope - libcontainer container 06a1fe33ad872afe7630d86cd2d400ac5b9802a9da1164f4e624c03fcd7ad074. Jun 25 18:50:30.398952 kubelet[3235]: E0625 18:50:30.397808 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hj8p" podUID="0e18d57e-4106-4db0-b549-0f6c4c21f68a" Jun 25 18:50:30.405273 containerd[1702]: time="2024-06-25T18:50:30.403862912Z" level=info msg="StartContainer for \"06a1fe33ad872afe7630d86cd2d400ac5b9802a9da1164f4e624c03fcd7ad074\" returns successfully" Jun 25 18:50:30.415994 systemd[1]: Removed slice kubepods-besteffort-pod75358c6b_8a1e_4f35_b1bb_5719efbeaed8.slice - libcontainer container kubepods-besteffort-pod75358c6b_8a1e_4f35_b1bb_5719efbeaed8.slice. 
Jun 25 18:50:30.509885 kubelet[3235]: I0625 18:50:30.509738 3235 scope.go:117] "RemoveContainer" containerID="0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf" Jun 25 18:50:30.514802 containerd[1702]: time="2024-06-25T18:50:30.514590149Z" level=info msg="RemoveContainer for \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\"" Jun 25 18:50:30.526376 kubelet[3235]: I0625 18:50:30.526325 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b878f84d8-s52qn" podStartSLOduration=7.52606334 podStartE2EDuration="7.52606334s" podCreationTimestamp="2024-06-25 18:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:50:30.523970505 +0000 UTC m=+28.766338737" watchObservedRunningTime="2024-06-25 18:50:30.52606334 +0000 UTC m=+28.768431572" Jun 25 18:50:30.530197 containerd[1702]: time="2024-06-25T18:50:30.529845802Z" level=info msg="RemoveContainer for \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\" returns successfully" Jun 25 18:50:30.530591 kubelet[3235]: I0625 18:50:30.530496 3235 scope.go:117] "RemoveContainer" containerID="0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf" Jun 25 18:50:30.531139 containerd[1702]: time="2024-06-25T18:50:30.531087123Z" level=error msg="ContainerStatus for \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\": not found" Jun 25 18:50:30.531355 kubelet[3235]: E0625 18:50:30.531233 3235 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\": not found" containerID="0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf" Jun 25 18:50:30.531355 kubelet[3235]: I0625 18:50:30.531270 3235 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf"} err="failed to get container status \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b10c7e2fc9e666f242c346839dfa3ca87d69a27e023f9212dadb730d5e83ccf\": not found" Jun 25 18:50:30.560970 kubelet[3235]: E0625 18:50:30.560933 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.560970 kubelet[3235]: W0625 18:50:30.560960 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.560970 kubelet[3235]: E0625 18:50:30.560984 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:30.561437 kubelet[3235]: E0625 18:50:30.561274 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.561437 kubelet[3235]: W0625 18:50:30.561286 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.561437 kubelet[3235]: E0625 18:50:30.561304 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.561644 kubelet[3235]: E0625 18:50:30.561533 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.561644 kubelet[3235]: W0625 18:50:30.561544 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.561644 kubelet[3235]: E0625 18:50:30.561560 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.561934 kubelet[3235]: E0625 18:50:30.561777 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.561934 kubelet[3235]: W0625 18:50:30.561788 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.561934 kubelet[3235]: E0625 18:50:30.561885 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.562452 kubelet[3235]: E0625 18:50:30.562197 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.562452 kubelet[3235]: W0625 18:50:30.562219 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.562452 kubelet[3235]: E0625 18:50:30.562234 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.562989 kubelet[3235]: E0625 18:50:30.562781 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.562989 kubelet[3235]: W0625 18:50:30.562797 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.562989 kubelet[3235]: E0625 18:50:30.562811 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:30.563403 kubelet[3235]: E0625 18:50:30.563361 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.563403 kubelet[3235]: W0625 18:50:30.563375 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.563641 kubelet[3235]: E0625 18:50:30.563548 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.564014 kubelet[3235]: E0625 18:50:30.563916 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.564014 kubelet[3235]: W0625 18:50:30.563929 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.564014 kubelet[3235]: E0625 18:50:30.563946 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.564593 kubelet[3235]: E0625 18:50:30.564511 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.564593 kubelet[3235]: W0625 18:50:30.564526 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.564593 kubelet[3235]: E0625 18:50:30.564552 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.565006 kubelet[3235]: E0625 18:50:30.564995 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.565243 kubelet[3235]: W0625 18:50:30.565089 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.565243 kubelet[3235]: E0625 18:50:30.565106 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.565612 kubelet[3235]: E0625 18:50:30.565453 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.565612 kubelet[3235]: W0625 18:50:30.565464 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.565612 kubelet[3235]: E0625 18:50:30.565484 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:30.566192 kubelet[3235]: E0625 18:50:30.566015 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.566192 kubelet[3235]: W0625 18:50:30.566063 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.566192 kubelet[3235]: E0625 18:50:30.566078 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.566640 kubelet[3235]: E0625 18:50:30.566324 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.566640 kubelet[3235]: W0625 18:50:30.566483 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.566640 kubelet[3235]: E0625 18:50:30.566498 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.567182 kubelet[3235]: E0625 18:50:30.567043 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.567182 kubelet[3235]: W0625 18:50:30.567085 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.567182 kubelet[3235]: E0625 18:50:30.567098 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.567611 kubelet[3235]: E0625 18:50:30.567554 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.567611 kubelet[3235]: W0625 18:50:30.567567 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.567852 kubelet[3235]: E0625 18:50:30.567719 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.574618 kubelet[3235]: E0625 18:50:30.574604 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.574792 kubelet[3235]: W0625 18:50:30.574684 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.574792 kubelet[3235]: E0625 18:50:30.574699 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:30.575161 kubelet[3235]: E0625 18:50:30.575147 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.575323 kubelet[3235]: W0625 18:50:30.575200 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.575323 kubelet[3235]: E0625 18:50:30.575223 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.575785 kubelet[3235]: E0625 18:50:30.575699 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.575785 kubelet[3235]: W0625 18:50:30.575712 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.575785 kubelet[3235]: E0625 18:50:30.575728 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.576146 kubelet[3235]: E0625 18:50:30.576127 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.576146 kubelet[3235]: W0625 18:50:30.576142 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.576278 kubelet[3235]: E0625 18:50:30.576171 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.576427 kubelet[3235]: E0625 18:50:30.576410 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.576427 kubelet[3235]: W0625 18:50:30.576423 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.576547 kubelet[3235]: E0625 18:50:30.576511 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.576827 kubelet[3235]: E0625 18:50:30.576736 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.576827 kubelet[3235]: W0625 18:50:30.576751 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.576827 kubelet[3235]: E0625 18:50:30.576765 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:30.577041 kubelet[3235]: E0625 18:50:30.576967 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.577041 kubelet[3235]: W0625 18:50:30.576979 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.577041 kubelet[3235]: E0625 18:50:30.576996 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.577250 kubelet[3235]: E0625 18:50:30.577232 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.577250 kubelet[3235]: W0625 18:50:30.577247 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.577349 kubelet[3235]: E0625 18:50:30.577265 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.577521 kubelet[3235]: E0625 18:50:30.577505 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.577521 kubelet[3235]: W0625 18:50:30.577518 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.577625 kubelet[3235]: E0625 18:50:30.577538 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.577785 kubelet[3235]: E0625 18:50:30.577769 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.577785 kubelet[3235]: W0625 18:50:30.577782 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.578135 kubelet[3235]: E0625 18:50:30.577801 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.578135 kubelet[3235]: E0625 18:50:30.578086 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.578135 kubelet[3235]: W0625 18:50:30.578098 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.578777 kubelet[3235]: E0625 18:50:30.578414 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:30.578777 kubelet[3235]: E0625 18:50:30.578536 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.578777 kubelet[3235]: W0625 18:50:30.578547 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.578777 kubelet[3235]: E0625 18:50:30.578564 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.579277 kubelet[3235]: E0625 18:50:30.579154 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.579277 kubelet[3235]: W0625 18:50:30.579167 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.579277 kubelet[3235]: E0625 18:50:30.579185 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.579739 kubelet[3235]: E0625 18:50:30.579716 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.579808 kubelet[3235]: W0625 18:50:30.579777 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.579808 kubelet[3235]: E0625 18:50:30.579804 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.580112 kubelet[3235]: E0625 18:50:30.580094 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.580112 kubelet[3235]: W0625 18:50:30.580108 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.580231 kubelet[3235]: E0625 18:50:30.580195 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.580448 kubelet[3235]: E0625 18:50:30.580431 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.580448 kubelet[3235]: W0625 18:50:30.580444 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.580576 kubelet[3235]: E0625 18:50:30.580462 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:50:30.580850 kubelet[3235]: E0625 18:50:30.580778 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.580850 kubelet[3235]: W0625 18:50:30.580797 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.580850 kubelet[3235]: E0625 18:50:30.580816 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.581150 kubelet[3235]: E0625 18:50:30.581134 3235 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:50:30.581150 kubelet[3235]: W0625 18:50:30.581149 3235 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:50:30.581238 kubelet[3235]: E0625 18:50:30.581162 3235 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:50:30.762156 containerd[1702]: time="2024-06-25T18:50:30.762014854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:30.765371 containerd[1702]: time="2024-06-25T18:50:30.765302509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 18:50:30.772891 containerd[1702]: time="2024-06-25T18:50:30.772321225Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:30.778396 containerd[1702]: time="2024-06-25T18:50:30.778347825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:30.779159 containerd[1702]: time="2024-06-25T18:50:30.779106738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 2.720705144s" Jun 25 18:50:30.779246 containerd[1702]: time="2024-06-25T18:50:30.779156338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 18:50:30.781560 containerd[1702]: time="2024-06-25T18:50:30.781532178Z" level=info msg="CreateContainer within sandbox \"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:50:30.834065 containerd[1702]: time="2024-06-25T18:50:30.834013348Z" level=info msg="CreateContainer within sandbox 
\"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9c86037f0195d2956130ba8d07d8ae73b78a16db5f12fe0b88bbe4784e64dbad\"" Jun 25 18:50:30.836066 containerd[1702]: time="2024-06-25T18:50:30.834516257Z" level=info msg="StartContainer for \"9c86037f0195d2956130ba8d07d8ae73b78a16db5f12fe0b88bbe4784e64dbad\"" Jun 25 18:50:30.862049 systemd[1]: Started cri-containerd-9c86037f0195d2956130ba8d07d8ae73b78a16db5f12fe0b88bbe4784e64dbad.scope - libcontainer container 9c86037f0195d2956130ba8d07d8ae73b78a16db5f12fe0b88bbe4784e64dbad. Jun 25 18:50:30.890909 containerd[1702]: time="2024-06-25T18:50:30.890629488Z" level=info msg="StartContainer for \"9c86037f0195d2956130ba8d07d8ae73b78a16db5f12fe0b88bbe4784e64dbad\" returns successfully" Jun 25 18:50:30.899425 systemd[1]: cri-containerd-9c86037f0195d2956130ba8d07d8ae73b78a16db5f12fe0b88bbe4784e64dbad.scope: Deactivated successfully. Jun 25 18:50:31.037215 containerd[1702]: time="2024-06-25T18:50:31.037033116Z" level=info msg="shim disconnected" id=9c86037f0195d2956130ba8d07d8ae73b78a16db5f12fe0b88bbe4784e64dbad namespace=k8s.io Jun 25 18:50:31.037215 containerd[1702]: time="2024-06-25T18:50:31.037113018Z" level=warning msg="cleaning up after shim disconnected" id=9c86037f0195d2956130ba8d07d8ae73b78a16db5f12fe0b88bbe4784e64dbad namespace=k8s.io Jun 25 18:50:31.037215 containerd[1702]: time="2024-06-25T18:50:31.037149918Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:50:31.519794 containerd[1702]: time="2024-06-25T18:50:31.519609022Z" level=info msg="StopPodSandbox for \"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\"" Jun 25 18:50:31.519794 containerd[1702]: time="2024-06-25T18:50:31.519663223Z" level=info msg="Container to stop \"9c86037f0195d2956130ba8d07d8ae73b78a16db5f12fe0b88bbe4784e64dbad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:50:31.527024 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681-shm.mount: Deactivated successfully. Jun 25 18:50:31.534859 systemd[1]: cri-containerd-a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681.scope: Deactivated successfully. Jun 25 18:50:31.562837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681-rootfs.mount: Deactivated successfully. 
Jun 25 18:50:31.598739 containerd[1702]: time="2024-06-25T18:50:31.598339028Z" level=info msg="shim disconnected" id=a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681 namespace=k8s.io Jun 25 18:50:31.598739 containerd[1702]: time="2024-06-25T18:50:31.598519131Z" level=warning msg="cleaning up after shim disconnected" id=a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681 namespace=k8s.io Jun 25 18:50:31.598739 containerd[1702]: time="2024-06-25T18:50:31.598533431Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:50:31.611661 containerd[1702]: time="2024-06-25T18:50:31.611618548Z" level=info msg="TearDown network for sandbox \"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\" successfully" Jun 25 18:50:31.611661 containerd[1702]: time="2024-06-25T18:50:31.611652349Z" level=info msg="StopPodSandbox for \"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\" returns successfully" Jun 25 18:50:31.687766 kubelet[3235]: I0625 18:50:31.687731 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40af61f3-39ee-49e8-9983-7645d139a77e-tigera-ca-bundle\") pod \"40af61f3-39ee-49e8-9983-7645d139a77e\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " Jun 25 18:50:31.687766 kubelet[3235]: I0625 18:50:31.687775 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-cni-log-dir\") pod \"40af61f3-39ee-49e8-9983-7645d139a77e\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " Jun 25 18:50:31.688840 kubelet[3235]: I0625 18:50:31.687796 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-xtables-lock\") pod \"40af61f3-39ee-49e8-9983-7645d139a77e\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " Jun 25 18:50:31.688840 kubelet[3235]: I0625 18:50:31.687822 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c26np\" (UniqueName: \"kubernetes.io/projected/40af61f3-39ee-49e8-9983-7645d139a77e-kube-api-access-c26np\") pod \"40af61f3-39ee-49e8-9983-7645d139a77e\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " Jun 25 18:50:31.688840 kubelet[3235]: I0625 18:50:31.687842 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-cni-bin-dir\") pod \"40af61f3-39ee-49e8-9983-7645d139a77e\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " Jun 25 18:50:31.688840 kubelet[3235]: I0625 18:50:31.687860 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-policysync\") pod \"40af61f3-39ee-49e8-9983-7645d139a77e\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " Jun 25 18:50:31.688840 kubelet[3235]: I0625 18:50:31.687905 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-flexvol-driver-host\") pod \"40af61f3-39ee-49e8-9983-7645d139a77e\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " Jun 25 18:50:31.688840 kubelet[3235]: I0625 18:50:31.687929 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume 
started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-var-lib-calico\") pod \"40af61f3-39ee-49e8-9983-7645d139a77e\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " Jun 25 18:50:31.689156 kubelet[3235]: I0625 18:50:31.687944 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "40af61f3-39ee-49e8-9983-7645d139a77e" (UID: "40af61f3-39ee-49e8-9983-7645d139a77e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:50:31.689156 kubelet[3235]: I0625 18:50:31.687972 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "40af61f3-39ee-49e8-9983-7645d139a77e" (UID: "40af61f3-39ee-49e8-9983-7645d139a77e"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:50:31.689156 kubelet[3235]: I0625 18:50:31.687950 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-cni-net-dir\") pod \"40af61f3-39ee-49e8-9983-7645d139a77e\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " Jun 25 18:50:31.689156 kubelet[3235]: I0625 18:50:31.688008 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-lib-modules\") pod \"40af61f3-39ee-49e8-9983-7645d139a77e\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " Jun 25 18:50:31.689156 kubelet[3235]: I0625 18:50:31.688034 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-var-run-calico\") pod \"40af61f3-39ee-49e8-9983-7645d139a77e\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " Jun 25 18:50:31.689156 kubelet[3235]: I0625 18:50:31.688062 3235 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/40af61f3-39ee-49e8-9983-7645d139a77e-node-certs\") pod \"40af61f3-39ee-49e8-9983-7645d139a77e\" (UID: \"40af61f3-39ee-49e8-9983-7645d139a77e\") " Jun 25 18:50:31.689422 kubelet[3235]: I0625 18:50:31.688115 3235 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-xtables-lock\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:31.689422 kubelet[3235]: I0625 18:50:31.688130 3235 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-cni-net-dir\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:31.693264 kubelet[3235]: I0625 18:50:31.691434 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40af61f3-39ee-49e8-9983-7645d139a77e-kube-api-access-c26np" (OuterVolumeSpecName: "kube-api-access-c26np") pod "40af61f3-39ee-49e8-9983-7645d139a77e" (UID: "40af61f3-39ee-49e8-9983-7645d139a77e"). InnerVolumeSpecName "kube-api-access-c26np". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:50:31.693264 kubelet[3235]: I0625 18:50:31.691516 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "40af61f3-39ee-49e8-9983-7645d139a77e" (UID: "40af61f3-39ee-49e8-9983-7645d139a77e"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:50:31.693264 kubelet[3235]: I0625 18:50:31.691541 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-policysync" (OuterVolumeSpecName: "policysync") pod "40af61f3-39ee-49e8-9983-7645d139a77e" (UID: "40af61f3-39ee-49e8-9983-7645d139a77e"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:50:31.693264 kubelet[3235]: I0625 18:50:31.691565 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "40af61f3-39ee-49e8-9983-7645d139a77e" (UID: "40af61f3-39ee-49e8-9983-7645d139a77e"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:50:31.693264 kubelet[3235]: I0625 18:50:31.691587 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "40af61f3-39ee-49e8-9983-7645d139a77e" (UID: "40af61f3-39ee-49e8-9983-7645d139a77e"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:50:31.693560 kubelet[3235]: I0625 18:50:31.691609 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "40af61f3-39ee-49e8-9983-7645d139a77e" (UID: "40af61f3-39ee-49e8-9983-7645d139a77e"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:50:31.693560 kubelet[3235]: I0625 18:50:31.692034 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40af61f3-39ee-49e8-9983-7645d139a77e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "40af61f3-39ee-49e8-9983-7645d139a77e" (UID: "40af61f3-39ee-49e8-9983-7645d139a77e"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:50:31.693560 kubelet[3235]: I0625 18:50:31.692078 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "40af61f3-39ee-49e8-9983-7645d139a77e" (UID: "40af61f3-39ee-49e8-9983-7645d139a77e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:50:31.693560 kubelet[3235]: I0625 18:50:31.692099 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "40af61f3-39ee-49e8-9983-7645d139a77e" (UID: "40af61f3-39ee-49e8-9983-7645d139a77e"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:50:31.693560 kubelet[3235]: I0625 18:50:31.693170 3235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40af61f3-39ee-49e8-9983-7645d139a77e-node-certs" (OuterVolumeSpecName: "node-certs") pod "40af61f3-39ee-49e8-9983-7645d139a77e" (UID: "40af61f3-39ee-49e8-9983-7645d139a77e"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:50:31.696442 systemd[1]: var-lib-kubelet-pods-40af61f3\x2d39ee\x2d49e8\x2d9983\x2d7645d139a77e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc26np.mount: Deactivated successfully. Jun 25 18:50:31.696575 systemd[1]: var-lib-kubelet-pods-40af61f3\x2d39ee\x2d49e8\x2d9983\x2d7645d139a77e-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jun 25 18:50:31.788584 kubelet[3235]: I0625 18:50:31.788433 3235 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-cni-bin-dir\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:31.788584 kubelet[3235]: I0625 18:50:31.788480 3235 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-policysync\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:31.788584 kubelet[3235]: I0625 18:50:31.788495 3235 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-flexvol-driver-host\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:31.788584 kubelet[3235]: I0625 18:50:31.788507 3235 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-var-lib-calico\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:31.788584 kubelet[3235]: I0625 18:50:31.788524 3235 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-lib-modules\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:31.788584 kubelet[3235]: I0625 18:50:31.788537 3235 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-var-run-calico\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:31.788584 kubelet[3235]: I0625 18:50:31.788550 3235 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/40af61f3-39ee-49e8-9983-7645d139a77e-node-certs\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:31.788584 kubelet[3235]: I0625 18:50:31.788563 3235 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40af61f3-39ee-49e8-9983-7645d139a77e-tigera-ca-bundle\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:31.789183 kubelet[3235]: I0625 18:50:31.788576 3235 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/40af61f3-39ee-49e8-9983-7645d139a77e-cni-log-dir\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:31.789183 kubelet[3235]: I0625 18:50:31.788589 3235 reconciler_common.go:289] "Volume detached for volume 
\"kube-api-access-c26np\" (UniqueName: \"kubernetes.io/projected/40af61f3-39ee-49e8-9983-7645d139a77e-kube-api-access-c26np\") on node \"ci-4012.0.0-a-c5aaeb7e49\" DevicePath \"\"" Jun 25 18:50:32.398436 kubelet[3235]: E0625 18:50:32.398002 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hj8p" podUID="0e18d57e-4106-4db0-b549-0f6c4c21f68a" Jun 25 18:50:32.402913 kubelet[3235]: I0625 18:50:32.400493 3235 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75358c6b-8a1e-4f35-b1bb-5719efbeaed8" path="/var/lib/kubelet/pods/75358c6b-8a1e-4f35-b1bb-5719efbeaed8/volumes" Jun 25 18:50:32.409282 systemd[1]: Removed slice kubepods-besteffort-pod40af61f3_39ee_49e8_9983_7645d139a77e.slice - libcontainer container kubepods-besteffort-pod40af61f3_39ee_49e8_9983_7645d139a77e.slice. Jun 25 18:50:32.529317 kubelet[3235]: I0625 18:50:32.529288 3235 scope.go:117] "RemoveContainer" containerID="9c86037f0195d2956130ba8d07d8ae73b78a16db5f12fe0b88bbe4784e64dbad" Jun 25 18:50:32.532239 containerd[1702]: time="2024-06-25T18:50:32.532203620Z" level=info msg="RemoveContainer for \"9c86037f0195d2956130ba8d07d8ae73b78a16db5f12fe0b88bbe4784e64dbad\"" Jun 25 18:50:32.545451 containerd[1702]: time="2024-06-25T18:50:32.545052633Z" level=info msg="RemoveContainer for \"9c86037f0195d2956130ba8d07d8ae73b78a16db5f12fe0b88bbe4784e64dbad\" returns successfully" Jun 25 18:50:32.578848 kubelet[3235]: I0625 18:50:32.578236 3235 topology_manager.go:215] "Topology Admit Handler" podUID="1209ac07-a306-47f1-b9eb-c1ede876a0b8" podNamespace="calico-system" podName="calico-node-ptl54" Jun 25 18:50:32.580973 kubelet[3235]: E0625 18:50:32.580949 3235 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="40af61f3-39ee-49e8-9983-7645d139a77e" containerName="flexvol-driver" Jun 25 18:50:32.581678 kubelet[3235]: I0625 18:50:32.581492 3235 memory_manager.go:354] "RemoveStaleState removing state" podUID="40af61f3-39ee-49e8-9983-7645d139a77e" containerName="flexvol-driver" Jun 25 18:50:32.594475 kubelet[3235]: I0625 18:50:32.594446 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1209ac07-a306-47f1-b9eb-c1ede876a0b8-tigera-ca-bundle\") pod \"calico-node-ptl54\" (UID: \"1209ac07-a306-47f1-b9eb-c1ede876a0b8\") " pod="calico-system/calico-node-ptl54" Jun 25 18:50:32.594936 kubelet[3235]: I0625 18:50:32.594735 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1209ac07-a306-47f1-b9eb-c1ede876a0b8-lib-modules\") pod \"calico-node-ptl54\" (UID: \"1209ac07-a306-47f1-b9eb-c1ede876a0b8\") " pod="calico-system/calico-node-ptl54" Jun 25 18:50:32.594936 kubelet[3235]: I0625 18:50:32.594766 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1209ac07-a306-47f1-b9eb-c1ede876a0b8-cni-bin-dir\") pod \"calico-node-ptl54\" (UID: \"1209ac07-a306-47f1-b9eb-c1ede876a0b8\") " pod="calico-system/calico-node-ptl54" Jun 25 18:50:32.595796 kubelet[3235]: I0625 18:50:32.595364 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/1209ac07-a306-47f1-b9eb-c1ede876a0b8-cni-log-dir\") pod \"calico-node-ptl54\" (UID: \"1209ac07-a306-47f1-b9eb-c1ede876a0b8\") " pod="calico-system/calico-node-ptl54" Jun 25 18:50:32.595796 kubelet[3235]: I0625 18:50:32.595404 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1209ac07-a306-47f1-b9eb-c1ede876a0b8-policysync\") pod \"calico-node-ptl54\" (UID: \"1209ac07-a306-47f1-b9eb-c1ede876a0b8\") " pod="calico-system/calico-node-ptl54" Jun 25 18:50:32.595796 kubelet[3235]: I0625 18:50:32.595426 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1209ac07-a306-47f1-b9eb-c1ede876a0b8-cni-net-dir\") pod \"calico-node-ptl54\" (UID: \"1209ac07-a306-47f1-b9eb-c1ede876a0b8\") " pod="calico-system/calico-node-ptl54" Jun 25 18:50:32.595796 kubelet[3235]: I0625 18:50:32.595449 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1209ac07-a306-47f1-b9eb-c1ede876a0b8-var-lib-calico\") pod \"calico-node-ptl54\" (UID: \"1209ac07-a306-47f1-b9eb-c1ede876a0b8\") " pod="calico-system/calico-node-ptl54" Jun 25 18:50:32.595796 kubelet[3235]: I0625 18:50:32.595472 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spfng\" (UniqueName: \"kubernetes.io/projected/1209ac07-a306-47f1-b9eb-c1ede876a0b8-kube-api-access-spfng\") pod \"calico-node-ptl54\" (UID: \"1209ac07-a306-47f1-b9eb-c1ede876a0b8\") " pod="calico-system/calico-node-ptl54" Jun 25 18:50:32.597024 kubelet[3235]: I0625 18:50:32.595495 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1209ac07-a306-47f1-b9eb-c1ede876a0b8-node-certs\") pod \"calico-node-ptl54\" (UID: \"1209ac07-a306-47f1-b9eb-c1ede876a0b8\") " pod="calico-system/calico-node-ptl54" Jun 25 18:50:32.597024 kubelet[3235]: I0625 18:50:32.595516 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1209ac07-a306-47f1-b9eb-c1ede876a0b8-var-run-calico\") pod \"calico-node-ptl54\" (UID: \"1209ac07-a306-47f1-b9eb-c1ede876a0b8\") " pod="calico-system/calico-node-ptl54" Jun 25 18:50:32.597024 kubelet[3235]: I0625 18:50:32.595540 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1209ac07-a306-47f1-b9eb-c1ede876a0b8-xtables-lock\") pod \"calico-node-ptl54\" (UID: \"1209ac07-a306-47f1-b9eb-c1ede876a0b8\") " pod="calico-system/calico-node-ptl54" Jun 25 18:50:32.597024 kubelet[3235]: I0625 18:50:32.595568 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1209ac07-a306-47f1-b9eb-c1ede876a0b8-flexvol-driver-host\") pod \"calico-node-ptl54\" (UID: \"1209ac07-a306-47f1-b9eb-c1ede876a0b8\") " pod="calico-system/calico-node-ptl54" Jun 25 18:50:32.598076 systemd[1]: Created slice kubepods-besteffort-pod1209ac07_a306_47f1_b9eb_c1ede876a0b8.slice - libcontainer container kubepods-besteffort-pod1209ac07_a306_47f1_b9eb_c1ede876a0b8.slice. 
Jun 25 18:50:32.903513 containerd[1702]: time="2024-06-25T18:50:32.903460979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ptl54,Uid:1209ac07-a306-47f1-b9eb-c1ede876a0b8,Namespace:calico-system,Attempt:0,}" Jun 25 18:50:32.955309 containerd[1702]: time="2024-06-25T18:50:32.955130036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:50:32.955309 containerd[1702]: time="2024-06-25T18:50:32.955193837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:32.955309 containerd[1702]: time="2024-06-25T18:50:32.955226838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:50:32.955309 containerd[1702]: time="2024-06-25T18:50:32.955265539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:32.987023 systemd[1]: Started cri-containerd-e4dd49b8dcceb62c90c2904cb38f511cc30afb8e48052fa44cc9db99bc16a21b.scope - libcontainer container e4dd49b8dcceb62c90c2904cb38f511cc30afb8e48052fa44cc9db99bc16a21b. Jun 25 18:50:33.009694 containerd[1702]: time="2024-06-25T18:50:33.009645041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ptl54,Uid:1209ac07-a306-47f1-b9eb-c1ede876a0b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4dd49b8dcceb62c90c2904cb38f511cc30afb8e48052fa44cc9db99bc16a21b\"" Jun 25 18:50:33.012918 containerd[1702]: time="2024-06-25T18:50:33.012845694Z" level=info msg="CreateContainer within sandbox \"e4dd49b8dcceb62c90c2904cb38f511cc30afb8e48052fa44cc9db99bc16a21b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:50:33.059720 containerd[1702]: time="2024-06-25T18:50:33.059669171Z" level=info msg="CreateContainer within sandbox \"e4dd49b8dcceb62c90c2904cb38f511cc30afb8e48052fa44cc9db99bc16a21b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"524dfe3d7b055c5b410dd97446d88b4e413af9e3a2d111a1167e55f65f1a0258\"" Jun 25 18:50:33.060408 containerd[1702]: time="2024-06-25T18:50:33.060370182Z" level=info msg="StartContainer for \"524dfe3d7b055c5b410dd97446d88b4e413af9e3a2d111a1167e55f65f1a0258\"" Jun 25 18:50:33.087029 systemd[1]: Started cri-containerd-524dfe3d7b055c5b410dd97446d88b4e413af9e3a2d111a1167e55f65f1a0258.scope - libcontainer container 524dfe3d7b055c5b410dd97446d88b4e413af9e3a2d111a1167e55f65f1a0258. Jun 25 18:50:33.125348 containerd[1702]: time="2024-06-25T18:50:33.125303759Z" level=info msg="StartContainer for \"524dfe3d7b055c5b410dd97446d88b4e413af9e3a2d111a1167e55f65f1a0258\" returns successfully" Jun 25 18:50:33.136440 systemd[1]: cri-containerd-524dfe3d7b055c5b410dd97446d88b4e413af9e3a2d111a1167e55f65f1a0258.scope: Deactivated successfully. 
Jun 25 18:50:33.203163 containerd[1702]: time="2024-06-25T18:50:33.202910547Z" level=info msg="shim disconnected" id=524dfe3d7b055c5b410dd97446d88b4e413af9e3a2d111a1167e55f65f1a0258 namespace=k8s.io Jun 25 18:50:33.203163 containerd[1702]: time="2024-06-25T18:50:33.203087950Z" level=warning msg="cleaning up after shim disconnected" id=524dfe3d7b055c5b410dd97446d88b4e413af9e3a2d111a1167e55f65f1a0258 namespace=k8s.io Jun 25 18:50:33.203163 containerd[1702]: time="2024-06-25T18:50:33.203101650Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:50:33.217024 containerd[1702]: time="2024-06-25T18:50:33.216965280Z" level=warning msg="cleanup warnings time=\"2024-06-25T18:50:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 18:50:33.534996 containerd[1702]: time="2024-06-25T18:50:33.534754752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:50:34.396913 kubelet[3235]: E0625 18:50:34.395985 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hj8p" podUID="0e18d57e-4106-4db0-b549-0f6c4c21f68a" Jun 25 18:50:34.399795 kubelet[3235]: I0625 18:50:34.399741 3235 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40af61f3-39ee-49e8-9983-7645d139a77e" path="/var/lib/kubelet/pods/40af61f3-39ee-49e8-9983-7645d139a77e/volumes" Jun 25 18:50:36.395951 kubelet[3235]: E0625 18:50:36.395523 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hj8p" podUID="0e18d57e-4106-4db0-b549-0f6c4c21f68a" Jun 25 18:50:38.395987 kubelet[3235]: E0625 18:50:38.395941 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hj8p" podUID="0e18d57e-4106-4db0-b549-0f6c4c21f68a" Jun 25 18:50:39.202754 containerd[1702]: time="2024-06-25T18:50:39.202691484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:39.205131 containerd[1702]: time="2024-06-25T18:50:39.205045222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 18:50:39.210110 containerd[1702]: time="2024-06-25T18:50:39.210047702Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:39.216382 containerd[1702]: time="2024-06-25T18:50:39.216331803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:39.217529 containerd[1702]: time="2024-06-25T18:50:39.217050515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id 
\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.682250561s" Jun 25 18:50:39.217529 containerd[1702]: time="2024-06-25T18:50:39.217087115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 18:50:39.220275 containerd[1702]: time="2024-06-25T18:50:39.219995462Z" level=info msg="CreateContainer within sandbox \"e4dd49b8dcceb62c90c2904cb38f511cc30afb8e48052fa44cc9db99bc16a21b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:50:39.271935 containerd[1702]: time="2024-06-25T18:50:39.271864196Z" level=info msg="CreateContainer within sandbox \"e4dd49b8dcceb62c90c2904cb38f511cc30afb8e48052fa44cc9db99bc16a21b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"70177b3c0c79af1709cb02716c55d686374200bdfae71f6edbcc63b4cced99e5\"" Jun 25 18:50:39.272638 containerd[1702]: time="2024-06-25T18:50:39.272488606Z" level=info msg="StartContainer for \"70177b3c0c79af1709cb02716c55d686374200bdfae71f6edbcc63b4cced99e5\"" Jun 25 18:50:39.307025 systemd[1]: Started cri-containerd-70177b3c0c79af1709cb02716c55d686374200bdfae71f6edbcc63b4cced99e5.scope - libcontainer container 70177b3c0c79af1709cb02716c55d686374200bdfae71f6edbcc63b4cced99e5. Jun 25 18:50:39.341214 containerd[1702]: time="2024-06-25T18:50:39.340537299Z" level=info msg="StartContainer for \"70177b3c0c79af1709cb02716c55d686374200bdfae71f6edbcc63b4cced99e5\" returns successfully" Jun 25 18:50:40.397502 kubelet[3235]: E0625 18:50:40.396113 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hj8p" podUID="0e18d57e-4106-4db0-b549-0f6c4c21f68a" Jun 25 18:50:40.661218 containerd[1702]: time="2024-06-25T18:50:40.661074821Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:50:40.663473 systemd[1]: cri-containerd-70177b3c0c79af1709cb02716c55d686374200bdfae71f6edbcc63b4cced99e5.scope: Deactivated successfully. Jun 25 18:50:40.671106 kubelet[3235]: I0625 18:50:40.670913 3235 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 18:50:40.696673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70177b3c0c79af1709cb02716c55d686374200bdfae71f6edbcc63b4cced99e5-rootfs.mount: Deactivated successfully. 
Jun 25 18:50:40.706467 kubelet[3235]: I0625 18:50:40.705790 3235 topology_manager.go:215] "Topology Admit Handler" podUID="d53de7a9-dec5-41a7-8464-1fd397879242" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bm8fl" Jun 25 18:50:40.715727 kubelet[3235]: I0625 18:50:40.714827 3235 topology_manager.go:215] "Topology Admit Handler" podUID="42e4e351-9586-4756-9f34-473dd41b7f92" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9txp5" Jun 25 18:50:41.188616 kubelet[3235]: I0625 18:50:40.717864 3235 topology_manager.go:215] "Topology Admit Handler" podUID="cdf8ad2a-e8cc-4207-ae7f-a4803ee52206" podNamespace="calico-system" podName="calico-kube-controllers-69b7dfffc8-mbwz4" Jun 25 18:50:41.188616 kubelet[3235]: I0625 18:50:40.749989 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42e4e351-9586-4756-9f34-473dd41b7f92-config-volume\") pod \"coredns-7db6d8ff4d-9txp5\" (UID: \"42e4e351-9586-4756-9f34-473dd41b7f92\") " pod="kube-system/coredns-7db6d8ff4d-9txp5" Jun 25 18:50:41.188616 kubelet[3235]: I0625 18:50:40.750018 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dwkm\" (UniqueName: \"kubernetes.io/projected/cdf8ad2a-e8cc-4207-ae7f-a4803ee52206-kube-api-access-4dwkm\") pod \"calico-kube-controllers-69b7dfffc8-mbwz4\" (UID: \"cdf8ad2a-e8cc-4207-ae7f-a4803ee52206\") " pod="calico-system/calico-kube-controllers-69b7dfffc8-mbwz4" Jun 25 18:50:41.188616 kubelet[3235]: I0625 18:50:40.750040 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d53de7a9-dec5-41a7-8464-1fd397879242-config-volume\") pod \"coredns-7db6d8ff4d-bm8fl\" (UID: \"d53de7a9-dec5-41a7-8464-1fd397879242\") " pod="kube-system/coredns-7db6d8ff4d-bm8fl" Jun 25 18:50:41.188616 kubelet[3235]: I0625 18:50:40.750067 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d6rq\" (UniqueName: \"kubernetes.io/projected/d53de7a9-dec5-41a7-8464-1fd397879242-kube-api-access-7d6rq\") pod \"coredns-7db6d8ff4d-bm8fl\" (UID: \"d53de7a9-dec5-41a7-8464-1fd397879242\") " pod="kube-system/coredns-7db6d8ff4d-bm8fl" Jun 25 18:50:40.723426 systemd[1]: Created slice kubepods-burstable-podd53de7a9_dec5_41a7_8464_1fd397879242.slice - libcontainer container kubepods-burstable-podd53de7a9_dec5_41a7_8464_1fd397879242.slice. 
Jun 25 18:50:41.189151 kubelet[3235]: I0625 18:50:40.750123 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zv2m\" (UniqueName: \"kubernetes.io/projected/42e4e351-9586-4756-9f34-473dd41b7f92-kube-api-access-9zv2m\") pod \"coredns-7db6d8ff4d-9txp5\" (UID: \"42e4e351-9586-4756-9f34-473dd41b7f92\") " pod="kube-system/coredns-7db6d8ff4d-9txp5" Jun 25 18:50:41.189151 kubelet[3235]: I0625 18:50:40.750156 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdf8ad2a-e8cc-4207-ae7f-a4803ee52206-tigera-ca-bundle\") pod \"calico-kube-controllers-69b7dfffc8-mbwz4\" (UID: \"cdf8ad2a-e8cc-4207-ae7f-a4803ee52206\") " pod="calico-system/calico-kube-controllers-69b7dfffc8-mbwz4" Jun 25 18:50:40.734097 systemd[1]: Created slice kubepods-burstable-pod42e4e351_9586_4756_9f34_473dd41b7f92.slice - libcontainer container kubepods-burstable-pod42e4e351_9586_4756_9f34_473dd41b7f92.slice. Jun 25 18:50:40.743632 systemd[1]: Created slice kubepods-besteffort-podcdf8ad2a_e8cc_4207_ae7f_a4803ee52206.slice - libcontainer container kubepods-besteffort-podcdf8ad2a_e8cc_4207_ae7f_a4803ee52206.slice. Jun 25 18:50:41.491494 containerd[1702]: time="2024-06-25T18:50:41.491299863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69b7dfffc8-mbwz4,Uid:cdf8ad2a-e8cc-4207-ae7f-a4803ee52206,Namespace:calico-system,Attempt:0,}" Jun 25 18:50:41.495064 containerd[1702]: time="2024-06-25T18:50:41.495027123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bm8fl,Uid:d53de7a9-dec5-41a7-8464-1fd397879242,Namespace:kube-system,Attempt:0,}" Jun 25 18:50:41.497622 containerd[1702]: time="2024-06-25T18:50:41.497585964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9txp5,Uid:42e4e351-9586-4756-9f34-473dd41b7f92,Namespace:kube-system,Attempt:0,}" Jun 25 18:50:42.299082 containerd[1702]: time="2024-06-25T18:50:42.298994243Z" level=info msg="shim disconnected" id=70177b3c0c79af1709cb02716c55d686374200bdfae71f6edbcc63b4cced99e5 namespace=k8s.io Jun 25 18:50:42.299082 containerd[1702]: time="2024-06-25T18:50:42.299067144Z" level=warning msg="cleaning up after shim disconnected" id=70177b3c0c79af1709cb02716c55d686374200bdfae71f6edbcc63b4cced99e5 namespace=k8s.io Jun 25 18:50:42.299082 containerd[1702]: time="2024-06-25T18:50:42.299079844Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:50:42.412661 systemd[1]: Created slice kubepods-besteffort-pod0e18d57e_4106_4db0_b549_0f6c4c21f68a.slice - libcontainer container kubepods-besteffort-pod0e18d57e_4106_4db0_b549_0f6c4c21f68a.slice. 
Jun 25 18:50:42.424419 containerd[1702]: time="2024-06-25T18:50:42.424346057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7hj8p,Uid:0e18d57e-4106-4db0-b549-0f6c4c21f68a,Namespace:calico-system,Attempt:0,}" Jun 25 18:50:42.525266 containerd[1702]: time="2024-06-25T18:50:42.525043575Z" level=error msg="Failed to destroy network for sandbox \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.525478 containerd[1702]: time="2024-06-25T18:50:42.525443682Z" level=error msg="encountered an error cleaning up failed sandbox \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.525724 containerd[1702]: time="2024-06-25T18:50:42.525502183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9txp5,Uid:42e4e351-9586-4756-9f34-473dd41b7f92,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.525931 kubelet[3235]: E0625 18:50:42.525671 3235 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.525931 kubelet[3235]: E0625 18:50:42.525743 3235 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9txp5" Jun 25 18:50:42.525931 kubelet[3235]: E0625 18:50:42.525770 3235 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9txp5" Jun 25 18:50:42.526547 kubelet[3235]: E0625 18:50:42.525910 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9txp5_kube-system(42e4e351-9586-4756-9f34-473dd41b7f92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9txp5_kube-system(42e4e351-9586-4756-9f34-473dd41b7f92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9txp5" podUID="42e4e351-9586-4756-9f34-473dd41b7f92" Jun 25 18:50:42.541061 containerd[1702]: time="2024-06-25T18:50:42.540405722Z" level=error msg="Failed to destroy network for sandbox \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.541671 containerd[1702]: time="2024-06-25T18:50:42.541623642Z" level=error msg="encountered an error cleaning up failed sandbox \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.541757 containerd[1702]: time="2024-06-25T18:50:42.541692743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bm8fl,Uid:d53de7a9-dec5-41a7-8464-1fd397879242,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.542047 kubelet[3235]: E0625 18:50:42.541855 3235 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.542047 kubelet[3235]: E0625 18:50:42.541971 3235 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bm8fl" Jun 25 18:50:42.542047 kubelet[3235]: E0625 18:50:42.542013 3235 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bm8fl" Jun 25 18:50:42.543041 kubelet[3235]: E0625 18:50:42.542846 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bm8fl_kube-system(d53de7a9-dec5-41a7-8464-1fd397879242)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bm8fl_kube-system(d53de7a9-dec5-41a7-8464-1fd397879242)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bm8fl" podUID="d53de7a9-dec5-41a7-8464-1fd397879242" Jun 25 18:50:42.546756 containerd[1702]: time="2024-06-25T18:50:42.546525520Z" level=error msg="Failed to destroy network for sandbox \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.547087 containerd[1702]: time="2024-06-25T18:50:42.547023628Z" level=error msg="encountered an error cleaning up failed sandbox \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.547270 containerd[1702]: time="2024-06-25T18:50:42.547144130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69b7dfffc8-mbwz4,Uid:cdf8ad2a-e8cc-4207-ae7f-a4803ee52206,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.547493 kubelet[3235]: E0625 18:50:42.547464 3235 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.547660 kubelet[3235]: E0625 18:50:42.547516 3235 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69b7dfffc8-mbwz4" Jun 25 18:50:42.547660 kubelet[3235]: E0625 18:50:42.547541 3235 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69b7dfffc8-mbwz4" Jun 25 18:50:42.547660 kubelet[3235]: E0625 18:50:42.547581 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69b7dfffc8-mbwz4_calico-system(cdf8ad2a-e8cc-4207-ae7f-a4803ee52206)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-69b7dfffc8-mbwz4_calico-system(cdf8ad2a-e8cc-4207-ae7f-a4803ee52206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69b7dfffc8-mbwz4" podUID="cdf8ad2a-e8cc-4207-ae7f-a4803ee52206" Jun 25 18:50:42.561444 containerd[1702]: time="2024-06-25T18:50:42.559441928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 18:50:42.568606 kubelet[3235]: I0625 18:50:42.568469 3235 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:50:42.574909 containerd[1702]: time="2024-06-25T18:50:42.573429253Z" level=info msg="StopPodSandbox for \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\"" Jun 25 18:50:42.574909 containerd[1702]: time="2024-06-25T18:50:42.573691857Z" level=info msg="Ensure that sandbox 3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc in task-service has been cleanup successfully" Jun 25 18:50:42.592513 containerd[1702]: time="2024-06-25T18:50:42.592455359Z" level=error msg="Failed to destroy network for sandbox \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.592885 containerd[1702]: time="2024-06-25T18:50:42.592830365Z" level=error msg="encountered an error cleaning up failed sandbox \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.593002 containerd[1702]: time="2024-06-25T18:50:42.592925866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7hj8p,Uid:0e18d57e-4106-4db0-b549-0f6c4c21f68a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.593531 kubelet[3235]: E0625 18:50:42.593496 3235 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.593915 kubelet[3235]: E0625 18:50:42.593711 3235 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7hj8p" Jun 25 18:50:42.593915 kubelet[3235]: E0625 18:50:42.593739 3235 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7hj8p" Jun 25 18:50:42.593915 kubelet[3235]: E0625 18:50:42.593800 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7hj8p_calico-system(0e18d57e-4106-4db0-b549-0f6c4c21f68a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7hj8p_calico-system(0e18d57e-4106-4db0-b549-0f6c4c21f68a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7hj8p" podUID="0e18d57e-4106-4db0-b549-0f6c4c21f68a" Jun 25 18:50:42.593915 kubelet[3235]: I0625 18:50:42.593635 3235 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:50:42.594831 containerd[1702]: time="2024-06-25T18:50:42.594306488Z" level=info msg="StopPodSandbox for \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\"" Jun 25 18:50:42.594831 containerd[1702]: time="2024-06-25T18:50:42.594543392Z" level=info msg="Ensure that sandbox 72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12 in task-service has been cleanup successfully" Jun 25 18:50:42.609892 kubelet[3235]: I0625 18:50:42.609858 3235 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:50:42.610737 containerd[1702]: time="2024-06-25T18:50:42.610702352Z" level=info msg="StopPodSandbox for \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\"" Jun 25 18:50:42.612014 containerd[1702]: time="2024-06-25T18:50:42.611849170Z" level=info msg="Ensure that sandbox 332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39 in task-service has been cleanup successfully" Jun 25 18:50:42.667516 containerd[1702]: time="2024-06-25T18:50:42.667456064Z" level=error msg="StopPodSandbox for \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\" failed" error="failed to destroy network for sandbox \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.667981 kubelet[3235]: E0625 18:50:42.667921 3235 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" podSandboxID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:50:42.668108 kubelet[3235]: E0625 18:50:42.667976 3235 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc"} Jun 25 18:50:42.668108 kubelet[3235]: E0625 18:50:42.668017 3235 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d53de7a9-dec5-41a7-8464-1fd397879242\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:50:42.668108 kubelet[3235]: E0625 18:50:42.668046 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d53de7a9-dec5-41a7-8464-1fd397879242\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bm8fl" podUID="d53de7a9-dec5-41a7-8464-1fd397879242" Jun 25 18:50:42.675495 containerd[1702]: time="2024-06-25T18:50:42.675403292Z" level=error msg="StopPodSandbox for \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\" failed" error="failed to destroy network for sandbox \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.676030 kubelet[3235]: E0625 18:50:42.675993 3235 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:50:42.676245 kubelet[3235]: E0625 18:50:42.676042 3235 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12"} Jun 25 18:50:42.676245 kubelet[3235]: E0625 18:50:42.676080 3235 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42e4e351-9586-4756-9f34-473dd41b7f92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:50:42.676245 kubelet[3235]: E0625 18:50:42.676116 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"42e4e351-9586-4756-9f34-473dd41b7f92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9txp5" podUID="42e4e351-9586-4756-9f34-473dd41b7f92" Jun 25 18:50:42.679795 containerd[1702]: time="2024-06-25T18:50:42.679756362Z" level=error msg="StopPodSandbox for \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\" failed" error="failed to destroy network for sandbox \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:42.680081 kubelet[3235]: E0625 18:50:42.680047 3235 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:50:42.680181 kubelet[3235]: E0625 18:50:42.680093 3235 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39"} Jun 25 18:50:42.680181 kubelet[3235]: E0625 18:50:42.680129 3235 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cdf8ad2a-e8cc-4207-ae7f-a4803ee52206\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:50:42.680181 kubelet[3235]: E0625 18:50:42.680157 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cdf8ad2a-e8cc-4207-ae7f-a4803ee52206\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69b7dfffc8-mbwz4" podUID="cdf8ad2a-e8cc-4207-ae7f-a4803ee52206" Jun 25 18:50:43.086781 kubelet[3235]: I0625 18:50:43.086463 3235 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:50:43.356130 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12-shm.mount: Deactivated successfully. Jun 25 18:50:43.356238 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39-shm.mount: Deactivated successfully. 
Jun 25 18:50:43.356318 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc-shm.mount: Deactivated successfully. Jun 25 18:50:43.612616 kubelet[3235]: I0625 18:50:43.612487 3235 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:50:43.614505 containerd[1702]: time="2024-06-25T18:50:43.613949574Z" level=info msg="StopPodSandbox for \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\"" Jun 25 18:50:43.614505 containerd[1702]: time="2024-06-25T18:50:43.614185878Z" level=info msg="Ensure that sandbox 502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb in task-service has been cleanup successfully" Jun 25 18:50:43.638340 containerd[1702]: time="2024-06-25T18:50:43.638292566Z" level=error msg="StopPodSandbox for \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\" failed" error="failed to destroy network for sandbox \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:50:43.638570 kubelet[3235]: E0625 18:50:43.638529 3235 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:50:43.638673 kubelet[3235]: E0625 18:50:43.638585 3235 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb"} Jun 25 18:50:43.638673 kubelet[3235]: E0625 18:50:43.638630 3235 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e18d57e-4106-4db0-b549-0f6c4c21f68a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:50:43.638673 kubelet[3235]: E0625 18:50:43.638660 3235 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e18d57e-4106-4db0-b549-0f6c4c21f68a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7hj8p" podUID="0e18d57e-4106-4db0-b549-0f6c4c21f68a" Jun 25 18:50:49.924919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457827739.mount: Deactivated successfully. 
Jun 25 18:50:49.969851 containerd[1702]: time="2024-06-25T18:50:49.969794682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:49.972919 containerd[1702]: time="2024-06-25T18:50:49.972836731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 18:50:49.977071 containerd[1702]: time="2024-06-25T18:50:49.976991499Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:49.983467 containerd[1702]: time="2024-06-25T18:50:49.983409704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:49.984147 containerd[1702]: time="2024-06-25T18:50:49.983986813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 7.424496384s" Jun 25 18:50:49.984147 containerd[1702]: time="2024-06-25T18:50:49.984029414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 18:50:50.001407 containerd[1702]: time="2024-06-25T18:50:50.001356097Z" level=info msg="CreateContainer within sandbox \"e4dd49b8dcceb62c90c2904cb38f511cc30afb8e48052fa44cc9db99bc16a21b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 18:50:50.055325 containerd[1702]: time="2024-06-25T18:50:50.055279076Z" level=info msg="CreateContainer within sandbox \"e4dd49b8dcceb62c90c2904cb38f511cc30afb8e48052fa44cc9db99bc16a21b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e05ea1a5374b95c4bf03ab387ef0c1828873f653d13a68d87dd27e110fd4af9b\"" Jun 25 18:50:50.057443 containerd[1702]: time="2024-06-25T18:50:50.055848586Z" level=info msg="StartContainer for \"e05ea1a5374b95c4bf03ab387ef0c1828873f653d13a68d87dd27e110fd4af9b\"" Jun 25 18:50:50.089069 systemd[1]: Started cri-containerd-e05ea1a5374b95c4bf03ab387ef0c1828873f653d13a68d87dd27e110fd4af9b.scope - libcontainer container e05ea1a5374b95c4bf03ab387ef0c1828873f653d13a68d87dd27e110fd4af9b. Jun 25 18:50:50.124260 containerd[1702]: time="2024-06-25T18:50:50.124208401Z" level=info msg="StartContainer for \"e05ea1a5374b95c4bf03ab387ef0c1828873f653d13a68d87dd27e110fd4af9b\" returns successfully" Jun 25 18:50:50.462319 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 18:50:50.462444 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 25 18:50:50.657551 kubelet[3235]: I0625 18:50:50.657469 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ptl54" podStartSLOduration=2.206160808 podStartE2EDuration="18.657444203s" podCreationTimestamp="2024-06-25 18:50:32 +0000 UTC" firstStartedPulling="2024-06-25 18:50:33.533695134 +0000 UTC m=+31.776063266" lastFinishedPulling="2024-06-25 18:50:49.984978429 +0000 UTC m=+48.227346661" observedRunningTime="2024-06-25 18:50:50.654732658 +0000 UTC m=+48.897100790" watchObservedRunningTime="2024-06-25 18:50:50.657444203 +0000 UTC m=+48.899812435" Jun 25 18:50:52.261946 systemd-networkd[1590]: vxlan.calico: Link UP Jun 25 18:50:52.261959 systemd-networkd[1590]: vxlan.calico: Gained carrier Jun 25 18:50:53.729030 systemd-networkd[1590]: vxlan.calico: Gained IPv6LL Jun 25 18:50:55.180010 update_engine[1683]: I0625 18:50:55.179965 1683 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 25 18:50:55.180010 update_engine[1683]: I0625 18:50:55.180006 1683 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 25 18:50:55.180653 update_engine[1683]: I0625 18:50:55.180202 1683 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 25 18:50:55.180769 update_engine[1683]: I0625 18:50:55.180742 1683 omaha_request_params.cc:62] Current group set to alpha Jun 25 18:50:55.181057 update_engine[1683]: I0625 18:50:55.180881 1683 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 25 18:50:55.181057 update_engine[1683]: I0625 18:50:55.180893 1683 update_attempter.cc:643] Scheduling an action processor start. Jun 25 18:50:55.181057 update_engine[1683]: I0625 18:50:55.180910 1683 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 25 18:50:55.181057 update_engine[1683]: I0625 18:50:55.180944 1683 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 25 18:50:55.181057 update_engine[1683]: I0625 18:50:55.181023 1683 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 25 18:50:55.181057 update_engine[1683]: I0625 18:50:55.181028 1683 omaha_request_action.cc:272] Request: Jun 25 18:50:55.181057 update_engine[1683]: Jun 25 18:50:55.181057 update_engine[1683]: Jun 25 18:50:55.181057 update_engine[1683]: Jun 25 18:50:55.181057 update_engine[1683]: Jun 25 18:50:55.181057 update_engine[1683]: Jun 25 18:50:55.181057 update_engine[1683]: Jun 25 18:50:55.181057 update_engine[1683]: Jun 25 18:50:55.181057 update_engine[1683]: Jun 25 18:50:55.181057 update_engine[1683]: I0625 18:50:55.181034 1683 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 18:50:55.181855 locksmithd[1715]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 25 18:50:55.182399 update_engine[1683]: I0625 18:50:55.182374 1683 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 18:50:55.182698 update_engine[1683]: I0625 18:50:55.182667 1683 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 25 18:50:55.202717 update_engine[1683]: E0625 18:50:55.202677 1683 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 18:50:55.202822 update_engine[1683]: I0625 18:50:55.202757 1683 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 25 18:50:55.397552 containerd[1702]: time="2024-06-25T18:50:55.397086948Z" level=info msg="StopPodSandbox for \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\"" Jun 25 18:50:55.398899 containerd[1702]: time="2024-06-25T18:50:55.398173567Z" level=info msg="StopPodSandbox for \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\"" Jun 25 18:50:55.398899 containerd[1702]: time="2024-06-25T18:50:55.398565673Z" level=info msg="StopPodSandbox for \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\"" Jun 25 18:50:55.576748 containerd[1702]: 2024-06-25 18:50:55.487 [INFO][4879] k8s.go 608: Cleaning up netns ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:50:55.576748 containerd[1702]: 2024-06-25 18:50:55.487 [INFO][4879] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" iface="eth0" netns="/var/run/netns/cni-7580db67-4a10-8c2f-cd4c-5b3fe341d53d" Jun 25 18:50:55.576748 containerd[1702]: 2024-06-25 18:50:55.488 [INFO][4879] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" iface="eth0" netns="/var/run/netns/cni-7580db67-4a10-8c2f-cd4c-5b3fe341d53d" Jun 25 18:50:55.576748 containerd[1702]: 2024-06-25 18:50:55.490 [INFO][4879] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" iface="eth0" netns="/var/run/netns/cni-7580db67-4a10-8c2f-cd4c-5b3fe341d53d" Jun 25 18:50:55.576748 containerd[1702]: 2024-06-25 18:50:55.491 [INFO][4879] k8s.go 615: Releasing IP address(es) ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:50:55.576748 containerd[1702]: 2024-06-25 18:50:55.491 [INFO][4879] utils.go 188: Calico CNI releasing IP address ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:50:55.576748 containerd[1702]: 2024-06-25 18:50:55.554 [INFO][4892] ipam_plugin.go 411: Releasing address using handleID ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" HandleID="k8s-pod-network.332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:50:55.576748 containerd[1702]: 2024-06-25 18:50:55.555 [INFO][4892] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:50:55.576748 containerd[1702]: 2024-06-25 18:50:55.555 [INFO][4892] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:50:55.576748 containerd[1702]: 2024-06-25 18:50:55.563 [WARNING][4892] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" HandleID="k8s-pod-network.332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:50:55.576748 containerd[1702]: 2024-06-25 18:50:55.563 [INFO][4892] ipam_plugin.go 439: Releasing address using workloadID ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" HandleID="k8s-pod-network.332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:50:55.576748 containerd[1702]: 2024-06-25 18:50:55.565 [INFO][4892] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:50:55.576748 containerd[1702]: 2024-06-25 18:50:55.569 [INFO][4879] k8s.go 621: Teardown processing complete. ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:50:55.582317 containerd[1702]: time="2024-06-25T18:50:55.576945879Z" level=info msg="TearDown network for sandbox \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\" successfully" Jun 25 18:50:55.582317 containerd[1702]: time="2024-06-25T18:50:55.577079381Z" level=info msg="StopPodSandbox for \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\" returns successfully" Jun 25 18:50:55.583196 containerd[1702]: time="2024-06-25T18:50:55.582631175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69b7dfffc8-mbwz4,Uid:cdf8ad2a-e8cc-4207-ae7f-a4803ee52206,Namespace:calico-system,Attempt:1,}" Jun 25 18:50:55.587701 systemd[1]: run-netns-cni\x2d7580db67\x2d4a10\x2d8c2f\x2dcd4c\x2d5b3fe341d53d.mount: Deactivated successfully. Jun 25 18:50:55.603426 containerd[1702]: 2024-06-25 18:50:55.490 [INFO][4871] k8s.go 608: Cleaning up netns ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:50:55.603426 containerd[1702]: 2024-06-25 18:50:55.491 [INFO][4871] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" iface="eth0" netns="/var/run/netns/cni-91420aa8-c3ff-e723-1618-074c897e1467" Jun 25 18:50:55.603426 containerd[1702]: 2024-06-25 18:50:55.491 [INFO][4871] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" iface="eth0" netns="/var/run/netns/cni-91420aa8-c3ff-e723-1618-074c897e1467" Jun 25 18:50:55.603426 containerd[1702]: 2024-06-25 18:50:55.493 [INFO][4871] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" iface="eth0" netns="/var/run/netns/cni-91420aa8-c3ff-e723-1618-074c897e1467" Jun 25 18:50:55.603426 containerd[1702]: 2024-06-25 18:50:55.493 [INFO][4871] k8s.go 615: Releasing IP address(es) ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:50:55.603426 containerd[1702]: 2024-06-25 18:50:55.493 [INFO][4871] utils.go 188: Calico CNI releasing IP address ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:50:55.603426 containerd[1702]: 2024-06-25 18:50:55.562 [INFO][4893] ipam_plugin.go 411: Releasing address using handleID ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" HandleID="k8s-pod-network.72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:50:55.603426 containerd[1702]: 2024-06-25 18:50:55.563 [INFO][4893] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:50:55.603426 containerd[1702]: 2024-06-25 18:50:55.566 [INFO][4893] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:50:55.603426 containerd[1702]: 2024-06-25 18:50:55.587 [WARNING][4893] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" HandleID="k8s-pod-network.72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:50:55.603426 containerd[1702]: 2024-06-25 18:50:55.587 [INFO][4893] ipam_plugin.go 439: Releasing address using workloadID ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" HandleID="k8s-pod-network.72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:50:55.603426 containerd[1702]: 2024-06-25 18:50:55.589 [INFO][4893] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:50:55.603426 containerd[1702]: 2024-06-25 18:50:55.598 [INFO][4871] k8s.go 621: Teardown processing complete. ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:50:55.606127 containerd[1702]: time="2024-06-25T18:50:55.603667929Z" level=info msg="TearDown network for sandbox \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\" successfully" Jun 25 18:50:55.606127 containerd[1702]: time="2024-06-25T18:50:55.603699430Z" level=info msg="StopPodSandbox for \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\" returns successfully" Jun 25 18:50:55.610331 containerd[1702]: time="2024-06-25T18:50:55.610008536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9txp5,Uid:42e4e351-9586-4756-9f34-473dd41b7f92,Namespace:kube-system,Attempt:1,}" Jun 25 18:50:55.618212 containerd[1702]: 2024-06-25 18:50:55.500 [INFO][4866] k8s.go 608: Cleaning up netns ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:50:55.618212 containerd[1702]: 2024-06-25 18:50:55.501 [INFO][4866] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" iface="eth0" netns="/var/run/netns/cni-6154d3c5-6937-2492-c10e-2c36842aecea" Jun 25 18:50:55.618212 containerd[1702]: 2024-06-25 18:50:55.502 [INFO][4866] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" iface="eth0" netns="/var/run/netns/cni-6154d3c5-6937-2492-c10e-2c36842aecea" Jun 25 18:50:55.618212 containerd[1702]: 2024-06-25 18:50:55.503 [INFO][4866] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" iface="eth0" netns="/var/run/netns/cni-6154d3c5-6937-2492-c10e-2c36842aecea" Jun 25 18:50:55.618212 containerd[1702]: 2024-06-25 18:50:55.504 [INFO][4866] k8s.go 615: Releasing IP address(es) ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:50:55.618212 containerd[1702]: 2024-06-25 18:50:55.504 [INFO][4866] utils.go 188: Calico CNI releasing IP address ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:50:55.618212 containerd[1702]: 2024-06-25 18:50:55.582 [INFO][4900] ipam_plugin.go 411: Releasing address using handleID ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" HandleID="k8s-pod-network.3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:50:55.618212 containerd[1702]: 2024-06-25 18:50:55.582 [INFO][4900] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:50:55.618212 containerd[1702]: 2024-06-25 18:50:55.589 [INFO][4900] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:50:55.618212 containerd[1702]: 2024-06-25 18:50:55.599 [WARNING][4900] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" HandleID="k8s-pod-network.3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:50:55.618212 containerd[1702]: 2024-06-25 18:50:55.599 [INFO][4900] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" HandleID="k8s-pod-network.3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:50:55.618212 containerd[1702]: 2024-06-25 18:50:55.606 [INFO][4900] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:50:55.618212 containerd[1702]: 2024-06-25 18:50:55.607 [INFO][4866] k8s.go 621: Teardown processing complete. 
ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:50:55.618212 containerd[1702]: time="2024-06-25T18:50:55.611017953Z" level=info msg="TearDown network for sandbox \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\" successfully" Jun 25 18:50:55.618212 containerd[1702]: time="2024-06-25T18:50:55.611044754Z" level=info msg="StopPodSandbox for \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\" returns successfully" Jun 25 18:50:55.618212 containerd[1702]: time="2024-06-25T18:50:55.611547462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bm8fl,Uid:d53de7a9-dec5-41a7-8464-1fd397879242,Namespace:kube-system,Attempt:1,}" Jun 25 18:50:55.613213 systemd[1]: run-netns-cni\x2d91420aa8\x2dc3ff\x2de723\x2d1618\x2d074c897e1467.mount: Deactivated successfully. Jun 25 18:50:55.620780 systemd[1]: run-netns-cni\x2d6154d3c5\x2d6937\x2d2492\x2dc10e\x2d2c36842aecea.mount: Deactivated successfully. Jun 25 18:50:55.951625 systemd-networkd[1590]: calie705342d5b4: Link UP Jun 25 18:50:55.953475 systemd-networkd[1590]: calie705342d5b4: Gained carrier Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.733 [INFO][4910] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0 calico-kube-controllers-69b7dfffc8- calico-system cdf8ad2a-e8cc-4207-ae7f-a4803ee52206 817 0 2024-06-25 18:50:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69b7dfffc8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4012.0.0-a-c5aaeb7e49 calico-kube-controllers-69b7dfffc8-mbwz4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie705342d5b4 [] []}} ContainerID="783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" Namespace="calico-system" Pod="calico-kube-controllers-69b7dfffc8-mbwz4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-" Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.733 [INFO][4910] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" Namespace="calico-system" Pod="calico-kube-controllers-69b7dfffc8-mbwz4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.856 [INFO][4944] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" HandleID="k8s-pod-network.783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.889 [INFO][4944] ipam_plugin.go 264: Auto assigning IP ContainerID="783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" HandleID="k8s-pod-network.783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027f550), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4012.0.0-a-c5aaeb7e49", "pod":"calico-kube-controllers-69b7dfffc8-mbwz4", "timestamp":"2024-06-25 18:50:55.856724394 +0000 UTC"}, Hostname:"ci-4012.0.0-a-c5aaeb7e49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.891 [INFO][4944] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.891 [INFO][4944] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.891 [INFO][4944] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-c5aaeb7e49' Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.895 [INFO][4944] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.907 [INFO][4944] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.919 [INFO][4944] ipam.go 489: Trying affinity for 192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.921 [INFO][4944] ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.927 [INFO][4944] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.928 [INFO][4944] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.931 [INFO][4944] ipam.go 1685: Creating new handle: k8s-pod-network.783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.936 [INFO][4944] ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.941 [INFO][4944] ipam.go 1216: Successfully claimed IPs: [192.168.81.1/26] block=192.168.81.0/26 handle="k8s-pod-network.783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.941 [INFO][4944] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.1/26] handle="k8s-pod-network.783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.941 [INFO][4944] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:50:55.980180 containerd[1702]: 2024-06-25 18:50:55.942 [INFO][4944] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.81.1/26] IPv6=[] ContainerID="783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" HandleID="k8s-pod-network.783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:50:55.982957 containerd[1702]: 2024-06-25 18:50:55.944 [INFO][4910] k8s.go 386: Populated endpoint ContainerID="783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" Namespace="calico-system" Pod="calico-kube-controllers-69b7dfffc8-mbwz4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0", GenerateName:"calico-kube-controllers-69b7dfffc8-", Namespace:"calico-system", SelfLink:"", UID:"cdf8ad2a-e8cc-4207-ae7f-a4803ee52206", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69b7dfffc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"", Pod:"calico-kube-controllers-69b7dfffc8-mbwz4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie705342d5b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:50:55.982957 containerd[1702]: 2024-06-25 18:50:55.944 [INFO][4910] k8s.go 387: Calico CNI using IPs: [192.168.81.1/32] ContainerID="783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" Namespace="calico-system" Pod="calico-kube-controllers-69b7dfffc8-mbwz4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:50:55.982957 containerd[1702]: 2024-06-25 18:50:55.944 [INFO][4910] dataplane_linux.go 68: Setting the host side veth name to calie705342d5b4 ContainerID="783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" Namespace="calico-system" Pod="calico-kube-controllers-69b7dfffc8-mbwz4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:50:55.982957 containerd[1702]: 2024-06-25 18:50:55.952 [INFO][4910] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" Namespace="calico-system" Pod="calico-kube-controllers-69b7dfffc8-mbwz4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:50:55.982957 containerd[1702]: 2024-06-25 18:50:55.953 [INFO][4910] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" Namespace="calico-system" Pod="calico-kube-controllers-69b7dfffc8-mbwz4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0", GenerateName:"calico-kube-controllers-69b7dfffc8-", Namespace:"calico-system", SelfLink:"", UID:"cdf8ad2a-e8cc-4207-ae7f-a4803ee52206", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69b7dfffc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a", Pod:"calico-kube-controllers-69b7dfffc8-mbwz4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie705342d5b4", MAC:"02:06:bb:99:3b:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:50:55.982957 containerd[1702]: 2024-06-25 18:50:55.972 [INFO][4910] k8s.go 500: Wrote updated endpoint to datastore ContainerID="783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a" Namespace="calico-system" Pod="calico-kube-controllers-69b7dfffc8-mbwz4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:50:56.019162 systemd-networkd[1590]: cali5d20d6e30aa: Link UP Jun 25 18:50:56.020443 systemd-networkd[1590]: cali5d20d6e30aa: Gained carrier Jun 25 18:50:56.051977 containerd[1702]: time="2024-06-25T18:50:56.051270772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:50:56.051977 containerd[1702]: time="2024-06-25T18:50:56.051340173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:56.051977 containerd[1702]: time="2024-06-25T18:50:56.051361273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:50:56.051977 containerd[1702]: time="2024-06-25T18:50:56.051386774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.809 [INFO][4922] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0 coredns-7db6d8ff4d- kube-system 42e4e351-9586-4756-9f34-473dd41b7f92 818 0 2024-06-25 18:50:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-a-c5aaeb7e49 coredns-7db6d8ff4d-9txp5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5d20d6e30aa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9txp5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-" Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.810 [INFO][4922] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9txp5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.908 [INFO][4955] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" HandleID="k8s-pod-network.bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.925 [INFO][4955] ipam_plugin.go 264: Auto assigning IP ContainerID="bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" HandleID="k8s-pod-network.bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a2e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-a-c5aaeb7e49", "pod":"coredns-7db6d8ff4d-9txp5", "timestamp":"2024-06-25 18:50:55.90873937 +0000 UTC"}, Hostname:"ci-4012.0.0-a-c5aaeb7e49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.925 [INFO][4955] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.942 [INFO][4955] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.942 [INFO][4955] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-c5aaeb7e49' Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.947 [INFO][4955] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.955 [INFO][4955] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.971 [INFO][4955] ipam.go 489: Trying affinity for 192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.981 [INFO][4955] ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.989 [INFO][4955] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.989 [INFO][4955] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.991 [INFO][4955] ipam.go 1685: Creating new handle: k8s-pod-network.bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6 Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:55.997 [INFO][4955] ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:56.006 [INFO][4955] ipam.go 1216: Successfully claimed IPs: [192.168.81.2/26] block=192.168.81.0/26 handle="k8s-pod-network.bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:56.007 [INFO][4955] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.2/26] handle="k8s-pod-network.bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:56.007 [INFO][4955] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:50:56.081593 containerd[1702]: 2024-06-25 18:50:56.007 [INFO][4955] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.81.2/26] IPv6=[] ContainerID="bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" HandleID="k8s-pod-network.bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:50:56.082633 containerd[1702]: 2024-06-25 18:50:56.014 [INFO][4922] k8s.go 386: Populated endpoint ContainerID="bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9txp5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"42e4e351-9586-4756-9f34-473dd41b7f92", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"", Pod:"coredns-7db6d8ff4d-9txp5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d20d6e30aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:50:56.082633 containerd[1702]: 2024-06-25 18:50:56.014 [INFO][4922] k8s.go 387: Calico CNI using IPs: [192.168.81.2/32] ContainerID="bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9txp5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:50:56.082633 containerd[1702]: 2024-06-25 18:50:56.014 [INFO][4922] dataplane_linux.go 68: Setting the host side veth name to cali5d20d6e30aa ContainerID="bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9txp5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:50:56.082633 containerd[1702]: 2024-06-25 18:50:56.020 [INFO][4922] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9txp5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" 
Jun 25 18:50:56.082633 containerd[1702]: 2024-06-25 18:50:56.022 [INFO][4922] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9txp5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"42e4e351-9586-4756-9f34-473dd41b7f92", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6", Pod:"coredns-7db6d8ff4d-9txp5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d20d6e30aa", MAC:"92:da:57:57:45:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:50:56.082633 containerd[1702]: 2024-06-25 18:50:56.045 [INFO][4922] k8s.go 500: Wrote updated endpoint to datastore ContainerID="bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9txp5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:50:56.084358 systemd[1]: Started cri-containerd-783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a.scope - libcontainer container 783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a. Jun 25 18:50:56.118319 systemd-networkd[1590]: caliad595b1a551: Link UP Jun 25 18:50:56.118633 systemd-networkd[1590]: caliad595b1a551: Gained carrier Jun 25 18:50:56.147999 containerd[1702]: time="2024-06-25T18:50:56.147820099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:50:56.150411 containerd[1702]: time="2024-06-25T18:50:56.147888400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:56.150411 containerd[1702]: time="2024-06-25T18:50:56.149118721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:50:56.150411 containerd[1702]: time="2024-06-25T18:50:56.149142221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:55.845 [INFO][4928] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0 coredns-7db6d8ff4d- kube-system d53de7a9-dec5-41a7-8464-1fd397879242 819 0 2024-06-25 18:50:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-a-c5aaeb7e49 coredns-7db6d8ff4d-bm8fl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad595b1a551 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bm8fl" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-" Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:55.845 [INFO][4928] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bm8fl" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:55.921 [INFO][4960] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" HandleID="k8s-pod-network.7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:55.933 [INFO][4960] ipam_plugin.go 264: Auto assigning IP ContainerID="7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" HandleID="k8s-pod-network.7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059ae10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-a-c5aaeb7e49", "pod":"coredns-7db6d8ff4d-bm8fl", "timestamp":"2024-06-25 18:50:55.921982393 +0000 UTC"}, Hostname:"ci-4012.0.0-a-c5aaeb7e49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:55.934 [INFO][4960] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.007 [INFO][4960] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.007 [INFO][4960] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-c5aaeb7e49' Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.010 [INFO][4960] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.021 [INFO][4960] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.034 [INFO][4960] ipam.go 489: Trying affinity for 192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.041 [INFO][4960] ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.071 [INFO][4960] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.079 [INFO][4960] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.089 [INFO][4960] ipam.go 1685: Creating new handle: k8s-pod-network.7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7 Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.097 [INFO][4960] ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.105 [INFO][4960] ipam.go 1216: Successfully claimed IPs: [192.168.81.3/26] block=192.168.81.0/26 handle="k8s-pod-network.7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.105 [INFO][4960] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.3/26] handle="k8s-pod-network.7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.105 [INFO][4960] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:50:56.150411 containerd[1702]: 2024-06-25 18:50:56.105 [INFO][4960] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.81.3/26] IPv6=[] ContainerID="7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" HandleID="k8s-pod-network.7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:50:56.151719 containerd[1702]: 2024-06-25 18:50:56.107 [INFO][4928] k8s.go 386: Populated endpoint ContainerID="7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bm8fl" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d53de7a9-dec5-41a7-8464-1fd397879242", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"", Pod:"coredns-7db6d8ff4d-bm8fl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad595b1a551", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:50:56.151719 containerd[1702]: 2024-06-25 18:50:56.108 [INFO][4928] k8s.go 387: Calico CNI using IPs: [192.168.81.3/32] ContainerID="7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bm8fl" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:50:56.151719 containerd[1702]: 2024-06-25 18:50:56.108 [INFO][4928] dataplane_linux.go 68: Setting the host side veth name to caliad595b1a551 ContainerID="7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bm8fl" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:50:56.151719 containerd[1702]: 2024-06-25 18:50:56.121 [INFO][4928] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bm8fl" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" 
Jun 25 18:50:56.151719 containerd[1702]: 2024-06-25 18:50:56.125 [INFO][4928] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bm8fl" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d53de7a9-dec5-41a7-8464-1fd397879242", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7", Pod:"coredns-7db6d8ff4d-bm8fl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad595b1a551", MAC:"be:c9:68:da:12:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:50:56.151719 containerd[1702]: 2024-06-25 18:50:56.141 [INFO][4928] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bm8fl" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:50:56.193951 systemd[1]: Started cri-containerd-bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6.scope - libcontainer container bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6. Jun 25 18:50:56.227758 containerd[1702]: time="2024-06-25T18:50:56.227544742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:50:56.227758 containerd[1702]: time="2024-06-25T18:50:56.227626544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:56.228510 containerd[1702]: time="2024-06-25T18:50:56.227705345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:50:56.231435 containerd[1702]: time="2024-06-25T18:50:56.229692578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:56.267831 systemd[1]: Started cri-containerd-7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7.scope - libcontainer container 7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7. Jun 25 18:50:56.285553 containerd[1702]: time="2024-06-25T18:50:56.285506019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9txp5,Uid:42e4e351-9586-4756-9f34-473dd41b7f92,Namespace:kube-system,Attempt:1,} returns sandbox id \"bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6\"" Jun 25 18:50:56.293232 containerd[1702]: time="2024-06-25T18:50:56.293186848Z" level=info msg="CreateContainer within sandbox \"bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:50:56.339836 containerd[1702]: time="2024-06-25T18:50:56.339786833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69b7dfffc8-mbwz4,Uid:cdf8ad2a-e8cc-4207-ae7f-a4803ee52206,Namespace:calico-system,Attempt:1,} returns sandbox id \"783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a\"" Jun 25 18:50:56.343994 containerd[1702]: time="2024-06-25T18:50:56.343959404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 18:50:56.368810 containerd[1702]: time="2024-06-25T18:50:56.368648820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bm8fl,Uid:d53de7a9-dec5-41a7-8464-1fd397879242,Namespace:kube-system,Attempt:1,} returns sandbox id \"7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7\"" Jun 25 18:50:56.374089 containerd[1702]: time="2024-06-25T18:50:56.373719605Z" level=info msg="CreateContainer within sandbox \"bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"efbdc2100fc3a387342eed93f53953b2dbb2b8e4407865d5bbdca7cb36410a8a\"" Jun 25 18:50:56.374695 containerd[1702]: time="2024-06-25T18:50:56.374506619Z" level=info msg="CreateContainer within sandbox \"7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:50:56.376997 containerd[1702]: time="2024-06-25T18:50:56.376886259Z" level=info msg="StartContainer for \"efbdc2100fc3a387342eed93f53953b2dbb2b8e4407865d5bbdca7cb36410a8a\"" Jun 25 18:50:56.434047 systemd[1]: Started cri-containerd-efbdc2100fc3a387342eed93f53953b2dbb2b8e4407865d5bbdca7cb36410a8a.scope - libcontainer container efbdc2100fc3a387342eed93f53953b2dbb2b8e4407865d5bbdca7cb36410a8a. 
Jun 25 18:50:56.447929 containerd[1702]: time="2024-06-25T18:50:56.447702352Z" level=info msg="CreateContainer within sandbox \"7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3abfe33a2edffac4373ffd26151337dc402afd0d82776f50b195ab35ebe68b7b\"" Jun 25 18:50:56.449635 containerd[1702]: time="2024-06-25T18:50:56.449598284Z" level=info msg="StartContainer for \"3abfe33a2edffac4373ffd26151337dc402afd0d82776f50b195ab35ebe68b7b\"" Jun 25 18:50:56.488018 containerd[1702]: time="2024-06-25T18:50:56.487273719Z" level=info msg="StartContainer for \"efbdc2100fc3a387342eed93f53953b2dbb2b8e4407865d5bbdca7cb36410a8a\" returns successfully" Jun 25 18:50:56.503088 systemd[1]: Started cri-containerd-3abfe33a2edffac4373ffd26151337dc402afd0d82776f50b195ab35ebe68b7b.scope - libcontainer container 3abfe33a2edffac4373ffd26151337dc402afd0d82776f50b195ab35ebe68b7b. Jun 25 18:50:56.557777 containerd[1702]: time="2024-06-25T18:50:56.557707106Z" level=info msg="StartContainer for \"3abfe33a2edffac4373ffd26151337dc402afd0d82776f50b195ab35ebe68b7b\" returns successfully" Jun 25 18:50:56.667097 kubelet[3235]: I0625 18:50:56.666765 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bm8fl" podStartSLOduration=40.666743243 podStartE2EDuration="40.666743243s" podCreationTimestamp="2024-06-25 18:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:50:56.66360559 +0000 UTC m=+54.905973722" watchObservedRunningTime="2024-06-25 18:50:56.666743243 +0000 UTC m=+54.909111375" Jun 25 18:50:57.313122 systemd-networkd[1590]: caliad595b1a551: Gained IPv6LL Jun 25 18:50:57.697294 systemd-networkd[1590]: calie705342d5b4: Gained IPv6LL Jun 25 18:50:57.825047 systemd-networkd[1590]: cali5d20d6e30aa: Gained IPv6LL Jun 25 18:50:58.399207 containerd[1702]: time="2024-06-25T18:50:58.399035033Z" level=info msg="StopPodSandbox for \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\"" Jun 25 18:50:58.451920 kubelet[3235]: I0625 18:50:58.450471 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9txp5" podStartSLOduration=42.4504422 podStartE2EDuration="42.4504422s" podCreationTimestamp="2024-06-25 18:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:50:56.712046806 +0000 UTC m=+54.954414938" watchObservedRunningTime="2024-06-25 18:50:58.4504422 +0000 UTC m=+56.692810432" Jun 25 18:50:58.501649 containerd[1702]: 2024-06-25 18:50:58.453 [INFO][5238] k8s.go 608: Cleaning up netns ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:50:58.501649 containerd[1702]: 2024-06-25 18:50:58.453 [INFO][5238] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" iface="eth0" netns="/var/run/netns/cni-2af0d96c-0f21-7779-23cd-04d2b2946628" Jun 25 18:50:58.501649 containerd[1702]: 2024-06-25 18:50:58.454 [INFO][5238] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" iface="eth0" netns="/var/run/netns/cni-2af0d96c-0f21-7779-23cd-04d2b2946628" Jun 25 18:50:58.501649 containerd[1702]: 2024-06-25 18:50:58.455 [INFO][5238] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" iface="eth0" netns="/var/run/netns/cni-2af0d96c-0f21-7779-23cd-04d2b2946628" Jun 25 18:50:58.501649 containerd[1702]: 2024-06-25 18:50:58.455 [INFO][5238] k8s.go 615: Releasing IP address(es) ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:50:58.501649 containerd[1702]: 2024-06-25 18:50:58.455 [INFO][5238] utils.go 188: Calico CNI releasing IP address ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:50:58.501649 containerd[1702]: 2024-06-25 18:50:58.492 [INFO][5245] ipam_plugin.go 411: Releasing address using handleID ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" HandleID="k8s-pod-network.502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:50:58.501649 containerd[1702]: 2024-06-25 18:50:58.493 [INFO][5245] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:50:58.501649 containerd[1702]: 2024-06-25 18:50:58.493 [INFO][5245] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:50:58.501649 containerd[1702]: 2024-06-25 18:50:58.498 [WARNING][5245] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" HandleID="k8s-pod-network.502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:50:58.501649 containerd[1702]: 2024-06-25 18:50:58.498 [INFO][5245] ipam_plugin.go 439: Releasing address using workloadID ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" HandleID="k8s-pod-network.502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:50:58.501649 containerd[1702]: 2024-06-25 18:50:58.499 [INFO][5245] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:50:58.501649 containerd[1702]: 2024-06-25 18:50:58.500 [INFO][5238] k8s.go 621: Teardown processing complete. ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:50:58.502702 containerd[1702]: time="2024-06-25T18:50:58.502543277Z" level=info msg="TearDown network for sandbox \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\" successfully" Jun 25 18:50:58.502702 containerd[1702]: time="2024-06-25T18:50:58.502582978Z" level=info msg="StopPodSandbox for \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\" returns successfully" Jun 25 18:50:58.504948 containerd[1702]: time="2024-06-25T18:50:58.504668113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7hj8p,Uid:0e18d57e-4106-4db0-b549-0f6c4c21f68a,Namespace:calico-system,Attempt:1,}" Jun 25 18:50:58.507019 systemd[1]: run-netns-cni\x2d2af0d96c\x2d0f21\x2d7779\x2d23cd\x2d04d2b2946628.mount: Deactivated successfully. 
Jun 25 18:50:58.665703 systemd-networkd[1590]: cali54b739e5311: Link UP Jun 25 18:50:58.667604 systemd-networkd[1590]: cali54b739e5311: Gained carrier Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.593 [INFO][5251] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0 csi-node-driver- calico-system 0e18d57e-4106-4db0-b549-0f6c4c21f68a 854 0 2024-06-25 18:50:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4012.0.0-a-c5aaeb7e49 csi-node-driver-7hj8p eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali54b739e5311 [] []}} ContainerID="c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" Namespace="calico-system" Pod="csi-node-driver-7hj8p" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-" Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.593 [INFO][5251] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" Namespace="calico-system" Pod="csi-node-driver-7hj8p" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.629 [INFO][5262] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" HandleID="k8s-pod-network.c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.638 [INFO][5262] ipam_plugin.go 264: Auto assigning IP ContainerID="c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" HandleID="k8s-pod-network.c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a320), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-a-c5aaeb7e49", "pod":"csi-node-driver-7hj8p", "timestamp":"2024-06-25 18:50:58.629968225 +0000 UTC"}, Hostname:"ci-4012.0.0-a-c5aaeb7e49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.638 [INFO][5262] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.638 [INFO][5262] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.638 [INFO][5262] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-c5aaeb7e49' Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.639 [INFO][5262] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.642 [INFO][5262] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.646 [INFO][5262] ipam.go 489: Trying affinity for 192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.647 [INFO][5262] ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.649 [INFO][5262] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.649 [INFO][5262] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.650 [INFO][5262] ipam.go 1685: Creating new handle: k8s-pod-network.c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703 Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.653 [INFO][5262] ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.659 [INFO][5262] ipam.go 1216: Successfully claimed IPs: [192.168.81.4/26] block=192.168.81.0/26 handle="k8s-pod-network.c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.659 [INFO][5262] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.4/26] handle="k8s-pod-network.c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.659 [INFO][5262] ipam_plugin.go 373: Released host-wide IPAM lock. 
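The IPAM walk above is the interesting part of this chunk: the plugin confirms the node's affinity for the block 192.168.81.0/26, loads it, and claims the next free address, 192.168.81.4 (.1 through .3 in this block already belong to calico-kube-controllers and the two coredns pods that appear later in this log). A stripped-down sketch of just that block arithmetic using net/netip — not Calico's allocator, which also tracks handles, affinities and reservations in the datastore:

package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks an IPAM block and returns the first address that is not
// already claimed, skipping the network address itself.
func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Masked().Addr().Next(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.81.0/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.81.1"): true, // calico-kube-controllers-69b7dfffc8-mbwz4
		netip.MustParseAddr("192.168.81.2"): true, // coredns-7db6d8ff4d-9txp5
		netip.MustParseAddr("192.168.81.3"): true, // coredns-7db6d8ff4d-bm8fl
	}
	if a, ok := firstFree(block, used); ok {
		fmt.Println("next address:", a) // 192.168.81.4, as claimed in the log
	}
}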
Jun 25 18:50:58.689027 containerd[1702]: 2024-06-25 18:50:58.659 [INFO][5262] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.81.4/26] IPv6=[] ContainerID="c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" HandleID="k8s-pod-network.c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:50:58.690096 containerd[1702]: 2024-06-25 18:50:58.662 [INFO][5251] k8s.go 386: Populated endpoint ContainerID="c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" Namespace="calico-system" Pod="csi-node-driver-7hj8p" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e18d57e-4106-4db0-b549-0f6c4c21f68a", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"", Pod:"csi-node-driver-7hj8p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.81.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali54b739e5311", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:50:58.690096 containerd[1702]: 2024-06-25 18:50:58.662 [INFO][5251] k8s.go 387: Calico CNI using IPs: [192.168.81.4/32] ContainerID="c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" Namespace="calico-system" Pod="csi-node-driver-7hj8p" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:50:58.690096 containerd[1702]: 2024-06-25 18:50:58.662 [INFO][5251] dataplane_linux.go 68: Setting the host side veth name to cali54b739e5311 ContainerID="c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" Namespace="calico-system" Pod="csi-node-driver-7hj8p" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:50:58.690096 containerd[1702]: 2024-06-25 18:50:58.666 [INFO][5251] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" Namespace="calico-system" Pod="csi-node-driver-7hj8p" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:50:58.690096 containerd[1702]: 2024-06-25 18:50:58.666 [INFO][5251] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" Namespace="calico-system" Pod="csi-node-driver-7hj8p" 
WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e18d57e-4106-4db0-b549-0f6c4c21f68a", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703", Pod:"csi-node-driver-7hj8p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.81.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali54b739e5311", MAC:"32:92:a0:53:20:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:50:58.690096 containerd[1702]: 2024-06-25 18:50:58.685 [INFO][5251] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703" Namespace="calico-system" Pod="csi-node-driver-7hj8p" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:50:58.973716 containerd[1702]: time="2024-06-25T18:50:58.972887403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:50:58.973716 containerd[1702]: time="2024-06-25T18:50:58.972975305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:58.973716 containerd[1702]: time="2024-06-25T18:50:58.973001205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:50:58.973716 containerd[1702]: time="2024-06-25T18:50:58.973039806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:50:59.008068 systemd[1]: Started cri-containerd-c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703.scope - libcontainer container c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703. 
Jun 25 18:50:59.053448 containerd[1702]: time="2024-06-25T18:50:59.053403260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7hj8p,Uid:0e18d57e-4106-4db0-b549-0f6c4c21f68a,Namespace:calico-system,Attempt:1,} returns sandbox id \"c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703\"" Jun 25 18:50:59.838802 containerd[1702]: time="2024-06-25T18:50:59.838749493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:59.841361 containerd[1702]: time="2024-06-25T18:50:59.841297036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 18:50:59.845551 containerd[1702]: time="2024-06-25T18:50:59.845496607Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:59.850656 containerd[1702]: time="2024-06-25T18:50:59.850587593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:50:59.851782 containerd[1702]: time="2024-06-25T18:50:59.851273705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.5072705s" Jun 25 18:50:59.851782 containerd[1702]: time="2024-06-25T18:50:59.851311405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 18:50:59.854143 containerd[1702]: time="2024-06-25T18:50:59.854114152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:50:59.867247 containerd[1702]: time="2024-06-25T18:50:59.867096971Z" level=info msg="CreateContainer within sandbox \"783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 18:50:59.908637 containerd[1702]: time="2024-06-25T18:50:59.908584570Z" level=info msg="CreateContainer within sandbox \"783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8eb997095de45758dd56b5bf36711f8a44f1b793d30e8a7c95dbd5682889ce3e\"" Jun 25 18:50:59.909317 containerd[1702]: time="2024-06-25T18:50:59.909229981Z" level=info msg="StartContainer for \"8eb997095de45758dd56b5bf36711f8a44f1b793d30e8a7c95dbd5682889ce3e\"" Jun 25 18:50:59.941025 systemd[1]: Started cri-containerd-8eb997095de45758dd56b5bf36711f8a44f1b793d30e8a7c95dbd5682889ce3e.scope - libcontainer container 8eb997095de45758dd56b5bf36711f8a44f1b793d30e8a7c95dbd5682889ce3e. 
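This stretch is kubelet's CRI path: containerd finishes pulling ghcr.io/flatcar/calico/kube-controllers:v3.28.0, CreateContainer places the new container in the already-running sandbox 783af064..., and a cri-containerd-*.scope transient unit is started for the runc task. The same pull/create/start lifecycle can be driven directly with containerd's native Go client; a rough sketch only — this is not the CRI code path kubelet uses, and the socket path, namespace and error handling are simplified:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.28.0",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "kube-controllers-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("kube-controllers-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask creates the runc task; for CRI-managed pods on a systemd host
	// this is the point where a cri-containerd-<id>.scope unit appears.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("started", container.ID())
}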
Jun 25 18:50:59.984899 containerd[1702]: time="2024-06-25T18:50:59.984773154Z" level=info msg="StartContainer for \"8eb997095de45758dd56b5bf36711f8a44f1b793d30e8a7c95dbd5682889ce3e\" returns successfully" Jun 25 18:51:00.641124 systemd-networkd[1590]: cali54b739e5311: Gained IPv6LL Jun 25 18:51:00.687342 kubelet[3235]: I0625 18:51:00.686019 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69b7dfffc8-mbwz4" podStartSLOduration=34.176648535 podStartE2EDuration="37.68599257s" podCreationTimestamp="2024-06-25 18:50:23 +0000 UTC" firstStartedPulling="2024-06-25 18:50:56.342673382 +0000 UTC m=+54.585041514" lastFinishedPulling="2024-06-25 18:50:59.852017417 +0000 UTC m=+58.094385549" observedRunningTime="2024-06-25 18:51:00.683572829 +0000 UTC m=+58.925940961" watchObservedRunningTime="2024-06-25 18:51:00.68599257 +0000 UTC m=+58.928360702" Jun 25 18:51:01.899229 containerd[1702]: time="2024-06-25T18:51:01.899164313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:51:01.901964 containerd[1702]: time="2024-06-25T18:51:01.901841758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 18:51:01.909335 containerd[1702]: time="2024-06-25T18:51:01.909300484Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:51:01.915237 containerd[1702]: time="2024-06-25T18:51:01.915183083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:51:01.915908 containerd[1702]: time="2024-06-25T18:51:01.915773893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.061485338s" Jun 25 18:51:01.915908 containerd[1702]: time="2024-06-25T18:51:01.915814193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 18:51:01.918373 containerd[1702]: time="2024-06-25T18:51:01.918333136Z" level=info msg="CreateContainer within sandbox \"c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:51:01.988884 containerd[1702]: time="2024-06-25T18:51:01.988830624Z" level=info msg="CreateContainer within sandbox \"c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9ab26a3e43967ca4026d6b5248f09d1c5ba9048b5f5ef3d7dda00799cd5cb0b7\"" Jun 25 18:51:01.989517 containerd[1702]: time="2024-06-25T18:51:01.989416134Z" level=info msg="StartContainer for \"9ab26a3e43967ca4026d6b5248f09d1c5ba9048b5f5ef3d7dda00799cd5cb0b7\"" Jun 25 18:51:02.024388 systemd[1]: run-containerd-runc-k8s.io-9ab26a3e43967ca4026d6b5248f09d1c5ba9048b5f5ef3d7dda00799cd5cb0b7-runc.T4vPl6.mount: Deactivated successfully. 
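The kubelet pod_startup_latency_tracker line above is worth decoding: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted, which is why it comes out roughly 3.5 s shorter. The arithmetic can be checked directly from the timestamps in the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the kubelet line above
	// (the monotonic "m=+..." suffix is dropped).
	created := parse("2024-06-25 18:50:23 +0000 UTC")
	firstPull := parse("2024-06-25 18:50:56.342673382 +0000 UTC")
	lastPull := parse("2024-06-25 18:50:59.852017417 +0000 UTC")
	running := parse("2024-06-25 18:51:00.68599257 +0000 UTC")

	e2e := running.Sub(created)          // reported as podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // reported as podStartSLOduration

	fmt.Println("E2E:", e2e) // 37.68599257s
	fmt.Println("SLO:", slo) // 34.176648535s
}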
Jun 25 18:51:02.033037 systemd[1]: Started cri-containerd-9ab26a3e43967ca4026d6b5248f09d1c5ba9048b5f5ef3d7dda00799cd5cb0b7.scope - libcontainer container 9ab26a3e43967ca4026d6b5248f09d1c5ba9048b5f5ef3d7dda00799cd5cb0b7. Jun 25 18:51:02.066056 containerd[1702]: time="2024-06-25T18:51:02.066003624Z" level=info msg="StartContainer for \"9ab26a3e43967ca4026d6b5248f09d1c5ba9048b5f5ef3d7dda00799cd5cb0b7\" returns successfully" Jun 25 18:51:02.067262 containerd[1702]: time="2024-06-25T18:51:02.067224545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:51:02.371627 containerd[1702]: time="2024-06-25T18:51:02.371579355Z" level=info msg="StopPodSandbox for \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\"" Jun 25 18:51:02.492178 containerd[1702]: 2024-06-25 18:51:02.422 [WARNING][5432] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"42e4e351-9586-4756-9f34-473dd41b7f92", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6", Pod:"coredns-7db6d8ff4d-9txp5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d20d6e30aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:51:02.492178 containerd[1702]: 2024-06-25 18:51:02.427 [INFO][5432] k8s.go 608: Cleaning up netns ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:51:02.492178 containerd[1702]: 2024-06-25 18:51:02.427 [INFO][5432] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" iface="eth0" netns="" Jun 25 18:51:02.492178 containerd[1702]: 2024-06-25 18:51:02.427 [INFO][5432] k8s.go 615: Releasing IP address(es) ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:51:02.492178 containerd[1702]: 2024-06-25 18:51:02.427 [INFO][5432] utils.go 188: Calico CNI releasing IP address ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:51:02.492178 containerd[1702]: 2024-06-25 18:51:02.468 [INFO][5440] ipam_plugin.go 411: Releasing address using handleID ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" HandleID="k8s-pod-network.72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:51:02.492178 containerd[1702]: 2024-06-25 18:51:02.468 [INFO][5440] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:51:02.492178 containerd[1702]: 2024-06-25 18:51:02.468 [INFO][5440] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:51:02.492178 containerd[1702]: 2024-06-25 18:51:02.479 [WARNING][5440] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" HandleID="k8s-pod-network.72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:51:02.492178 containerd[1702]: 2024-06-25 18:51:02.479 [INFO][5440] ipam_plugin.go 439: Releasing address using workloadID ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" HandleID="k8s-pod-network.72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:51:02.492178 containerd[1702]: 2024-06-25 18:51:02.485 [INFO][5440] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:51:02.492178 containerd[1702]: 2024-06-25 18:51:02.489 [INFO][5432] k8s.go 621: Teardown processing complete. ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:51:02.494572 containerd[1702]: time="2024-06-25T18:51:02.492225880Z" level=info msg="TearDown network for sandbox \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\" successfully" Jun 25 18:51:02.494572 containerd[1702]: time="2024-06-25T18:51:02.492255881Z" level=info msg="StopPodSandbox for \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\" returns successfully" Jun 25 18:51:02.494572 containerd[1702]: time="2024-06-25T18:51:02.493058894Z" level=info msg="RemovePodSandbox for \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\"" Jun 25 18:51:02.494572 containerd[1702]: time="2024-06-25T18:51:02.493092495Z" level=info msg="Forcibly stopping sandbox \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\"" Jun 25 18:51:02.599326 containerd[1702]: 2024-06-25 18:51:02.556 [WARNING][5464] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"42e4e351-9586-4756-9f34-473dd41b7f92", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"bf67c34e9aa76de68aa2cdc6b4567036ca3a54e3b031a1fb2e345fed92ea0ef6", Pod:"coredns-7db6d8ff4d-9txp5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d20d6e30aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:51:02.599326 containerd[1702]: 2024-06-25 18:51:02.556 [INFO][5464] k8s.go 608: Cleaning up netns ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:51:02.599326 containerd[1702]: 2024-06-25 18:51:02.556 [INFO][5464] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" iface="eth0" netns="" Jun 25 18:51:02.599326 containerd[1702]: 2024-06-25 18:51:02.556 [INFO][5464] k8s.go 615: Releasing IP address(es) ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:51:02.599326 containerd[1702]: 2024-06-25 18:51:02.556 [INFO][5464] utils.go 188: Calico CNI releasing IP address ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:51:02.599326 containerd[1702]: 2024-06-25 18:51:02.584 [INFO][5471] ipam_plugin.go 411: Releasing address using handleID ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" HandleID="k8s-pod-network.72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:51:02.599326 containerd[1702]: 2024-06-25 18:51:02.584 [INFO][5471] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:51:02.599326 containerd[1702]: 2024-06-25 18:51:02.584 [INFO][5471] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:51:02.599326 containerd[1702]: 2024-06-25 18:51:02.595 [WARNING][5471] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" HandleID="k8s-pod-network.72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:51:02.599326 containerd[1702]: 2024-06-25 18:51:02.595 [INFO][5471] ipam_plugin.go 439: Releasing address using workloadID ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" HandleID="k8s-pod-network.72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--9txp5-eth0" Jun 25 18:51:02.599326 containerd[1702]: 2024-06-25 18:51:02.596 [INFO][5471] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:51:02.599326 containerd[1702]: 2024-06-25 18:51:02.598 [INFO][5464] k8s.go 621: Teardown processing complete. ContainerID="72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12" Jun 25 18:51:02.600023 containerd[1702]: time="2024-06-25T18:51:02.599358179Z" level=info msg="TearDown network for sandbox \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\" successfully" Jun 25 18:51:02.613953 containerd[1702]: time="2024-06-25T18:51:02.613497616Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:51:02.613953 containerd[1702]: time="2024-06-25T18:51:02.613620418Z" level=info msg="RemovePodSandbox \"72238bdc7d45e5a854bbfdff085d3ae3524bfe87ae653ee13137c486e8cf3c12\" returns successfully" Jun 25 18:51:02.614511 containerd[1702]: time="2024-06-25T18:51:02.614396731Z" level=info msg="StopPodSandbox for \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\"" Jun 25 18:51:02.694150 containerd[1702]: 2024-06-25 18:51:02.649 [WARNING][5490] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d53de7a9-dec5-41a7-8464-1fd397879242", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7", Pod:"coredns-7db6d8ff4d-bm8fl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad595b1a551", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:51:02.694150 containerd[1702]: 2024-06-25 18:51:02.650 [INFO][5490] k8s.go 608: Cleaning up netns ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:51:02.694150 containerd[1702]: 2024-06-25 18:51:02.650 [INFO][5490] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" iface="eth0" netns="" Jun 25 18:51:02.694150 containerd[1702]: 2024-06-25 18:51:02.650 [INFO][5490] k8s.go 615: Releasing IP address(es) ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:51:02.694150 containerd[1702]: 2024-06-25 18:51:02.650 [INFO][5490] utils.go 188: Calico CNI releasing IP address ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:51:02.694150 containerd[1702]: 2024-06-25 18:51:02.677 [INFO][5496] ipam_plugin.go 411: Releasing address using handleID ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" HandleID="k8s-pod-network.3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:51:02.694150 containerd[1702]: 2024-06-25 18:51:02.677 [INFO][5496] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:51:02.694150 containerd[1702]: 2024-06-25 18:51:02.678 [INFO][5496] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:51:02.694150 containerd[1702]: 2024-06-25 18:51:02.688 [WARNING][5496] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" HandleID="k8s-pod-network.3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:51:02.694150 containerd[1702]: 2024-06-25 18:51:02.688 [INFO][5496] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" HandleID="k8s-pod-network.3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:51:02.694150 containerd[1702]: 2024-06-25 18:51:02.690 [INFO][5496] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:51:02.694150 containerd[1702]: 2024-06-25 18:51:02.691 [INFO][5490] k8s.go 621: Teardown processing complete. ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:51:02.695295 containerd[1702]: time="2024-06-25T18:51:02.694049468Z" level=info msg="TearDown network for sandbox \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\" successfully" Jun 25 18:51:02.695295 containerd[1702]: time="2024-06-25T18:51:02.694416675Z" level=info msg="StopPodSandbox for \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\" returns successfully" Jun 25 18:51:02.695295 containerd[1702]: time="2024-06-25T18:51:02.695117986Z" level=info msg="RemovePodSandbox for \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\"" Jun 25 18:51:02.695685 containerd[1702]: time="2024-06-25T18:51:02.695463192Z" level=info msg="Forcibly stopping sandbox \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\"" Jun 25 18:51:02.754637 containerd[1702]: 2024-06-25 18:51:02.728 [WARNING][5518] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d53de7a9-dec5-41a7-8464-1fd397879242", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"7b0740766731b2ae6ed3cae56f5fbb671d15b1d0a186ef2d72ca53d66cd8ade7", Pod:"coredns-7db6d8ff4d-bm8fl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad595b1a551", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:51:02.754637 containerd[1702]: 2024-06-25 18:51:02.728 [INFO][5518] k8s.go 608: Cleaning up netns ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:51:02.754637 containerd[1702]: 2024-06-25 18:51:02.728 [INFO][5518] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" iface="eth0" netns="" Jun 25 18:51:02.754637 containerd[1702]: 2024-06-25 18:51:02.729 [INFO][5518] k8s.go 615: Releasing IP address(es) ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:51:02.754637 containerd[1702]: 2024-06-25 18:51:02.729 [INFO][5518] utils.go 188: Calico CNI releasing IP address ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:51:02.754637 containerd[1702]: 2024-06-25 18:51:02.746 [INFO][5524] ipam_plugin.go 411: Releasing address using handleID ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" HandleID="k8s-pod-network.3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:51:02.754637 containerd[1702]: 2024-06-25 18:51:02.747 [INFO][5524] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:51:02.754637 containerd[1702]: 2024-06-25 18:51:02.747 [INFO][5524] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:51:02.754637 containerd[1702]: 2024-06-25 18:51:02.751 [WARNING][5524] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" HandleID="k8s-pod-network.3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:51:02.754637 containerd[1702]: 2024-06-25 18:51:02.751 [INFO][5524] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" HandleID="k8s-pod-network.3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-coredns--7db6d8ff4d--bm8fl-eth0" Jun 25 18:51:02.754637 containerd[1702]: 2024-06-25 18:51:02.752 [INFO][5524] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:51:02.754637 containerd[1702]: 2024-06-25 18:51:02.753 [INFO][5518] k8s.go 621: Teardown processing complete. ContainerID="3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc" Jun 25 18:51:02.755479 containerd[1702]: time="2024-06-25T18:51:02.754677386Z" level=info msg="TearDown network for sandbox \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\" successfully" Jun 25 18:51:02.772109 containerd[1702]: time="2024-06-25T18:51:02.772057078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:51:02.772281 containerd[1702]: time="2024-06-25T18:51:02.772125979Z" level=info msg="RemovePodSandbox \"3e7ef4d8d735d0ca00b445eb66b0a63a6855cf1982f1966a039445af428aa3dc\" returns successfully" Jun 25 18:51:02.772648 containerd[1702]: time="2024-06-25T18:51:02.772612887Z" level=info msg="StopPodSandbox for \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\"" Jun 25 18:51:02.772743 containerd[1702]: time="2024-06-25T18:51:02.772717989Z" level=info msg="TearDown network for sandbox \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\" successfully" Jun 25 18:51:02.772743 containerd[1702]: time="2024-06-25T18:51:02.772735889Z" level=info msg="StopPodSandbox for \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\" returns successfully" Jun 25 18:51:02.773133 containerd[1702]: time="2024-06-25T18:51:02.773102795Z" level=info msg="RemovePodSandbox for \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\"" Jun 25 18:51:02.773217 containerd[1702]: time="2024-06-25T18:51:02.773134196Z" level=info msg="Forcibly stopping sandbox \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\"" Jun 25 18:51:02.773261 containerd[1702]: time="2024-06-25T18:51:02.773203697Z" level=info msg="TearDown network for sandbox \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\" successfully" Jun 25 18:51:02.782417 containerd[1702]: time="2024-06-25T18:51:02.782293250Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:51:02.782417 containerd[1702]: time="2024-06-25T18:51:02.782366451Z" level=info msg="RemovePodSandbox \"9815506f0a5ca14675af604c4cc2a14d7a1fe650dd93b668f7df2d8ce1b7e9d5\" returns successfully" Jun 25 18:51:02.782796 containerd[1702]: time="2024-06-25T18:51:02.782755857Z" level=info msg="StopPodSandbox for \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\"" Jun 25 18:51:02.841016 containerd[1702]: 2024-06-25 18:51:02.814 [WARNING][5542] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0", GenerateName:"calico-kube-controllers-69b7dfffc8-", Namespace:"calico-system", SelfLink:"", UID:"cdf8ad2a-e8cc-4207-ae7f-a4803ee52206", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69b7dfffc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a", Pod:"calico-kube-controllers-69b7dfffc8-mbwz4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie705342d5b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:51:02.841016 containerd[1702]: 2024-06-25 18:51:02.814 [INFO][5542] k8s.go 608: Cleaning up netns ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:51:02.841016 containerd[1702]: 2024-06-25 18:51:02.814 [INFO][5542] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" iface="eth0" netns="" Jun 25 18:51:02.841016 containerd[1702]: 2024-06-25 18:51:02.814 [INFO][5542] k8s.go 615: Releasing IP address(es) ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:51:02.841016 containerd[1702]: 2024-06-25 18:51:02.814 [INFO][5542] utils.go 188: Calico CNI releasing IP address ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:51:02.841016 containerd[1702]: 2024-06-25 18:51:02.833 [INFO][5548] ipam_plugin.go 411: Releasing address using handleID ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" HandleID="k8s-pod-network.332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:51:02.841016 containerd[1702]: 2024-06-25 18:51:02.833 [INFO][5548] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:51:02.841016 containerd[1702]: 2024-06-25 18:51:02.833 [INFO][5548] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:51:02.841016 containerd[1702]: 2024-06-25 18:51:02.838 [WARNING][5548] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" HandleID="k8s-pod-network.332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:51:02.841016 containerd[1702]: 2024-06-25 18:51:02.838 [INFO][5548] ipam_plugin.go 439: Releasing address using workloadID ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" HandleID="k8s-pod-network.332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:51:02.841016 containerd[1702]: 2024-06-25 18:51:02.839 [INFO][5548] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:51:02.841016 containerd[1702]: 2024-06-25 18:51:02.840 [INFO][5542] k8s.go 621: Teardown processing complete. ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:51:02.841732 containerd[1702]: time="2024-06-25T18:51:02.841038536Z" level=info msg="TearDown network for sandbox \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\" successfully" Jun 25 18:51:02.841732 containerd[1702]: time="2024-06-25T18:51:02.841070036Z" level=info msg="StopPodSandbox for \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\" returns successfully" Jun 25 18:51:02.841830 containerd[1702]: time="2024-06-25T18:51:02.841796749Z" level=info msg="RemovePodSandbox for \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\"" Jun 25 18:51:02.841942 containerd[1702]: time="2024-06-25T18:51:02.841918151Z" level=info msg="Forcibly stopping sandbox \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\"" Jun 25 18:51:02.932441 containerd[1702]: 2024-06-25 18:51:02.886 [WARNING][5567] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0", GenerateName:"calico-kube-controllers-69b7dfffc8-", Namespace:"calico-system", SelfLink:"", UID:"cdf8ad2a-e8cc-4207-ae7f-a4803ee52206", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69b7dfffc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"783af06492cb17df6dc4b0b193f8cadc9aec6f82059a8e9d9313dc74d6d8698a", Pod:"calico-kube-controllers-69b7dfffc8-mbwz4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie705342d5b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:51:02.932441 containerd[1702]: 2024-06-25 18:51:02.886 [INFO][5567] k8s.go 608: Cleaning up netns ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:51:02.932441 containerd[1702]: 2024-06-25 18:51:02.886 [INFO][5567] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" iface="eth0" netns="" Jun 25 18:51:02.932441 containerd[1702]: 2024-06-25 18:51:02.886 [INFO][5567] k8s.go 615: Releasing IP address(es) ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:51:02.932441 containerd[1702]: 2024-06-25 18:51:02.886 [INFO][5567] utils.go 188: Calico CNI releasing IP address ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:51:02.932441 containerd[1702]: 2024-06-25 18:51:02.909 [INFO][5578] ipam_plugin.go 411: Releasing address using handleID ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" HandleID="k8s-pod-network.332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:51:02.932441 containerd[1702]: 2024-06-25 18:51:02.910 [INFO][5578] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:51:02.932441 containerd[1702]: 2024-06-25 18:51:02.910 [INFO][5578] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:51:02.932441 containerd[1702]: 2024-06-25 18:51:02.922 [WARNING][5578] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" HandleID="k8s-pod-network.332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:51:02.932441 containerd[1702]: 2024-06-25 18:51:02.924 [INFO][5578] ipam_plugin.go 439: Releasing address using workloadID ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" HandleID="k8s-pod-network.332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--kube--controllers--69b7dfffc8--mbwz4-eth0" Jun 25 18:51:02.932441 containerd[1702]: 2024-06-25 18:51:02.928 [INFO][5578] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:51:02.932441 containerd[1702]: 2024-06-25 18:51:02.929 [INFO][5567] k8s.go 621: Teardown processing complete. ContainerID="332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39" Jun 25 18:51:02.933475 containerd[1702]: time="2024-06-25T18:51:02.932495371Z" level=info msg="TearDown network for sandbox \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\" successfully" Jun 25 18:51:02.975165 containerd[1702]: time="2024-06-25T18:51:02.974996485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:51:02.975165 containerd[1702]: time="2024-06-25T18:51:02.975079186Z" level=info msg="RemovePodSandbox \"332c3bb5ae238b27582ae8d19ebd5a927d4a986396933eb06e1abae1612bca39\" returns successfully" Jun 25 18:51:02.976647 containerd[1702]: time="2024-06-25T18:51:02.976572711Z" level=info msg="StopPodSandbox for \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\"" Jun 25 18:51:03.054610 containerd[1702]: 2024-06-25 18:51:03.027 [WARNING][5620] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e18d57e-4106-4db0-b549-0f6c4c21f68a", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703", Pod:"csi-node-driver-7hj8p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.81.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali54b739e5311", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:51:03.054610 containerd[1702]: 2024-06-25 18:51:03.028 [INFO][5620] k8s.go 608: Cleaning up netns ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:51:03.054610 containerd[1702]: 2024-06-25 18:51:03.028 [INFO][5620] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" iface="eth0" netns="" Jun 25 18:51:03.054610 containerd[1702]: 2024-06-25 18:51:03.028 [INFO][5620] k8s.go 615: Releasing IP address(es) ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:51:03.054610 containerd[1702]: 2024-06-25 18:51:03.028 [INFO][5620] utils.go 188: Calico CNI releasing IP address ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:51:03.054610 containerd[1702]: 2024-06-25 18:51:03.046 [INFO][5626] ipam_plugin.go 411: Releasing address using handleID ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" HandleID="k8s-pod-network.502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:51:03.054610 containerd[1702]: 2024-06-25 18:51:03.046 [INFO][5626] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:51:03.054610 containerd[1702]: 2024-06-25 18:51:03.046 [INFO][5626] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:51:03.054610 containerd[1702]: 2024-06-25 18:51:03.051 [WARNING][5626] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" HandleID="k8s-pod-network.502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:51:03.054610 containerd[1702]: 2024-06-25 18:51:03.051 [INFO][5626] ipam_plugin.go 439: Releasing address using workloadID ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" HandleID="k8s-pod-network.502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:51:03.054610 containerd[1702]: 2024-06-25 18:51:03.052 [INFO][5626] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:51:03.054610 containerd[1702]: 2024-06-25 18:51:03.053 [INFO][5620] k8s.go 621: Teardown processing complete. ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:51:03.055514 containerd[1702]: time="2024-06-25T18:51:03.054651222Z" level=info msg="TearDown network for sandbox \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\" successfully" Jun 25 18:51:03.055514 containerd[1702]: time="2024-06-25T18:51:03.054691222Z" level=info msg="StopPodSandbox for \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\" returns successfully" Jun 25 18:51:03.055728 containerd[1702]: time="2024-06-25T18:51:03.055491436Z" level=info msg="RemovePodSandbox for \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\"" Jun 25 18:51:03.055728 containerd[1702]: time="2024-06-25T18:51:03.055544537Z" level=info msg="Forcibly stopping sandbox \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\"" Jun 25 18:51:03.117469 containerd[1702]: 2024-06-25 18:51:03.087 [WARNING][5644] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e18d57e-4106-4db0-b549-0f6c4c21f68a", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703", Pod:"csi-node-driver-7hj8p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.81.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali54b739e5311", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:51:03.117469 containerd[1702]: 2024-06-25 18:51:03.088 [INFO][5644] k8s.go 608: Cleaning up netns ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:51:03.117469 containerd[1702]: 2024-06-25 18:51:03.088 [INFO][5644] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" iface="eth0" netns="" Jun 25 18:51:03.117469 containerd[1702]: 2024-06-25 18:51:03.088 [INFO][5644] k8s.go 615: Releasing IP address(es) ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:51:03.117469 containerd[1702]: 2024-06-25 18:51:03.088 [INFO][5644] utils.go 188: Calico CNI releasing IP address ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:51:03.117469 containerd[1702]: 2024-06-25 18:51:03.106 [INFO][5650] ipam_plugin.go 411: Releasing address using handleID ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" HandleID="k8s-pod-network.502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:51:03.117469 containerd[1702]: 2024-06-25 18:51:03.106 [INFO][5650] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:51:03.117469 containerd[1702]: 2024-06-25 18:51:03.107 [INFO][5650] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:51:03.117469 containerd[1702]: 2024-06-25 18:51:03.113 [WARNING][5650] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" HandleID="k8s-pod-network.502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:51:03.117469 containerd[1702]: 2024-06-25 18:51:03.113 [INFO][5650] ipam_plugin.go 439: Releasing address using workloadID ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" HandleID="k8s-pod-network.502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-csi--node--driver--7hj8p-eth0" Jun 25 18:51:03.117469 containerd[1702]: 2024-06-25 18:51:03.115 [INFO][5650] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:51:03.117469 containerd[1702]: 2024-06-25 18:51:03.116 [INFO][5644] k8s.go 621: Teardown processing complete. ContainerID="502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb" Jun 25 18:51:03.118230 containerd[1702]: time="2024-06-25T18:51:03.117526377Z" level=info msg="TearDown network for sandbox \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\" successfully" Jun 25 18:51:03.143168 containerd[1702]: time="2024-06-25T18:51:03.143075406Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:51:03.143168 containerd[1702]: time="2024-06-25T18:51:03.143166308Z" level=info msg="RemovePodSandbox \"502a769ea1d9273755b13e3cea2633f111556a97b3fc3e5d814809c8aed416eb\" returns successfully" Jun 25 18:51:03.143698 containerd[1702]: time="2024-06-25T18:51:03.143663216Z" level=info msg="StopPodSandbox for \"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\"" Jun 25 18:51:03.143809 containerd[1702]: time="2024-06-25T18:51:03.143757418Z" level=info msg="TearDown network for sandbox \"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\" successfully" Jun 25 18:51:03.143809 containerd[1702]: time="2024-06-25T18:51:03.143773918Z" level=info msg="StopPodSandbox for \"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\" returns successfully" Jun 25 18:51:03.144189 containerd[1702]: time="2024-06-25T18:51:03.144164824Z" level=info msg="RemovePodSandbox for \"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\"" Jun 25 18:51:03.144281 containerd[1702]: time="2024-06-25T18:51:03.144191625Z" level=info msg="Forcibly stopping sandbox \"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\"" Jun 25 18:51:03.144326 containerd[1702]: time="2024-06-25T18:51:03.144258926Z" level=info msg="TearDown network for sandbox \"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\" successfully" Jun 25 18:51:03.170994 containerd[1702]: time="2024-06-25T18:51:03.170933074Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:51:03.171151 containerd[1702]: time="2024-06-25T18:51:03.171018975Z" level=info msg="RemovePodSandbox \"a78fd717861968d1caec128099b6547cf303a987ed441d2adc47acafa25e3681\" returns successfully" Jun 25 18:51:04.196940 containerd[1702]: time="2024-06-25T18:51:04.196885597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:51:04.203648 containerd[1702]: time="2024-06-25T18:51:04.203577809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 18:51:04.210655 containerd[1702]: time="2024-06-25T18:51:04.210621127Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:51:04.218850 containerd[1702]: time="2024-06-25T18:51:04.218762264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:51:04.219612 containerd[1702]: time="2024-06-25T18:51:04.219431375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.15216163s" Jun 25 18:51:04.219612 containerd[1702]: time="2024-06-25T18:51:04.219474776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 18:51:04.222029 containerd[1702]: time="2024-06-25T18:51:04.221894616Z" level=info msg="CreateContainer within sandbox \"c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:51:04.253671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount976683590.mount: Deactivated successfully. Jun 25 18:51:04.259504 containerd[1702]: time="2024-06-25T18:51:04.259459747Z" level=info msg="CreateContainer within sandbox \"c8dfe2353d27f7f99b644cddc887808a7c7e1b49cd8324b7fff3d9a4ac9b4703\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ceff07011e4ae07694864096058c7c13eef7c48dd3bfe2b2e283940d04f310dc\"" Jun 25 18:51:04.260076 containerd[1702]: time="2024-06-25T18:51:04.260048657Z" level=info msg="StartContainer for \"ceff07011e4ae07694864096058c7c13eef7c48dd3bfe2b2e283940d04f310dc\"" Jun 25 18:51:04.296031 systemd[1]: Started cri-containerd-ceff07011e4ae07694864096058c7c13eef7c48dd3bfe2b2e283940d04f310dc.scope - libcontainer container ceff07011e4ae07694864096058c7c13eef7c48dd3bfe2b2e283940d04f310dc. 
Jun 25 18:51:04.330271 containerd[1702]: time="2024-06-25T18:51:04.330225635Z" level=info msg="StartContainer for \"ceff07011e4ae07694864096058c7c13eef7c48dd3bfe2b2e283940d04f310dc\" returns successfully" Jun 25 18:51:04.542038 kubelet[3235]: I0625 18:51:04.541890 3235 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:51:04.542038 kubelet[3235]: I0625 18:51:04.541929 3235 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:51:04.705337 kubelet[3235]: I0625 18:51:04.704802 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7hj8p" podStartSLOduration=36.540207432 podStartE2EDuration="41.704782023s" podCreationTimestamp="2024-06-25 18:50:23 +0000 UTC" firstStartedPulling="2024-06-25 18:50:59.055976803 +0000 UTC m=+57.298345035" lastFinishedPulling="2024-06-25 18:51:04.220551494 +0000 UTC m=+62.462919626" observedRunningTime="2024-06-25 18:51:04.703664604 +0000 UTC m=+62.946032736" watchObservedRunningTime="2024-06-25 18:51:04.704782023 +0000 UTC m=+62.947150155" Jun 25 18:51:05.176347 update_engine[1683]: I0625 18:51:05.176293 1683 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 18:51:05.176830 update_engine[1683]: I0625 18:51:05.176527 1683 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 18:51:05.176830 update_engine[1683]: I0625 18:51:05.176813 1683 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 18:51:05.192537 update_engine[1683]: E0625 18:51:05.192492 1683 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 18:51:05.192681 update_engine[1683]: I0625 18:51:05.192562 1683 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 25 18:51:08.007546 kubelet[3235]: I0625 18:51:08.007470 3235 topology_manager.go:215] "Topology Admit Handler" podUID="1c56deaa-e49b-45d4-a5ef-80e293eeab19" podNamespace="calico-apiserver" podName="calico-apiserver-795cd9c8bc-mfjr5" Jun 25 18:51:08.020444 systemd[1]: Created slice kubepods-besteffort-pod1c56deaa_e49b_45d4_a5ef_80e293eeab19.slice - libcontainer container kubepods-besteffort-pod1c56deaa_e49b_45d4_a5ef_80e293eeab19.slice. Jun 25 18:51:08.032683 kubelet[3235]: I0625 18:51:08.032621 3235 topology_manager.go:215] "Topology Admit Handler" podUID="df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57" podNamespace="calico-apiserver" podName="calico-apiserver-795cd9c8bc-mqmv4" Jun 25 18:51:08.043587 systemd[1]: Created slice kubepods-besteffort-poddf1ece1b_03da_4ea3_ba2d_a2bc7e03ca57.slice - libcontainer container kubepods-besteffort-poddf1ece1b_03da_4ea3_ba2d_a2bc7e03ca57.slice. 
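The pod_startup_latency_tracker entry above carries enough timestamps to reproduce its own numbers: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration matches that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A short check of the arithmetic, using only values printed in the entry:

```go
// Reproduces the durations in the kubelet entry above from its own timestamps.
// The reading that the SLO figure excludes image-pull time is inferred from
// these numbers, not quoted from kubelet source.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-06-25 18:50:23 +0000 UTC")
	firstPull := mustParse("2024-06-25 18:50:59.055976803 +0000 UTC")
	lastPull := mustParse("2024-06-25 18:51:04.220551494 +0000 UTC")
	running := mustParse("2024-06-25 18:51:04.704782023 +0000 UTC")

	e2e := running.Sub(created)        // 41.704782023s, the logged podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // ~5.1646s spent pulling the image
	slo := e2e - pulling               // ~36.5402s, matching podStartSLOduration

	fmt.Println(e2e, pulling, slo)
}
```

The wall-clock subtraction lands within a microsecond of the logged 36.540207432; the kubelet computes it from the monotonic m=+ readings, which explains the last-digit difference.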
Jun 25 18:51:08.050694 kubelet[3235]: I0625 18:51:08.050588 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c56deaa-e49b-45d4-a5ef-80e293eeab19-calico-apiserver-certs\") pod \"calico-apiserver-795cd9c8bc-mfjr5\" (UID: \"1c56deaa-e49b-45d4-a5ef-80e293eeab19\") " pod="calico-apiserver/calico-apiserver-795cd9c8bc-mfjr5" Jun 25 18:51:08.050694 kubelet[3235]: I0625 18:51:08.050636 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r5bc\" (UniqueName: \"kubernetes.io/projected/1c56deaa-e49b-45d4-a5ef-80e293eeab19-kube-api-access-9r5bc\") pod \"calico-apiserver-795cd9c8bc-mfjr5\" (UID: \"1c56deaa-e49b-45d4-a5ef-80e293eeab19\") " pod="calico-apiserver/calico-apiserver-795cd9c8bc-mfjr5" Jun 25 18:51:08.151760 kubelet[3235]: I0625 18:51:08.151692 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57-calico-apiserver-certs\") pod \"calico-apiserver-795cd9c8bc-mqmv4\" (UID: \"df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57\") " pod="calico-apiserver/calico-apiserver-795cd9c8bc-mqmv4" Jun 25 18:51:08.151760 kubelet[3235]: I0625 18:51:08.151763 3235 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kqsh\" (UniqueName: \"kubernetes.io/projected/df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57-kube-api-access-7kqsh\") pod \"calico-apiserver-795cd9c8bc-mqmv4\" (UID: \"df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57\") " pod="calico-apiserver/calico-apiserver-795cd9c8bc-mqmv4" Jun 25 18:51:08.152343 kubelet[3235]: E0625 18:51:08.152170 3235 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 18:51:08.152343 kubelet[3235]: E0625 18:51:08.152257 3235 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c56deaa-e49b-45d4-a5ef-80e293eeab19-calico-apiserver-certs podName:1c56deaa-e49b-45d4-a5ef-80e293eeab19 nodeName:}" failed. No retries permitted until 2024-06-25 18:51:08.652233195 +0000 UTC m=+66.894601427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1c56deaa-e49b-45d4-a5ef-80e293eeab19-calico-apiserver-certs") pod "calico-apiserver-795cd9c8bc-mfjr5" (UID: "1c56deaa-e49b-45d4-a5ef-80e293eeab19") : secret "calico-apiserver-certs" not found Jun 25 18:51:08.253028 kubelet[3235]: E0625 18:51:08.252830 3235 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 18:51:08.253028 kubelet[3235]: E0625 18:51:08.252926 3235 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57-calico-apiserver-certs podName:df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57 nodeName:}" failed. No retries permitted until 2024-06-25 18:51:08.752908385 +0000 UTC m=+66.995276617 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57-calico-apiserver-certs") pod "calico-apiserver-795cd9c8bc-mqmv4" (UID: "df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57") : secret "calico-apiserver-certs" not found Jun 25 18:51:08.927765 containerd[1702]: time="2024-06-25T18:51:08.927653912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795cd9c8bc-mfjr5,Uid:1c56deaa-e49b-45d4-a5ef-80e293eeab19,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:51:08.947629 containerd[1702]: time="2024-06-25T18:51:08.947577747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795cd9c8bc-mqmv4,Uid:df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:51:09.129161 systemd-networkd[1590]: cali1f19d6ce437: Link UP Jun 25 18:51:09.129419 systemd-networkd[1590]: cali1f19d6ce437: Gained carrier Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.014 [INFO][5708] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-eth0 calico-apiserver-795cd9c8bc- calico-apiserver 1c56deaa-e49b-45d4-a5ef-80e293eeab19 949 0 2024-06-25 18:51:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:795cd9c8bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-a-c5aaeb7e49 calico-apiserver-795cd9c8bc-mfjr5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1f19d6ce437 [] []}} ContainerID="c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mfjr5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-" Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.015 [INFO][5708] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mfjr5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-eth0" Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.076 [INFO][5728] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" HandleID="k8s-pod-network.c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-eth0" Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.085 [INFO][5728] ipam_plugin.go 264: Auto assigning IP ContainerID="c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" HandleID="k8s-pod-network.c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265e10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-a-c5aaeb7e49", "pod":"calico-apiserver-795cd9c8bc-mfjr5", "timestamp":"2024-06-25 18:51:09.076788716 +0000 UTC"}, Hostname:"ci-4012.0.0-a-c5aaeb7e49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.085 [INFO][5728] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.085 [INFO][5728] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.085 [INFO][5728] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-c5aaeb7e49' Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.087 [INFO][5728] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.093 [INFO][5728] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.098 [INFO][5728] ipam.go 489: Trying affinity for 192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.100 [INFO][5728] ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.103 [INFO][5728] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.103 [INFO][5728] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.104 [INFO][5728] ipam.go 1685: Creating new handle: k8s-pod-network.c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.111 [INFO][5728] ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.120 [INFO][5728] ipam.go 1216: Successfully claimed IPs: [192.168.81.5/26] block=192.168.81.0/26 handle="k8s-pod-network.c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.121 [INFO][5728] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.5/26] handle="k8s-pod-network.c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.121 [INFO][5728] ipam_plugin.go 373: Released host-wide IPAM lock. 
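The IPAM trace above confirms this host's affinity for the block 192.168.81.0/26 and then claims 192.168.81.5 from it (the second apiserver pod, further down, gets .6 from the same block). The containment and block-size checks are plain CIDR arithmetic, nothing Calico-specific, and can be verified with the standard library:

```go
// Checks that the claimed address falls inside the affine block from the log
// and how many addresses a /26 holds; ordinary CIDR math, not Calico code.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.81.0/26")
	claimed := netip.MustParseAddr("192.168.81.5")

	// A /26 leaves 32-26 = 6 host bits, i.e. 2^6 = 64 addresses:
	// 192.168.81.0 through 192.168.81.63.
	size := 1 << (32 - block.Bits())

	fmt.Println(block.Contains(claimed), size) // true 64
}
```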
Jun 25 18:51:09.153899 containerd[1702]: 2024-06-25 18:51:09.121 [INFO][5728] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.81.5/26] IPv6=[] ContainerID="c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" HandleID="k8s-pod-network.c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-eth0" Jun 25 18:51:09.156154 containerd[1702]: 2024-06-25 18:51:09.124 [INFO][5708] k8s.go 386: Populated endpoint ContainerID="c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mfjr5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-eth0", GenerateName:"calico-apiserver-795cd9c8bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c56deaa-e49b-45d4-a5ef-80e293eeab19", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 51, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795cd9c8bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"", Pod:"calico-apiserver-795cd9c8bc-mfjr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f19d6ce437", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:51:09.156154 containerd[1702]: 2024-06-25 18:51:09.124 [INFO][5708] k8s.go 387: Calico CNI using IPs: [192.168.81.5/32] ContainerID="c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mfjr5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-eth0" Jun 25 18:51:09.156154 containerd[1702]: 2024-06-25 18:51:09.124 [INFO][5708] dataplane_linux.go 68: Setting the host side veth name to cali1f19d6ce437 ContainerID="c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mfjr5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-eth0" Jun 25 18:51:09.156154 containerd[1702]: 2024-06-25 18:51:09.129 [INFO][5708] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mfjr5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-eth0" Jun 25 18:51:09.156154 containerd[1702]: 2024-06-25 18:51:09.130 [INFO][5708] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mfjr5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-eth0", GenerateName:"calico-apiserver-795cd9c8bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c56deaa-e49b-45d4-a5ef-80e293eeab19", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 51, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795cd9c8bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb", Pod:"calico-apiserver-795cd9c8bc-mfjr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f19d6ce437", MAC:"aa:fd:a9:5e:86:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:51:09.156154 containerd[1702]: 2024-06-25 18:51:09.150 [INFO][5708] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mfjr5" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mfjr5-eth0" Jun 25 18:51:09.216394 containerd[1702]: time="2024-06-25T18:51:09.213586312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:51:09.216394 containerd[1702]: time="2024-06-25T18:51:09.213757715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:51:09.216394 containerd[1702]: time="2024-06-25T18:51:09.213795116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:51:09.216394 containerd[1702]: time="2024-06-25T18:51:09.213814616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:51:09.223894 systemd-networkd[1590]: cali14a85a803fa: Link UP Jun 25 18:51:09.224768 systemd-networkd[1590]: cali14a85a803fa: Gained carrier Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.073 [INFO][5720] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-eth0 calico-apiserver-795cd9c8bc- calico-apiserver df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57 953 0 2024-06-25 18:51:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:795cd9c8bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-a-c5aaeb7e49 calico-apiserver-795cd9c8bc-mqmv4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali14a85a803fa [] []}} ContainerID="4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mqmv4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-" Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.073 [INFO][5720] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mqmv4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-eth0" Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.117 [INFO][5737] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" HandleID="k8s-pod-network.4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-eth0" Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.127 [INFO][5737] ipam_plugin.go 264: Auto assigning IP ContainerID="4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" HandleID="k8s-pod-network.4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002903d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-a-c5aaeb7e49", "pod":"calico-apiserver-795cd9c8bc-mqmv4", "timestamp":"2024-06-25 18:51:09.117492999 +0000 UTC"}, Hostname:"ci-4012.0.0-a-c5aaeb7e49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.127 [INFO][5737] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.128 [INFO][5737] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.129 [INFO][5737] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-c5aaeb7e49' Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.135 [INFO][5737] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.152 [INFO][5737] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.172 [INFO][5737] ipam.go 489: Trying affinity for 192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.179 [INFO][5737] ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.184 [INFO][5737] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.184 [INFO][5737] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.187 [INFO][5737] ipam.go 1685: Creating new handle: k8s-pod-network.4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05 Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.196 [INFO][5737] ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.208 [INFO][5737] ipam.go 1216: Successfully claimed IPs: [192.168.81.6/26] block=192.168.81.0/26 handle="k8s-pod-network.4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.209 [INFO][5737] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.6/26] handle="k8s-pod-network.4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" host="ci-4012.0.0-a-c5aaeb7e49" Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.209 [INFO][5737] ipam_plugin.go 373: Released host-wide IPAM lock. 
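Stepping back to the kubelet errors logged at 18:51:08 above (MountVolume.SetUp failed ... durationBeforeRetry 500ms): the calico-apiserver-certs secret did not exist yet, so the kubelet refused to retry the mount for 500ms and scheduled a later attempt, which evidently succeeded once the secret appeared. The sketch below mirrors that pacing; the initial 500ms comes straight from the log, while the doubling factor and the cap are illustrative choices rather than values read from kubelet source.

```go
// Illustrative retry pacing for a failing volume mount, assuming an initial
// 500ms hold-off (as logged) and an invented doubling policy.
package main

import (
	"errors"
	"fmt"
	"time"
)

func mountWithBackoff(mount func() error, attempts int) error {
	delay := 500 * time.Millisecond // matches the logged durationBeforeRetry
	maxDelay := 2 * time.Minute     // illustrative cap
	var lastErr error
	for i := 0; i < attempts; i++ {
		if lastErr = mount(); lastErr == nil {
			return nil
		}
		fmt.Printf("mount failed (%v); no retries permitted for %v\n", lastErr, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return lastErr
}

func main() {
	// Simulate a secret that only shows up after a couple of attempts.
	tries := 0
	err := mountWithBackoff(func() error {
		tries++
		if tries < 3 {
			return errors.New(`secret "calico-apiserver-certs" not found`)
		}
		return nil
	}, 5)
	fmt.Println("result:", err)
}
```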
Jun 25 18:51:09.253190 containerd[1702]: 2024-06-25 18:51:09.209 [INFO][5737] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.81.6/26] IPv6=[] ContainerID="4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" HandleID="k8s-pod-network.4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" Workload="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-eth0" Jun 25 18:51:09.256253 containerd[1702]: 2024-06-25 18:51:09.216 [INFO][5720] k8s.go 386: Populated endpoint ContainerID="4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mqmv4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-eth0", GenerateName:"calico-apiserver-795cd9c8bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 51, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795cd9c8bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"", Pod:"calico-apiserver-795cd9c8bc-mqmv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14a85a803fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:51:09.256253 containerd[1702]: 2024-06-25 18:51:09.216 [INFO][5720] k8s.go 387: Calico CNI using IPs: [192.168.81.6/32] ContainerID="4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mqmv4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-eth0" Jun 25 18:51:09.256253 containerd[1702]: 2024-06-25 18:51:09.216 [INFO][5720] dataplane_linux.go 68: Setting the host side veth name to cali14a85a803fa ContainerID="4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mqmv4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-eth0" Jun 25 18:51:09.256253 containerd[1702]: 2024-06-25 18:51:09.226 [INFO][5720] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mqmv4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-eth0" Jun 25 18:51:09.256253 containerd[1702]: 2024-06-25 18:51:09.228 [INFO][5720] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mqmv4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-eth0", GenerateName:"calico-apiserver-795cd9c8bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 51, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795cd9c8bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-c5aaeb7e49", ContainerID:"4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05", Pod:"calico-apiserver-795cd9c8bc-mqmv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14a85a803fa", MAC:"22:03:30:18:4f:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:51:09.256253 containerd[1702]: 2024-06-25 18:51:09.242 [INFO][5720] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05" Namespace="calico-apiserver" Pod="calico-apiserver-795cd9c8bc-mqmv4" WorkloadEndpoint="ci--4012.0.0--a--c5aaeb7e49-k8s-calico--apiserver--795cd9c8bc--mqmv4-eth0" Jun 25 18:51:09.298064 systemd[1]: Started cri-containerd-c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb.scope - libcontainer container c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb. Jun 25 18:51:09.316030 containerd[1702]: time="2024-06-25T18:51:09.314768611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:51:09.316030 containerd[1702]: time="2024-06-25T18:51:09.314837312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:51:09.316030 containerd[1702]: time="2024-06-25T18:51:09.314862512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:51:09.316030 containerd[1702]: time="2024-06-25T18:51:09.314898413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:51:09.353082 systemd[1]: Started cri-containerd-4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05.scope - libcontainer container 4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05. 
Jun 25 18:51:09.417391 containerd[1702]: time="2024-06-25T18:51:09.417342932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795cd9c8bc-mfjr5,Uid:1c56deaa-e49b-45d4-a5ef-80e293eeab19,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb\"" Jun 25 18:51:09.420910 containerd[1702]: time="2024-06-25T18:51:09.420619387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:51:09.421342 containerd[1702]: time="2024-06-25T18:51:09.421198597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795cd9c8bc-mqmv4,Uid:df1ece1b-03da-4ea3-ba2d-a2bc7e03ca57,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05\"" Jun 25 18:51:10.753064 systemd-networkd[1590]: cali1f19d6ce437: Gained IPv6LL Jun 25 18:51:11.265043 systemd-networkd[1590]: cali14a85a803fa: Gained IPv6LL Jun 25 18:51:13.076220 containerd[1702]: time="2024-06-25T18:51:13.076157212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:51:13.079485 containerd[1702]: time="2024-06-25T18:51:13.079430873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 18:51:13.088912 containerd[1702]: time="2024-06-25T18:51:13.088827745Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:51:13.096001 containerd[1702]: time="2024-06-25T18:51:13.095887675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:51:13.096953 containerd[1702]: time="2024-06-25T18:51:13.096531486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.675868198s" Jun 25 18:51:13.096953 containerd[1702]: time="2024-06-25T18:51:13.096573587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 18:51:13.098433 containerd[1702]: time="2024-06-25T18:51:13.098392820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:51:13.099501 containerd[1702]: time="2024-06-25T18:51:13.099437040Z" level=info msg="CreateContainer within sandbox \"c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:51:13.146689 containerd[1702]: time="2024-06-25T18:51:13.146523704Z" level=info msg="CreateContainer within sandbox \"c25efe5cb347da4bc94f4f936e4001c60a52cc30e0168ff8cf355ae6182136eb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"825237f8fecec168f895a361aa1c694251ae1dbb43da1bdbd4af9add7726b9f8\"" Jun 25 18:51:13.148427 containerd[1702]: time="2024-06-25T18:51:13.148389538Z" level=info msg="StartContainer for 
\"825237f8fecec168f895a361aa1c694251ae1dbb43da1bdbd4af9add7726b9f8\"" Jun 25 18:51:13.186013 systemd[1]: Started cri-containerd-825237f8fecec168f895a361aa1c694251ae1dbb43da1bdbd4af9add7726b9f8.scope - libcontainer container 825237f8fecec168f895a361aa1c694251ae1dbb43da1bdbd4af9add7726b9f8. Jun 25 18:51:13.243599 containerd[1702]: time="2024-06-25T18:51:13.243423082Z" level=info msg="StartContainer for \"825237f8fecec168f895a361aa1c694251ae1dbb43da1bdbd4af9add7726b9f8\" returns successfully" Jun 25 18:51:13.536232 containerd[1702]: time="2024-06-25T18:51:13.535847247Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:51:13.539600 containerd[1702]: time="2024-06-25T18:51:13.539548515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 18:51:13.543789 containerd[1702]: time="2024-06-25T18:51:13.543740592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 445.299271ms" Jun 25 18:51:13.544181 containerd[1702]: time="2024-06-25T18:51:13.543804893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 18:51:13.546564 containerd[1702]: time="2024-06-25T18:51:13.546409841Z" level=info msg="CreateContainer within sandbox \"4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:51:13.586463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261105181.mount: Deactivated successfully. Jun 25 18:51:13.592832 containerd[1702]: time="2024-06-25T18:51:13.592670390Z" level=info msg="CreateContainer within sandbox \"4dd568f4ad9368757316363e004dac6b6cdedc0e81bf07d6a2fa2a31bef28b05\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f6ce82aa2c91e5cabc661928a7cfa3be8def2c2eb4ba15f7e17473efecfba3b\"" Jun 25 18:51:13.593930 containerd[1702]: time="2024-06-25T18:51:13.593470805Z" level=info msg="StartContainer for \"9f6ce82aa2c91e5cabc661928a7cfa3be8def2c2eb4ba15f7e17473efecfba3b\"" Jun 25 18:51:13.629423 systemd[1]: Started cri-containerd-9f6ce82aa2c91e5cabc661928a7cfa3be8def2c2eb4ba15f7e17473efecfba3b.scope - libcontainer container 9f6ce82aa2c91e5cabc661928a7cfa3be8def2c2eb4ba15f7e17473efecfba3b. 
Jun 25 18:51:13.686207 containerd[1702]: time="2024-06-25T18:51:13.686166906Z" level=info msg="StartContainer for \"9f6ce82aa2c91e5cabc661928a7cfa3be8def2c2eb4ba15f7e17473efecfba3b\" returns successfully" Jun 25 18:51:13.739881 kubelet[3235]: I0625 18:51:13.738325 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-795cd9c8bc-mqmv4" podStartSLOduration=2.616768982 podStartE2EDuration="6.738306762s" podCreationTimestamp="2024-06-25 18:51:07 +0000 UTC" firstStartedPulling="2024-06-25 18:51:09.423090429 +0000 UTC m=+67.665458661" lastFinishedPulling="2024-06-25 18:51:13.544628309 +0000 UTC m=+71.786996441" observedRunningTime="2024-06-25 18:51:13.737782353 +0000 UTC m=+71.980150585" watchObservedRunningTime="2024-06-25 18:51:13.738306762 +0000 UTC m=+71.980674894" Jun 25 18:51:13.756692 kubelet[3235]: I0625 18:51:13.756454 3235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-795cd9c8bc-mfjr5" podStartSLOduration=3.078585962 podStartE2EDuration="6.756433395s" podCreationTimestamp="2024-06-25 18:51:07 +0000 UTC" firstStartedPulling="2024-06-25 18:51:09.419733873 +0000 UTC m=+67.662102105" lastFinishedPulling="2024-06-25 18:51:13.097581406 +0000 UTC m=+71.339949538" observedRunningTime="2024-06-25 18:51:13.754896667 +0000 UTC m=+71.997264799" watchObservedRunningTime="2024-06-25 18:51:13.756433395 +0000 UTC m=+71.998801627" Jun 25 18:51:15.177067 update_engine[1683]: I0625 18:51:15.177000 1683 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 18:51:15.177517 update_engine[1683]: I0625 18:51:15.177249 1683 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 18:51:15.177580 update_engine[1683]: I0625 18:51:15.177548 1683 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 18:51:15.205404 update_engine[1683]: E0625 18:51:15.205356 1683 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 18:51:15.205574 update_engine[1683]: I0625 18:51:15.205436 1683 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jun 25 18:51:25.184986 update_engine[1683]: I0625 18:51:25.184927 1683 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 18:51:25.185540 update_engine[1683]: I0625 18:51:25.185181 1683 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 18:51:25.185540 update_engine[1683]: I0625 18:51:25.185499 1683 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 18:51:25.255376 update_engine[1683]: E0625 18:51:25.255323 1683 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 18:51:25.255581 update_engine[1683]: I0625 18:51:25.255402 1683 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 25 18:51:25.255581 update_engine[1683]: I0625 18:51:25.255411 1683 omaha_request_action.cc:617] Omaha request response: Jun 25 18:51:25.255581 update_engine[1683]: E0625 18:51:25.255532 1683 omaha_request_action.cc:636] Omaha request network transfer failed. Jun 25 18:51:25.255581 update_engine[1683]: I0625 18:51:25.255558 1683 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
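The update_engine entries above and below record the same failure roughly every ten seconds: the Omaha server for this image is set to the literal host "disabled", so every transfer dies in DNS resolution and the fetcher counts up "retry 2", "retry 3" before the whole check is abandoned and rescheduled ("Next update check in 47m23s"). A generic bounded-retry fetch in that spirit is sketched below; the attempt count, pause, and trailing slash on the URL are illustrative, not update_engine's actual configuration.

```go
// Bounded-retry HTTP fetch against the placeholder host from the log; with the
// server name "disabled", the lookup fails on every attempt, as logged.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func fetchWithRetries(url string, attempts int) error {
	client := &http.Client{Timeout: 10 * time.Second}
	var lastErr error
	for i := 1; i <= attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		lastErr = err
		if i < attempts {
			fmt.Printf("no HTTP response, retry %d\n", i+1)
			time.Sleep(time.Second) // pause between attempts; the real fetcher's pacing differs
		}
	}
	return lastErr
}

func main() {
	if err := fetchWithRetries("http://disabled/", 3); err != nil {
		fmt.Println("transfer failed:", err)
	}
}
```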
Jun 25 18:51:25.255581 update_engine[1683]: I0625 18:51:25.255564 1683 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 18:51:25.255581 update_engine[1683]: I0625 18:51:25.255569 1683 update_attempter.cc:306] Processing Done. Jun 25 18:51:25.255935 update_engine[1683]: E0625 18:51:25.255587 1683 update_attempter.cc:619] Update failed. Jun 25 18:51:25.255935 update_engine[1683]: I0625 18:51:25.255594 1683 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jun 25 18:51:25.255935 update_engine[1683]: I0625 18:51:25.255599 1683 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jun 25 18:51:25.255935 update_engine[1683]: I0625 18:51:25.255604 1683 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jun 25 18:51:25.255935 update_engine[1683]: I0625 18:51:25.255709 1683 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 25 18:51:25.255935 update_engine[1683]: I0625 18:51:25.255736 1683 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 25 18:51:25.255935 update_engine[1683]: I0625 18:51:25.255743 1683 omaha_request_action.cc:272] Request: Jun 25 18:51:25.255935 update_engine[1683]: Jun 25 18:51:25.255935 update_engine[1683]: Jun 25 18:51:25.255935 update_engine[1683]: Jun 25 18:51:25.255935 update_engine[1683]: Jun 25 18:51:25.255935 update_engine[1683]: Jun 25 18:51:25.255935 update_engine[1683]: Jun 25 18:51:25.255935 update_engine[1683]: I0625 18:51:25.255749 1683 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 18:51:25.256596 update_engine[1683]: I0625 18:51:25.255995 1683 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 18:51:25.256596 update_engine[1683]: I0625 18:51:25.256284 1683 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 18:51:25.256763 locksmithd[1715]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jun 25 18:51:25.277366 update_engine[1683]: E0625 18:51:25.277150 1683 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 18:51:25.277366 update_engine[1683]: I0625 18:51:25.277225 1683 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 25 18:51:25.277366 update_engine[1683]: I0625 18:51:25.277230 1683 omaha_request_action.cc:617] Omaha request response: Jun 25 18:51:25.277366 update_engine[1683]: I0625 18:51:25.277237 1683 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 18:51:25.277366 update_engine[1683]: I0625 18:51:25.277242 1683 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 18:51:25.277366 update_engine[1683]: I0625 18:51:25.277246 1683 update_attempter.cc:306] Processing Done. Jun 25 18:51:25.277366 update_engine[1683]: I0625 18:51:25.277252 1683 update_attempter.cc:310] Error event sent. Jun 25 18:51:25.277366 update_engine[1683]: I0625 18:51:25.277262 1683 update_check_scheduler.cc:74] Next update check in 47m23s Jun 25 18:51:25.278182 locksmithd[1715]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jun 25 18:51:32.930311 systemd[1]: Started sshd@7-10.200.8.15:22-10.200.16.10:52994.service - OpenSSH per-connection server daemon (10.200.16.10:52994). 
Jun 25 18:51:33.582233 sshd[6018]: Accepted publickey for core from 10.200.16.10 port 52994 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:51:33.583716 sshd[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:51:33.588433 systemd-logind[1680]: New session 10 of user core. Jun 25 18:51:33.594100 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:51:34.099850 sshd[6018]: pam_unix(sshd:session): session closed for user core Jun 25 18:51:34.104365 systemd[1]: sshd@7-10.200.8.15:22-10.200.16.10:52994.service: Deactivated successfully. Jun 25 18:51:34.107425 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:51:34.108528 systemd-logind[1680]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:51:34.109547 systemd-logind[1680]: Removed session 10. Jun 25 18:51:39.221310 systemd[1]: Started sshd@8-10.200.8.15:22-10.200.16.10:46418.service - OpenSSH per-connection server daemon (10.200.16.10:46418). Jun 25 18:51:39.863891 sshd[6047]: Accepted publickey for core from 10.200.16.10 port 46418 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:51:39.865668 sshd[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:51:39.870600 systemd-logind[1680]: New session 11 of user core. Jun 25 18:51:39.873087 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:51:40.375438 sshd[6047]: pam_unix(sshd:session): session closed for user core Jun 25 18:51:40.379030 systemd[1]: sshd@8-10.200.8.15:22-10.200.16.10:46418.service: Deactivated successfully. Jun 25 18:51:40.381401 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:51:40.382928 systemd-logind[1680]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:51:40.384082 systemd-logind[1680]: Removed session 11. Jun 25 18:51:45.499324 systemd[1]: Started sshd@9-10.200.8.15:22-10.200.16.10:53752.service - OpenSSH per-connection server daemon (10.200.16.10:53752). Jun 25 18:51:46.184328 sshd[6085]: Accepted publickey for core from 10.200.16.10 port 53752 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:51:46.185894 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:51:46.190888 systemd-logind[1680]: New session 12 of user core. Jun 25 18:51:46.195048 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:51:46.697546 sshd[6085]: pam_unix(sshd:session): session closed for user core Jun 25 18:51:46.701440 systemd[1]: sshd@9-10.200.8.15:22-10.200.16.10:53752.service: Deactivated successfully. Jun 25 18:51:46.703393 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:51:46.704241 systemd-logind[1680]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:51:46.705423 systemd-logind[1680]: Removed session 12. Jun 25 18:51:46.817201 systemd[1]: Started sshd@10-10.200.8.15:22-10.200.16.10:53756.service - OpenSSH per-connection server daemon (10.200.16.10:53756). Jun 25 18:51:47.456202 sshd[6098]: Accepted publickey for core from 10.200.16.10 port 53756 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:51:47.457711 sshd[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:51:47.462570 systemd-logind[1680]: New session 13 of user core. Jun 25 18:51:47.469035 systemd[1]: Started session-13.scope - Session 13 of User core. 
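Each sshd "Accepted publickey" line above identifies the key only by its SHA256 fingerprint. That fingerprint is derived by hashing the key's wire-format blob (the base64 field of an authorized_keys line) with SHA-256 and base64-encoding the digest without padding, prefixed with "SHA256:". The sketch below shows the derivation on a placeholder blob, not on the key from this host.

```go
// Computes an OpenSSH-style SHA256 fingerprint from an authorized_keys line.
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

func fingerprint(authorizedKeyLine string) (string, error) {
	fields := strings.Fields(authorizedKeyLine)
	if len(fields) < 2 {
		return "", fmt.Errorf("malformed key line")
	}
	blob, err := base64.StdEncoding.DecodeString(fields[1])
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(blob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	// Placeholder blob, not a real key; a real line carries the base64
	// wire-format public key here.
	line := "ssh-rsa bm90IGEgcmVhbCBrZXkgYmxvYg== core@example"
	fp, err := fingerprint(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(fp)
}
```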
Jun 25 18:51:48.000742 sshd[6098]: pam_unix(sshd:session): session closed for user core Jun 25 18:51:48.004726 systemd[1]: sshd@10-10.200.8.15:22-10.200.16.10:53756.service: Deactivated successfully. Jun 25 18:51:48.006801 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:51:48.007688 systemd-logind[1680]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:51:48.008645 systemd-logind[1680]: Removed session 13. Jun 25 18:51:48.121166 systemd[1]: Started sshd@11-10.200.8.15:22-10.200.16.10:53762.service - OpenSSH per-connection server daemon (10.200.16.10:53762). Jun 25 18:51:48.768803 sshd[6111]: Accepted publickey for core from 10.200.16.10 port 53762 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:51:48.770276 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:51:48.775120 systemd-logind[1680]: New session 14 of user core. Jun 25 18:51:48.781055 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:51:49.285118 sshd[6111]: pam_unix(sshd:session): session closed for user core Jun 25 18:51:49.289199 systemd[1]: sshd@11-10.200.8.15:22-10.200.16.10:53762.service: Deactivated successfully. Jun 25 18:51:49.291606 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:51:49.292593 systemd-logind[1680]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:51:49.293669 systemd-logind[1680]: Removed session 14. Jun 25 18:51:54.408703 systemd[1]: Started sshd@12-10.200.8.15:22-10.200.16.10:53778.service - OpenSSH per-connection server daemon (10.200.16.10:53778). Jun 25 18:51:55.056342 sshd[6125]: Accepted publickey for core from 10.200.16.10 port 53778 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:51:55.058029 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:51:55.063487 systemd-logind[1680]: New session 15 of user core. Jun 25 18:51:55.070064 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:51:55.573399 sshd[6125]: pam_unix(sshd:session): session closed for user core Jun 25 18:51:55.576719 systemd[1]: sshd@12-10.200.8.15:22-10.200.16.10:53778.service: Deactivated successfully. Jun 25 18:51:55.579027 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:51:55.580503 systemd-logind[1680]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:51:55.581616 systemd-logind[1680]: Removed session 15. Jun 25 18:52:00.694220 systemd[1]: Started sshd@13-10.200.8.15:22-10.200.16.10:56536.service - OpenSSH per-connection server daemon (10.200.16.10:56536). Jun 25 18:52:01.347757 sshd[6147]: Accepted publickey for core from 10.200.16.10 port 56536 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:52:01.349484 sshd[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:52:01.353947 systemd-logind[1680]: New session 16 of user core. Jun 25 18:52:01.360025 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:52:01.858390 sshd[6147]: pam_unix(sshd:session): session closed for user core Jun 25 18:52:01.862817 systemd[1]: sshd@13-10.200.8.15:22-10.200.16.10:56536.service: Deactivated successfully. Jun 25 18:52:01.865036 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:52:01.865836 systemd-logind[1680]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:52:01.867152 systemd-logind[1680]: Removed session 16. 
Jun 25 18:52:06.986178 systemd[1]: Started sshd@14-10.200.8.15:22-10.200.16.10:60184.service - OpenSSH per-connection server daemon (10.200.16.10:60184).
Jun 25 18:52:07.633347 sshd[6188]: Accepted publickey for core from 10.200.16.10 port 60184 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ
Jun 25 18:52:07.635554 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:52:07.642212 systemd-logind[1680]: New session 17 of user core.
Jun 25 18:52:07.648391 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 25 18:52:08.151712 sshd[6188]: pam_unix(sshd:session): session closed for user core
Jun 25 18:52:08.155964 systemd[1]: sshd@14-10.200.8.15:22-10.200.16.10:60184.service: Deactivated successfully.
Jun 25 18:52:08.158071 systemd[1]: session-17.scope: Deactivated successfully.
Jun 25 18:52:08.158899 systemd-logind[1680]: Session 17 logged out. Waiting for processes to exit.
Jun 25 18:52:08.160032 systemd-logind[1680]: Removed session 17.
Jun 25 18:52:13.268286 systemd[1]: Started sshd@15-10.200.8.15:22-10.200.16.10:60192.service - OpenSSH per-connection server daemon (10.200.16.10:60192).
Jun 25 18:52:13.928574 sshd[6225]: Accepted publickey for core from 10.200.16.10 port 60192 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ
Jun 25 18:52:13.930067 sshd[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:52:13.934183 systemd-logind[1680]: New session 18 of user core.
Jun 25 18:52:13.940052 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 25 18:52:14.443171 sshd[6225]: pam_unix(sshd:session): session closed for user core
Jun 25 18:52:14.447656 systemd[1]: sshd@15-10.200.8.15:22-10.200.16.10:60192.service: Deactivated successfully.
Jun 25 18:52:14.450060 systemd[1]: session-18.scope: Deactivated successfully.
Jun 25 18:52:14.450949 systemd-logind[1680]: Session 18 logged out. Waiting for processes to exit.
Jun 25 18:52:14.452201 systemd-logind[1680]: Removed session 18.
Jun 25 18:52:14.561183 systemd[1]: Started sshd@16-10.200.8.15:22-10.200.16.10:60208.service - OpenSSH per-connection server daemon (10.200.16.10:60208).
Jun 25 18:52:15.205078 sshd[6238]: Accepted publickey for core from 10.200.16.10 port 60208 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ
Jun 25 18:52:15.206841 sshd[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:52:15.210832 systemd-logind[1680]: New session 19 of user core.
Jun 25 18:52:15.217035 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 25 18:52:15.779649 sshd[6238]: pam_unix(sshd:session): session closed for user core
Jun 25 18:52:15.784637 systemd[1]: sshd@16-10.200.8.15:22-10.200.16.10:60208.service: Deactivated successfully.
Jun 25 18:52:15.786804 systemd[1]: session-19.scope: Deactivated successfully.
Jun 25 18:52:15.787715 systemd-logind[1680]: Session 19 logged out. Waiting for processes to exit.
Jun 25 18:52:15.789151 systemd-logind[1680]: Removed session 19.
Jun 25 18:52:15.897184 systemd[1]: Started sshd@17-10.200.8.15:22-10.200.16.10:49536.service - OpenSSH per-connection server daemon (10.200.16.10:49536).
Jun 25 18:52:16.616479 sshd[6254]: Accepted publickey for core from 10.200.16.10 port 49536 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ
Jun 25 18:52:16.618065 sshd[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:52:16.622348 systemd-logind[1680]: New session 20 of user core.
Jun 25 18:52:16.630059 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 25 18:52:18.878696 sshd[6254]: pam_unix(sshd:session): session closed for user core
Jun 25 18:52:18.881957 systemd[1]: sshd@17-10.200.8.15:22-10.200.16.10:49536.service: Deactivated successfully.
Jun 25 18:52:18.884165 systemd[1]: session-20.scope: Deactivated successfully.
Jun 25 18:52:18.885729 systemd-logind[1680]: Session 20 logged out. Waiting for processes to exit.
Jun 25 18:52:18.887234 systemd-logind[1680]: Removed session 20.
Jun 25 18:52:19.003193 systemd[1]: Started sshd@18-10.200.8.15:22-10.200.16.10:49540.service - OpenSSH per-connection server daemon (10.200.16.10:49540).
Jun 25 18:52:19.648470 sshd[6274]: Accepted publickey for core from 10.200.16.10 port 49540 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ
Jun 25 18:52:19.650084 sshd[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:52:19.654444 systemd-logind[1680]: New session 21 of user core.
Jun 25 18:52:19.660022 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 25 18:52:20.259969 sshd[6274]: pam_unix(sshd:session): session closed for user core
Jun 25 18:52:20.264623 systemd[1]: sshd@18-10.200.8.15:22-10.200.16.10:49540.service: Deactivated successfully.
Jun 25 18:52:20.267383 systemd[1]: session-21.scope: Deactivated successfully.
Jun 25 18:52:20.268186 systemd-logind[1680]: Session 21 logged out. Waiting for processes to exit.
Jun 25 18:52:20.269485 systemd-logind[1680]: Removed session 21.
Jun 25 18:52:20.374083 systemd[1]: Started sshd@19-10.200.8.15:22-10.200.16.10:49548.service - OpenSSH per-connection server daemon (10.200.16.10:49548).
Jun 25 18:52:21.024206 sshd[6285]: Accepted publickey for core from 10.200.16.10 port 49548 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ
Jun 25 18:52:21.025665 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:52:21.029634 systemd-logind[1680]: New session 22 of user core.
Jun 25 18:52:21.039073 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 25 18:52:21.550087 sshd[6285]: pam_unix(sshd:session): session closed for user core
Jun 25 18:52:21.553205 systemd[1]: sshd@19-10.200.8.15:22-10.200.16.10:49548.service: Deactivated successfully.
Jun 25 18:52:21.555475 systemd[1]: session-22.scope: Deactivated successfully.
Jun 25 18:52:21.556991 systemd-logind[1680]: Session 22 logged out. Waiting for processes to exit.
Jun 25 18:52:21.558019 systemd-logind[1680]: Removed session 22.
Jun 25 18:52:26.666926 systemd[1]: Started sshd@20-10.200.8.15:22-10.200.16.10:41928.service - OpenSSH per-connection server daemon (10.200.16.10:41928).
Jun 25 18:52:27.314345 sshd[6314]: Accepted publickey for core from 10.200.16.10 port 41928 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ
Jun 25 18:52:27.315885 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:52:27.320743 systemd-logind[1680]: New session 23 of user core.
Jun 25 18:52:27.327027 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 25 18:52:27.824164 sshd[6314]: pam_unix(sshd:session): session closed for user core
Jun 25 18:52:27.827950 systemd[1]: sshd@20-10.200.8.15:22-10.200.16.10:41928.service: Deactivated successfully.
Jun 25 18:52:27.830229 systemd[1]: session-23.scope: Deactivated successfully.
Jun 25 18:52:27.831730 systemd-logind[1680]: Session 23 logged out. Waiting for processes to exit.
Jun 25 18:52:27.833056 systemd-logind[1680]: Removed session 23.
Jun 25 18:52:32.930623 systemd[1]: run-containerd-runc-k8s.io-e05ea1a5374b95c4bf03ab387ef0c1828873f653d13a68d87dd27e110fd4af9b-runc.vatzTB.mount: Deactivated successfully.
Jun 25 18:52:32.943082 systemd[1]: Started sshd@21-10.200.8.15:22-10.200.16.10:41938.service - OpenSSH per-connection server daemon (10.200.16.10:41938).
Jun 25 18:52:33.608387 sshd[6367]: Accepted publickey for core from 10.200.16.10 port 41938 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ
Jun 25 18:52:33.610144 sshd[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:52:33.615019 systemd-logind[1680]: New session 24 of user core.
Jun 25 18:52:33.624267 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 25 18:52:34.116834 sshd[6367]: pam_unix(sshd:session): session closed for user core
Jun 25 18:52:34.119904 systemd[1]: sshd@21-10.200.8.15:22-10.200.16.10:41938.service: Deactivated successfully.
Jun 25 18:52:34.122186 systemd[1]: session-24.scope: Deactivated successfully.
Jun 25 18:52:34.123665 systemd-logind[1680]: Session 24 logged out. Waiting for processes to exit.
Jun 25 18:52:34.124851 systemd-logind[1680]: Removed session 24.
Jun 25 18:52:39.235184 systemd[1]: Started sshd@22-10.200.8.15:22-10.200.16.10:43394.service - OpenSSH per-connection server daemon (10.200.16.10:43394).
Jun 25 18:52:39.876496 sshd[6390]: Accepted publickey for core from 10.200.16.10 port 43394 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ
Jun 25 18:52:39.878026 sshd[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:52:39.882680 systemd-logind[1680]: New session 25 of user core.
Jun 25 18:52:39.888048 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 25 18:52:40.406059 sshd[6390]: pam_unix(sshd:session): session closed for user core
Jun 25 18:52:40.409062 systemd[1]: sshd@22-10.200.8.15:22-10.200.16.10:43394.service: Deactivated successfully.
Jun 25 18:52:40.411535 systemd[1]: session-25.scope: Deactivated successfully.
Jun 25 18:52:40.413220 systemd-logind[1680]: Session 25 logged out. Waiting for processes to exit.
Jun 25 18:52:40.414437 systemd-logind[1680]: Removed session 25.
Jun 25 18:52:45.523558 systemd[1]: Started sshd@23-10.200.8.15:22-10.200.16.10:56694.service - OpenSSH per-connection server daemon (10.200.16.10:56694).
Jun 25 18:52:46.187109 sshd[6422]: Accepted publickey for core from 10.200.16.10 port 56694 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ
Jun 25 18:52:46.188924 sshd[6422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:52:46.194006 systemd-logind[1680]: New session 26 of user core.
Jun 25 18:52:46.199032 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 25 18:52:46.699981 sshd[6422]: pam_unix(sshd:session): session closed for user core
Jun 25 18:52:46.703575 systemd[1]: sshd@23-10.200.8.15:22-10.200.16.10:56694.service: Deactivated successfully.
Jun 25 18:52:46.705848 systemd[1]: session-26.scope: Deactivated successfully.
Jun 25 18:52:46.706656 systemd-logind[1680]: Session 26 logged out. Waiting for processes to exit.
Jun 25 18:52:46.707686 systemd-logind[1680]: Removed session 26.
Jun 25 18:52:51.818591 systemd[1]: Started sshd@24-10.200.8.15:22-10.200.16.10:56698.service - OpenSSH per-connection server daemon (10.200.16.10:56698).
Jun 25 18:52:52.476937 sshd[6442]: Accepted publickey for core from 10.200.16.10 port 56698 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ
Jun 25 18:52:52.478682 sshd[6442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:52:52.483022 systemd-logind[1680]: New session 27 of user core.
Jun 25 18:52:52.491026 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 25 18:52:52.982492 sshd[6442]: pam_unix(sshd:session): session closed for user core
Jun 25 18:52:52.985486 systemd[1]: sshd@24-10.200.8.15:22-10.200.16.10:56698.service: Deactivated successfully.
Jun 25 18:52:52.987756 systemd[1]: session-27.scope: Deactivated successfully.
Jun 25 18:52:52.989670 systemd-logind[1680]: Session 27 logged out. Waiting for processes to exit.
Jun 25 18:52:52.990680 systemd-logind[1680]: Removed session 27.
Jun 25 18:52:58.098984 systemd[1]: Started sshd@25-10.200.8.15:22-10.200.16.10:53806.service - OpenSSH per-connection server daemon (10.200.16.10:53806).
Jun 25 18:52:58.751274 sshd[6457]: Accepted publickey for core from 10.200.16.10 port 53806 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ
Jun 25 18:52:58.752741 sshd[6457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:52:58.757777 systemd-logind[1680]: New session 28 of user core.
Jun 25 18:52:58.762044 systemd[1]: Started session-28.scope - Session 28 of User core.
Jun 25 18:52:59.269493 sshd[6457]: pam_unix(sshd:session): session closed for user core
Jun 25 18:52:59.273503 systemd[1]: sshd@25-10.200.8.15:22-10.200.16.10:53806.service: Deactivated successfully.
Jun 25 18:52:59.275844 systemd[1]: session-28.scope: Deactivated successfully.
Jun 25 18:52:59.276684 systemd-logind[1680]: Session 28 logged out. Waiting for processes to exit.
Jun 25 18:52:59.277626 systemd-logind[1680]: Removed session 28.