Jun 25 18:28:41.353445 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 25 18:28:41.353466 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Jun 25 17:19:03 -00 2024 Jun 25 18:28:41.353474 kernel: KASLR enabled Jun 25 18:28:41.353482 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jun 25 18:28:41.353487 kernel: printk: bootconsole [pl11] enabled Jun 25 18:28:41.353493 kernel: efi: EFI v2.7 by EDK II Jun 25 18:28:41.353500 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x3e925e18 Jun 25 18:28:41.353506 kernel: random: crng init done Jun 25 18:28:41.353513 kernel: ACPI: Early table checksum verification disabled Jun 25 18:28:41.353518 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Jun 25 18:28:41.353524 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353530 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353538 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 18:28:41.353545 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353552 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353559 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353565 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353573 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353580 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353586 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jun 25 18:28:41.353592 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353599 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jun 25 18:28:41.353605 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jun 25 18:28:41.353611 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jun 25 18:28:41.353618 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jun 25 18:28:41.353624 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jun 25 18:28:41.353630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jun 25 18:28:41.353637 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jun 25 18:28:41.353644 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jun 25 18:28:41.353651 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jun 25 18:28:41.353657 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jun 25 18:28:41.353664 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jun 25 18:28:41.353670 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jun 25 18:28:41.353676 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jun 25 18:28:41.353682 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Jun 25 18:28:41.353689 kernel: Zone ranges: Jun 25 18:28:41.353695 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Jun 25 18:28:41.353701 kernel: DMA32 empty Jun 25 18:28:41.353708 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 18:28:41.353716 kernel: Movable zone start for each node Jun 25 18:28:41.353725 kernel: Early memory node ranges Jun 25 18:28:41.353731 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jun 25 18:28:41.353738 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Jun 25 18:28:41.353745 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Jun 25 18:28:41.353753 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Jun 25 18:28:41.353760 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Jun 25 18:28:41.353767 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Jun 25 18:28:41.353773 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Jun 25 18:28:41.353781 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Jun 25 18:28:41.355882 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 18:28:41.355902 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jun 25 18:28:41.355909 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jun 25 18:28:41.355917 kernel: psci: probing for conduit method from ACPI. Jun 25 18:28:41.355924 kernel: psci: PSCIv1.1 detected in firmware. Jun 25 18:28:41.355930 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 18:28:41.355937 kernel: psci: MIGRATE_INFO_TYPE not supported. Jun 25 18:28:41.355950 kernel: psci: SMC Calling Convention v1.4 Jun 25 18:28:41.355957 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jun 25 18:28:41.355964 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jun 25 18:28:41.355971 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jun 25 18:28:41.355978 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jun 25 18:28:41.355985 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 25 18:28:41.355992 kernel: Detected PIPT I-cache on CPU0 Jun 25 18:28:41.355999 kernel: CPU features: detected: GIC system register CPU interface Jun 25 18:28:41.356006 kernel: CPU features: detected: Hardware dirty bit management Jun 25 18:28:41.356013 kernel: CPU features: detected: Spectre-BHB Jun 25 18:28:41.356020 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 25 18:28:41.356027 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 25 18:28:41.356038 kernel: CPU features: detected: ARM erratum 1418040 Jun 25 18:28:41.356045 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jun 25 18:28:41.356052 kernel: alternatives: applying boot alternatives Jun 25 18:28:41.356060 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f Jun 25 18:28:41.356068 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 25 18:28:41.356075 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:28:41.356082 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:28:41.356089 kernel: Fallback order for Node 0: 0 Jun 25 18:28:41.356096 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jun 25 18:28:41.356102 kernel: Policy zone: Normal Jun 25 18:28:41.356111 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:28:41.356118 kernel: software IO TLB: area num 2. Jun 25 18:28:41.356124 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB) Jun 25 18:28:41.356132 kernel: Memory: 3986332K/4194160K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 207828K reserved, 0K cma-reserved) Jun 25 18:28:41.356138 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:28:41.356145 kernel: trace event string verifier disabled Jun 25 18:28:41.356152 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:28:41.356159 kernel: rcu: RCU event tracing is enabled. Jun 25 18:28:41.356166 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:28:41.356173 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:28:41.356180 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:28:41.356187 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:28:41.356196 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:28:41.356203 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 18:28:41.356216 kernel: GICv3: 960 SPIs implemented Jun 25 18:28:41.356224 kernel: GICv3: 0 Extended SPIs implemented Jun 25 18:28:41.356231 kernel: Root IRQ handler: gic_handle_irq Jun 25 18:28:41.356237 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 25 18:28:41.356244 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jun 25 18:28:41.356251 kernel: ITS: No ITS available, not enabling LPIs Jun 25 18:28:41.356259 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:28:41.356266 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:28:41.356276 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 25 18:28:41.356287 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 25 18:28:41.356294 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 25 18:28:41.356301 kernel: Console: colour dummy device 80x25 Jun 25 18:28:41.356309 kernel: printk: console [tty1] enabled Jun 25 18:28:41.356316 kernel: ACPI: Core revision 20230628 Jun 25 18:28:41.356324 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 25 18:28:41.356331 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:28:41.356338 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:28:41.356348 kernel: SELinux: Initializing. Jun 25 18:28:41.356355 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:28:41.356364 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:28:41.356372 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:28:41.356379 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. 
Jun 25 18:28:41.356388 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jun 25 18:28:41.356395 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Jun 25 18:28:41.356402 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 18:28:41.356410 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:28:41.356424 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:28:41.356431 kernel: Remapping and enabling EFI services. Jun 25 18:28:41.356440 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:28:41.356448 kernel: Detected PIPT I-cache on CPU1 Jun 25 18:28:41.356457 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jun 25 18:28:41.356465 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:28:41.356472 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 25 18:28:41.356480 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:28:41.356487 kernel: SMP: Total of 2 processors activated. Jun 25 18:28:41.356496 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 18:28:41.356507 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jun 25 18:28:41.356514 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 25 18:28:41.356522 kernel: CPU features: detected: CRC32 instructions Jun 25 18:28:41.356529 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 25 18:28:41.356536 kernel: CPU features: detected: LSE atomic instructions Jun 25 18:28:41.356544 kernel: CPU features: detected: Privileged Access Never Jun 25 18:28:41.356551 kernel: CPU: All CPU(s) started at EL1 Jun 25 18:28:41.356559 kernel: alternatives: applying system-wide alternatives Jun 25 18:28:41.356570 kernel: devtmpfs: initialized Jun 25 18:28:41.356578 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:28:41.356585 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:28:41.356593 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:28:41.356600 kernel: SMBIOS 3.1.0 present. Jun 25 18:28:41.356610 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Jun 25 18:28:41.356618 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:28:41.356626 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 25 18:28:41.356633 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 25 18:28:41.356643 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 25 18:28:41.356650 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:28:41.356660 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1 Jun 25 18:28:41.356668 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:28:41.356676 kernel: cpuidle: using governor menu Jun 25 18:28:41.356683 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jun 25 18:28:41.356690 kernel: ASID allocator initialised with 32768 entries Jun 25 18:28:41.356698 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:28:41.356708 kernel: Serial: AMBA PL011 UART driver Jun 25 18:28:41.356717 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jun 25 18:28:41.356725 kernel: Modules: 0 pages in range for non-PLT usage Jun 25 18:28:41.356732 kernel: Modules: 509120 pages in range for PLT usage Jun 25 18:28:41.356740 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:28:41.356747 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:28:41.356755 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 25 18:28:41.356765 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 25 18:28:41.356772 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:28:41.356780 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:28:41.356796 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 25 18:28:41.356804 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 25 18:28:41.356815 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:28:41.356822 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:28:41.356830 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:28:41.356837 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:28:41.356844 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:28:41.356852 kernel: ACPI: Interpreter enabled Jun 25 18:28:41.356862 kernel: ACPI: Using GIC for interrupt routing Jun 25 18:28:41.356871 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jun 25 18:28:41.356879 kernel: printk: console [ttyAMA0] enabled Jun 25 18:28:41.356886 kernel: printk: bootconsole [pl11] disabled Jun 25 18:28:41.356894 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jun 25 18:28:41.356901 kernel: iommu: Default domain type: Translated Jun 25 18:28:41.356909 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 25 18:28:41.356917 kernel: efivars: Registered efivars operations Jun 25 18:28:41.356927 kernel: vgaarb: loaded Jun 25 18:28:41.356936 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 25 18:28:41.356945 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:28:41.356952 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:28:41.356960 kernel: pnp: PnP ACPI init Jun 25 18:28:41.356967 kernel: pnp: PnP ACPI: found 0 devices Jun 25 18:28:41.356974 kernel: NET: Registered PF_INET protocol family Jun 25 18:28:41.356985 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:28:41.356992 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 18:28:41.357000 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:28:41.357007 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 18:28:41.357016 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 18:28:41.357027 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 18:28:41.357034 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:28:41.357042 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:28:41.357049 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:28:41.357057 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:28:41.357064 kernel: kvm [1]: HYP mode not available Jun 25 18:28:41.357074 kernel: Initialise system trusted keyrings Jun 25 18:28:41.357083 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 18:28:41.357093 kernel: Key type asymmetric registered Jun 25 18:28:41.357100 kernel: Asymmetric key parser 'x509' registered Jun 25 18:28:41.357108 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 25 18:28:41.357115 kernel: io scheduler mq-deadline registered Jun 25 18:28:41.357125 kernel: io scheduler kyber registered Jun 25 18:28:41.357133 kernel: io scheduler bfq registered Jun 25 18:28:41.357140 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:28:41.357147 kernel: thunder_xcv, ver 1.0 Jun 25 18:28:41.357154 kernel: thunder_bgx, ver 1.0 Jun 25 18:28:41.357162 kernel: nicpf, ver 1.0 Jun 25 18:28:41.357174 kernel: nicvf, ver 1.0 Jun 25 18:28:41.357331 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 25 18:28:41.357417 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T18:28:40 UTC (1719340120) Jun 25 18:28:41.357428 kernel: efifb: probing for efifb Jun 25 18:28:41.357436 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 18:28:41.357444 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 18:28:41.357454 kernel: efifb: scrolling: redraw Jun 25 18:28:41.357464 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 18:28:41.357472 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:28:41.357479 kernel: fb0: EFI VGA frame buffer device Jun 25 18:28:41.357487 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jun 25 18:28:41.357494 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 18:28:41.357504 kernel: No ACPI PMU IRQ for CPU0 Jun 25 18:28:41.357511 kernel: No ACPI PMU IRQ for CPU1 Jun 25 18:28:41.357519 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jun 25 18:28:41.357526 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jun 25 18:28:41.357536 kernel: watchdog: Hard watchdog permanently disabled Jun 25 18:28:41.357543 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:28:41.357551 kernel: Segment Routing with IPv6 Jun 25 18:28:41.357558 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:28:41.357565 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:28:41.357573 kernel: Key type dns_resolver registered Jun 25 18:28:41.357580 kernel: registered taskstats version 1 Jun 25 18:28:41.357587 kernel: Loading compiled-in X.509 certificates Jun 25 18:28:41.357595 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 751918e575d02f96b0daadd44b8f442a8c39ecd3' Jun 25 18:28:41.357603 kernel: Key type .fscrypt registered Jun 25 18:28:41.357611 kernel: Key type fscrypt-provisioning registered Jun 25 18:28:41.357618 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 18:28:41.357625 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:28:41.357633 kernel: ima: No architecture policies found Jun 25 18:28:41.357640 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 18:28:41.357647 kernel: clk: Disabling unused clocks Jun 25 18:28:41.357655 kernel: Freeing unused kernel memory: 39040K Jun 25 18:28:41.357662 kernel: Run /init as init process Jun 25 18:28:41.357671 kernel: with arguments: Jun 25 18:28:41.357678 kernel: /init Jun 25 18:28:41.357686 kernel: with environment: Jun 25 18:28:41.357693 kernel: HOME=/ Jun 25 18:28:41.357700 kernel: TERM=linux Jun 25 18:28:41.357707 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:28:41.357716 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:28:41.357729 systemd[1]: Detected virtualization microsoft. Jun 25 18:28:41.357739 systemd[1]: Detected architecture arm64. Jun 25 18:28:41.357746 systemd[1]: Running in initrd. Jun 25 18:28:41.357754 systemd[1]: No hostname configured, using default hostname. Jun 25 18:28:41.357761 systemd[1]: Hostname set to . Jun 25 18:28:41.357769 systemd[1]: Initializing machine ID from random generator. Jun 25 18:28:41.357777 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:28:41.357785 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:28:41.359831 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:28:41.359847 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:28:41.359856 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:28:41.359864 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:28:41.359872 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:28:41.359882 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:28:41.359890 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:28:41.359898 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:28:41.359908 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:28:41.359916 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:28:41.359924 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:28:41.359932 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:28:41.359940 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:28:41.359948 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:28:41.359956 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:28:41.359964 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:28:41.359973 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jun 25 18:28:41.359982 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:28:41.359990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:28:41.359998 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:28:41.360006 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:28:41.360013 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:28:41.360021 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:28:41.360029 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:28:41.360037 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:28:41.360047 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:28:41.360055 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:28:41.360086 systemd-journald[217]: Collecting audit messages is disabled. Jun 25 18:28:41.360106 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:28:41.360117 systemd-journald[217]: Journal started Jun 25 18:28:41.360135 systemd-journald[217]: Runtime Journal (/run/log/journal/77deb0f5dd154ec0b5423f2e06e15de7) is 8.0M, max 78.6M, 70.6M free. Jun 25 18:28:41.359248 systemd-modules-load[218]: Inserted module 'overlay' Jun 25 18:28:41.391350 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:28:41.391383 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:28:41.398896 kernel: Bridge firewalling registered Jun 25 18:28:41.401973 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:28:41.402769 systemd-modules-load[218]: Inserted module 'br_netfilter' Jun 25 18:28:41.422314 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:28:41.437446 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:28:41.442559 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:28:41.453002 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:28:41.476912 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:28:41.485933 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:28:41.506415 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:28:41.528940 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:28:41.546818 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:28:41.555924 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:28:41.570811 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:28:41.583503 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:28:41.612099 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:28:41.620961 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 25 18:28:41.645887 dracut-cmdline[249]: dracut-dracut-053 Jun 25 18:28:41.661549 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f Jun 25 18:28:41.650989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:28:41.696270 systemd-resolved[251]: Positive Trust Anchors: Jun 25 18:28:41.696281 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:28:41.696311 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:28:41.697293 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:28:41.698554 systemd-resolved[251]: Defaulting to hostname 'linux'. Jun 25 18:28:41.704509 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:28:41.711146 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:28:41.833830 kernel: SCSI subsystem initialized Jun 25 18:28:41.840825 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:28:41.850823 kernel: iscsi: registered transport (tcp) Jun 25 18:28:41.868616 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:28:41.868677 kernel: QLogic iSCSI HBA Driver Jun 25 18:28:41.901861 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:28:41.922053 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:28:41.956453 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:28:41.956520 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:28:41.962344 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:28:42.011820 kernel: raid6: neonx8 gen() 15722 MB/s Jun 25 18:28:42.031803 kernel: raid6: neonx4 gen() 15667 MB/s Jun 25 18:28:42.051799 kernel: raid6: neonx2 gen() 13277 MB/s Jun 25 18:28:42.072801 kernel: raid6: neonx1 gen() 10457 MB/s Jun 25 18:28:42.092803 kernel: raid6: int64x8 gen() 6952 MB/s Jun 25 18:28:42.112799 kernel: raid6: int64x4 gen() 7349 MB/s Jun 25 18:28:42.133805 kernel: raid6: int64x2 gen() 6127 MB/s Jun 25 18:28:42.157703 kernel: raid6: int64x1 gen() 5058 MB/s Jun 25 18:28:42.157741 kernel: raid6: using algorithm neonx8 gen() 15722 MB/s Jun 25 18:28:42.182522 kernel: raid6: .... 
xor() 11915 MB/s, rmw enabled Jun 25 18:28:42.182549 kernel: raid6: using neon recovery algorithm Jun 25 18:28:42.196345 kernel: xor: measuring software checksum speed Jun 25 18:28:42.196364 kernel: 8regs : 19873 MB/sec Jun 25 18:28:42.204692 kernel: 32regs : 19612 MB/sec Jun 25 18:28:42.204704 kernel: arm64_neon : 27143 MB/sec Jun 25 18:28:42.209460 kernel: xor: using function: arm64_neon (27143 MB/sec) Jun 25 18:28:42.261925 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:28:42.271957 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:28:42.287966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:28:42.312590 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jun 25 18:28:42.320824 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:28:42.338985 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:28:42.356493 dracut-pre-trigger[454]: rd.md=0: removing MD RAID activation Jun 25 18:28:42.384942 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:28:42.406936 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:28:42.441984 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:28:42.467408 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:28:42.490565 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:28:42.506007 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:28:42.521681 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:28:42.535405 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:28:42.558128 kernel: hv_vmbus: Vmbus version:5.3 Jun 25 18:28:42.566491 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:28:42.596267 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 18:28:42.596291 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 18:28:42.596301 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jun 25 18:28:42.592765 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:28:42.654822 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 18:28:42.654845 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 18:28:42.655007 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 18:28:42.655020 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jun 25 18:28:42.655030 kernel: PTP clock support registered Jun 25 18:28:42.592959 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:28:42.671984 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 18:28:42.672006 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 18:28:42.655071 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 25 18:28:42.706892 kernel: scsi host0: storvsc_host_t Jun 25 18:28:42.707074 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 18:28:42.707097 kernel: scsi host1: storvsc_host_t Jun 25 18:28:42.707182 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 18:28:42.707193 kernel: hv_vmbus: registering driver hv_utils Jun 25 18:28:42.687307 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:28:42.923412 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 18:28:42.923446 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 18:28:42.923466 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 18:28:42.687541 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:28:42.720339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:28:42.944165 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 18:28:42.919595 systemd-resolved[251]: Clock change detected. Flushing caches. Jun 25 18:28:42.948759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:28:42.982962 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 18:28:42.983920 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:28:42.983934 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 18:28:42.965831 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:28:42.976302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:28:42.995843 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:28:43.040098 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 18:28:43.075988 kernel: hv_netvsc 002248b9-0c04-0022-48b9-0c04002248b9 eth0: VF slot 1 added Jun 25 18:28:43.076120 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 18:28:43.076214 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 18:28:43.076295 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 18:28:43.076416 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 18:28:43.076599 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:28:43.076610 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 18:28:42.996088 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:28:43.005457 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 25 18:28:43.112937 kernel: hv_vmbus: registering driver hv_pci Jun 25 18:28:43.112959 kernel: hv_pci 94e0a1eb-f902-4877-9fe0-d4aae4f8464f: PCI VMBus probing: Using version 0x10004 Jun 25 18:28:43.189546 kernel: hv_pci 94e0a1eb-f902-4877-9fe0-d4aae4f8464f: PCI host bridge to bus f902:00 Jun 25 18:28:43.189663 kernel: pci_bus f902:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jun 25 18:28:43.189764 kernel: pci_bus f902:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 18:28:43.189847 kernel: pci f902:00:02.0: [15b3:1018] type 00 class 0x020000 Jun 25 18:28:43.189956 kernel: pci f902:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 18:28:43.190050 kernel: pci f902:00:02.0: enabling Extended Tags Jun 25 18:28:43.190141 kernel: pci f902:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f902:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jun 25 18:28:43.190232 kernel: pci_bus f902:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 18:28:43.190335 kernel: pci f902:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 18:28:43.044742 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:28:43.080673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:28:43.131672 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:28:43.205047 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:28:43.237604 kernel: mlx5_core f902:00:02.0: enabling device (0000 -> 0002) Jun 25 18:28:43.449476 kernel: mlx5_core f902:00:02.0: firmware version: 16.30.1284 Jun 25 18:28:43.449605 kernel: hv_netvsc 002248b9-0c04-0022-48b9-0c04002248b9 eth0: VF registering: eth1 Jun 25 18:28:43.449697 kernel: mlx5_core f902:00:02.0 eth1: joined to eth0 Jun 25 18:28:43.449800 kernel: mlx5_core f902:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jun 25 18:28:43.457336 kernel: mlx5_core f902:00:02.0 enP63746s1: renamed from eth1 Jun 25 18:28:43.721294 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 18:28:43.745335 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (504) Jun 25 18:28:43.758939 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 18:28:43.780339 kernel: BTRFS: device fsid c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (491) Jun 25 18:28:43.793183 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 18:28:43.799842 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 18:28:43.828391 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 18:28:43.845462 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:28:43.868329 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:28:43.877214 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:28:44.878413 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:28:44.878469 disk-uuid[610]: The operation has completed successfully. Jun 25 18:28:44.938674 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jun 25 18:28:44.938770 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:28:44.971455 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:28:44.985501 sh[696]: Success Jun 25 18:28:45.014508 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 18:28:45.246231 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:28:45.252009 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:28:45.267451 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:28:45.303768 kernel: BTRFS info (device dm-0): first mount of filesystem c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 Jun 25 18:28:45.303818 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:28:45.310466 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:28:45.315395 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:28:45.319821 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:28:45.697798 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:28:45.702613 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:28:45.724507 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:28:45.732455 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:28:45.769576 kernel: BTRFS info (device sda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:28:45.769623 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:28:45.776052 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:28:45.798358 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:28:45.813857 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:28:45.819335 kernel: BTRFS info (device sda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:28:45.823483 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:28:45.843487 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:28:45.852205 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:28:45.871469 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:28:45.889679 systemd-networkd[878]: lo: Link UP Jun 25 18:28:45.889693 systemd-networkd[878]: lo: Gained carrier Jun 25 18:28:45.891275 systemd-networkd[878]: Enumeration completed Jun 25 18:28:45.893089 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:28:45.893805 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:28:45.893809 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:28:45.903083 systemd[1]: Reached target network.target - Network. 
Jun 25 18:28:45.975334 kernel: mlx5_core f902:00:02.0 enP63746s1: Link up Jun 25 18:28:46.020275 systemd-networkd[878]: enP63746s1: Link UP Jun 25 18:28:46.024030 kernel: hv_netvsc 002248b9-0c04-0022-48b9-0c04002248b9 eth0: Data path switched to VF: enP63746s1 Jun 25 18:28:46.020526 systemd-networkd[878]: eth0: Link UP Jun 25 18:28:46.020941 systemd-networkd[878]: eth0: Gained carrier Jun 25 18:28:46.020951 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:28:46.043850 systemd-networkd[878]: enP63746s1: Gained carrier Jun 25 18:28:46.058350 systemd-networkd[878]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 18:28:47.042564 ignition[880]: Ignition 2.19.0 Jun 25 18:28:47.042580 ignition[880]: Stage: fetch-offline Jun 25 18:28:47.047346 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:28:47.042640 ignition[880]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:47.065543 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 18:28:47.042650 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:47.042757 ignition[880]: parsed url from cmdline: "" Jun 25 18:28:47.042760 ignition[880]: no config URL provided Jun 25 18:28:47.042764 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:28:47.042771 ignition[880]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:28:47.042776 ignition[880]: failed to fetch config: resource requires networking Jun 25 18:28:47.042967 ignition[880]: Ignition finished successfully Jun 25 18:28:47.090714 ignition[891]: Ignition 2.19.0 Jun 25 18:28:47.090721 ignition[891]: Stage: fetch Jun 25 18:28:47.090926 ignition[891]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:47.090935 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:47.091034 ignition[891]: parsed url from cmdline: "" Jun 25 18:28:47.091037 ignition[891]: no config URL provided Jun 25 18:28:47.091043 ignition[891]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:28:47.091052 ignition[891]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:28:47.091073 ignition[891]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 18:28:47.187164 ignition[891]: GET result: OK Jun 25 18:28:47.187252 ignition[891]: config has been read from IMDS userdata Jun 25 18:28:47.187320 ignition[891]: parsing config with SHA512: 9263e91e558859bef51c1acf067b44d3e66f4eeb27a49a2200f7093f3df65438954ef98cdfe86be8c8aadf960fd72d5da2d6f61173696a4086e99e3e5aaa3818 Jun 25 18:28:47.191170 unknown[891]: fetched base config from "system" Jun 25 18:28:47.191582 ignition[891]: fetch: fetch complete Jun 25 18:28:47.191178 unknown[891]: fetched base config from "system" Jun 25 18:28:47.191586 ignition[891]: fetch: fetch passed Jun 25 18:28:47.191184 unknown[891]: fetched user config from "azure" Jun 25 18:28:47.191630 ignition[891]: Ignition finished successfully Jun 25 18:28:47.201559 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:28:47.231647 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jun 25 18:28:47.250871 ignition[899]: Ignition 2.19.0 Jun 25 18:28:47.250886 ignition[899]: Stage: kargs Jun 25 18:28:47.251075 ignition[899]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:47.258121 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:28:47.251087 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:47.251986 ignition[899]: kargs: kargs passed Jun 25 18:28:47.279480 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:28:47.252031 ignition[899]: Ignition finished successfully Jun 25 18:28:47.306668 ignition[907]: Ignition 2.19.0 Jun 25 18:28:47.306683 ignition[907]: Stage: disks Jun 25 18:28:47.311498 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:28:47.306892 ignition[907]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:47.317993 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:28:47.306903 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:47.327680 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:28:47.307893 ignition[907]: disks: disks passed Jun 25 18:28:47.341122 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:28:47.307941 ignition[907]: Ignition finished successfully Jun 25 18:28:47.352265 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:28:47.365011 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:28:47.393599 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:28:47.461354 systemd-fsck[917]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 18:28:47.470354 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:28:47.487565 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:28:47.543334 kernel: EXT4-fs (sda9): mounted filesystem 91548e21-ce72-437e-94b9-d3fed380163a r/w with ordered data mode. Quota mode: none. Jun 25 18:28:47.543744 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:28:47.549445 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:28:47.603436 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:28:47.610412 systemd-networkd[878]: enP63746s1: Gained IPv6LL Jun 25 18:28:47.613794 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:28:47.631454 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 18:28:47.646699 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:28:47.646745 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:28:47.669141 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jun 25 18:28:47.714927 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (928) Jun 25 18:28:47.714957 kernel: BTRFS info (device sda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:28:47.714987 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:28:47.714998 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:28:47.727394 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:28:47.721645 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:28:47.729680 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:28:47.738662 systemd-networkd[878]: eth0: Gained IPv6LL Jun 25 18:28:48.180217 coreos-metadata[930]: Jun 25 18:28:48.180 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:28:48.190081 coreos-metadata[930]: Jun 25 18:28:48.190 INFO Fetch successful Jun 25 18:28:48.195319 coreos-metadata[930]: Jun 25 18:28:48.190 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:28:48.206107 coreos-metadata[930]: Jun 25 18:28:48.204 INFO Fetch successful Jun 25 18:28:48.218361 coreos-metadata[930]: Jun 25 18:28:48.218 INFO wrote hostname ci-4012.0.0-a-5284b277fa to /sysroot/etc/hostname Jun 25 18:28:48.227535 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:28:48.631356 initrd-setup-root[957]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:28:48.680203 initrd-setup-root[964]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:28:48.689278 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:28:48.698811 initrd-setup-root[978]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:28:49.841909 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:28:49.859592 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:28:49.873610 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:28:49.893781 kernel: BTRFS info (device sda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:28:49.887276 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:28:49.917708 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:28:49.932018 ignition[1046]: INFO : Ignition 2.19.0 Jun 25 18:28:49.932018 ignition[1046]: INFO : Stage: mount Jun 25 18:28:49.941224 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:49.941224 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:49.941224 ignition[1046]: INFO : mount: mount passed Jun 25 18:28:49.941224 ignition[1046]: INFO : Ignition finished successfully Jun 25 18:28:49.938118 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:28:49.955569 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:28:50.000902 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jun 25 18:28:50.028948 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1058) Jun 25 18:28:50.028994 kernel: BTRFS info (device sda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:28:50.034922 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:28:50.039130 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:28:50.046332 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:28:50.047931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:28:50.078247 ignition[1075]: INFO : Ignition 2.19.0 Jun 25 18:28:50.078247 ignition[1075]: INFO : Stage: files Jun 25 18:28:50.086756 ignition[1075]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:50.086756 ignition[1075]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:50.086756 ignition[1075]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:28:50.086756 ignition[1075]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:28:50.086756 ignition[1075]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:28:50.157824 ignition[1075]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:28:50.165493 ignition[1075]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:28:50.165493 ignition[1075]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:28:50.158816 unknown[1075]: wrote ssh authorized keys file for user: core Jun 25 18:28:50.215231 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 18:28:50.225937 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 18:28:50.521016 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:28:50.719293 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jun 25 18:28:51.167742 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:28:51.377699 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 18:28:51.377699 ignition[1075]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:28:51.409029 ignition[1075]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: files passed Jun 25 18:28:51.420324 ignition[1075]: INFO : Ignition finished successfully Jun 25 18:28:51.433161 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:28:51.474629 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:28:51.494512 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:28:51.510897 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:28:51.511008 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jun 25 18:28:51.552829 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:28:51.552829 initrd-setup-root-after-ignition[1104]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:28:51.571771 initrd-setup-root-after-ignition[1108]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:28:51.572866 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:28:51.587789 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:28:51.612589 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:28:51.643962 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:28:51.644078 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:28:51.657438 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:28:51.671161 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:28:51.684422 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:28:51.705803 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:28:51.730508 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:28:51.751829 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:28:51.772606 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:28:51.780487 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:28:51.795521 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:28:51.809016 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:28:51.809145 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:28:51.828300 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:28:51.835269 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:28:51.848868 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:28:51.862159 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:28:51.876317 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:28:51.890126 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:28:51.904630 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:28:51.920332 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:28:51.933098 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:28:51.945959 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:28:51.957953 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:28:51.958075 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:28:51.976530 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:28:51.984419 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:28:51.998810 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
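The two grep errors above are harmless: the root-completion step merely looks for optional enabled-sysext.conf lists in /sysroot/etc and /sysroot/usr/share, and neither file exists on this image. A hedged sketch of an equivalent check (the paths are the ones in the log; the logic only approximates the real shell script):

    from pathlib import Path

    CANDIDATES = [
        Path("/sysroot/etc/flatcar/enabled-sysext.conf"),
        Path("/sysroot/usr/share/flatcar/enabled-sysext.conf"),
    ]

    def enabled_sysexts():
        """Collect non-empty, non-comment lines from whichever lists exist."""
        names = []
        for conf in CANDIDATES:
            if not conf.is_file():
                print(f"grep: {conf}: No such file or directory")
                continue
            for line in conf.read_text().splitlines():
                line = line.strip()
                if line and not line.startswith("#"):
                    names.append(line)
        return names

    print(enabled_sysexts())   # [] on this boot: no optional sysexts requested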
Jun 25 18:28:52.005184 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:28:52.013710 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:28:52.013829 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:28:52.034477 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:28:52.034607 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:28:52.043158 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:28:52.043251 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:28:52.055381 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 18:28:52.137550 ignition[1128]: INFO : Ignition 2.19.0 Jun 25 18:28:52.055485 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:28:52.167521 ignition[1128]: INFO : Stage: umount Jun 25 18:28:52.167521 ignition[1128]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:52.167521 ignition[1128]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:52.167521 ignition[1128]: INFO : umount: umount passed Jun 25 18:28:52.167521 ignition[1128]: INFO : Ignition finished successfully Jun 25 18:28:52.092630 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:28:52.115665 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:28:52.131033 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:28:52.131198 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:28:52.146576 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:28:52.146742 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:28:52.169629 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:28:52.169727 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:28:52.182067 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:28:52.182173 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:28:52.193859 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:28:52.193910 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:28:52.208495 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:28:52.208556 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:28:52.220134 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:28:52.220178 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 18:28:52.234177 systemd[1]: Stopped target network.target - Network. Jun 25 18:28:52.248442 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:28:52.248506 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:28:52.265693 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:28:52.289787 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:28:52.301352 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:28:52.313515 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:28:52.320752 systemd[1]: Stopped target sockets.target - Socket Units. 
Jun 25 18:28:52.333339 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:28:52.333390 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:28:52.346461 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:28:52.346500 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:28:52.360146 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:28:52.360203 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:28:52.381036 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:28:52.381102 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:28:52.395741 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:28:52.408486 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:28:52.415356 systemd-networkd[878]: eth0: DHCPv6 lease lost Jun 25 18:28:52.430356 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:28:52.430937 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:28:52.431053 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:28:52.444761 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:28:52.689155 kernel: hv_netvsc 002248b9-0c04-0022-48b9-0c04002248b9 eth0: Data path switched from VF: enP63746s1 Jun 25 18:28:52.445332 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:28:52.459345 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:28:52.459403 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:28:52.502545 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:28:52.514357 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:28:52.514438 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:28:52.529014 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:28:52.529069 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:28:52.541725 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:28:52.541775 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:28:52.554999 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:28:52.555048 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:28:52.569221 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:28:52.619546 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:28:52.619746 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:28:52.634702 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:28:52.634754 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:28:52.647338 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:28:52.647379 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:28:52.660183 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:28:52.660243 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:28:52.703560 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
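The hv_netvsc line above records Azure accelerated networking handing the data path back from the Mellanox VF (enP63746s1) to the synthetic eth0 before the initrd network is torn down. Purely as an illustration of how such a synthetic/VF pair can be observed on a running system (not part of the boot flow), a small sysfs listing sketch:

    import os

    NET = "/sys/class/net"

    def describe_interfaces():
        for name in sorted(os.listdir(NET)):
            base = os.path.join(NET, name)
            driver_link = os.path.join(base, "device", "driver")
            # Purely virtual devices (lo, bridges, ...) have no backing driver link.
            driver = os.path.basename(os.readlink(driver_link)) if os.path.islink(driver_link) else "virtual"
            try:
                with open(os.path.join(base, "operstate")) as f:
                    state = f.read().strip()
            except OSError:
                state = "unknown"
            print(f"{name:12s} driver={driver:12s} state={state}")

    # On the VM above this would show eth0 (hv_netvsc) next to enP63746s1 (mlx5_core).
    describe_interfaces()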
Jun 25 18:28:52.703632 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:28:52.723222 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:28:52.723284 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:28:52.768535 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:28:52.787228 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:28:52.787300 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:28:52.802550 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:28:52.802607 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:28:52.816737 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:28:52.816788 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:28:52.832897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:28:52.832945 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:28:52.848666 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:28:52.848765 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:28:52.861537 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:28:52.861624 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:28:52.916522 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:28:52.916904 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:28:52.934808 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:28:52.948090 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:28:52.948163 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:28:53.110721 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jun 25 18:28:52.987579 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:28:53.015279 systemd[1]: Switching root. 
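Taken together, the initrd portion of this journal runs from the first kernel message at 18:28:41 to "Switching root." at 18:28:53. A throwaway sketch for measuring such spans, assuming the journal text is saved to a file (the filename is hypothetical) and uses exactly the "Jun 25 HH:MM:SS.ffffff" stamp format shown here:

    import re
    from datetime import datetime

    # Timestamp format used throughout this journal, e.g. "Jun 25 18:28:53.121082".
    STAMP = re.compile(r"([A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}\.\d{6})")

    def span_seconds(text, year=2024):
        """Return elapsed seconds between the earliest and latest timestamp in text."""
        stamps = [datetime.strptime(f"{year} {m}", "%Y %b %d %H:%M:%S.%f")
                  for m in STAMP.findall(text)]
        return (max(stamps) - min(stamps)).total_seconds()

    with open("initrd-journal.txt") as f:                     # hypothetical capture of this log
        print(f"initrd took {span_seconds(f.read()):.1f}s")   # roughly 11.8s for this boot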
Jun 25 18:28:53.121082 systemd-journald[217]: Journal stopped Jun 25 18:28:41.353445 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 25 18:28:41.353466 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Jun 25 17:19:03 -00 2024 Jun 25 18:28:41.353474 kernel: KASLR enabled Jun 25 18:28:41.353482 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jun 25 18:28:41.353487 kernel: printk: bootconsole [pl11] enabled Jun 25 18:28:41.353493 kernel: efi: EFI v2.7 by EDK II Jun 25 18:28:41.353500 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x3e925e18 Jun 25 18:28:41.353506 kernel: random: crng init done Jun 25 18:28:41.353513 kernel: ACPI: Early table checksum verification disabled Jun 25 18:28:41.353518 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Jun 25 18:28:41.353524 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353530 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353538 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 18:28:41.353545 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353552 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353559 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353565 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353573 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353580 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353586 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jun 25 18:28:41.353592 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:28:41.353599 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jun 25 18:28:41.353605 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jun 25 18:28:41.353611 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jun 25 18:28:41.353618 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jun 25 18:28:41.353624 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jun 25 18:28:41.353630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jun 25 18:28:41.353637 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jun 25 18:28:41.353644 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jun 25 18:28:41.353651 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jun 25 18:28:41.353657 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jun 25 18:28:41.353664 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jun 25 18:28:41.353670 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jun 25 18:28:41.353676 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jun 25 18:28:41.353682 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Jun 25 18:28:41.353689 kernel: Zone ranges: Jun 25 
18:28:41.353695 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jun 25 18:28:41.353701 kernel: DMA32 empty Jun 25 18:28:41.353708 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 18:28:41.353716 kernel: Movable zone start for each node Jun 25 18:28:41.353725 kernel: Early memory node ranges Jun 25 18:28:41.353731 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jun 25 18:28:41.353738 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Jun 25 18:28:41.353745 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Jun 25 18:28:41.353753 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Jun 25 18:28:41.353760 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Jun 25 18:28:41.353767 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Jun 25 18:28:41.353773 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Jun 25 18:28:41.353781 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Jun 25 18:28:41.355882 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 18:28:41.355902 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jun 25 18:28:41.355909 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jun 25 18:28:41.355917 kernel: psci: probing for conduit method from ACPI. Jun 25 18:28:41.355924 kernel: psci: PSCIv1.1 detected in firmware. Jun 25 18:28:41.355930 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 18:28:41.355937 kernel: psci: MIGRATE_INFO_TYPE not supported. Jun 25 18:28:41.355950 kernel: psci: SMC Calling Convention v1.4 Jun 25 18:28:41.355957 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jun 25 18:28:41.355964 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jun 25 18:28:41.355971 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jun 25 18:28:41.355978 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jun 25 18:28:41.355985 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 25 18:28:41.355992 kernel: Detected PIPT I-cache on CPU0 Jun 25 18:28:41.355999 kernel: CPU features: detected: GIC system register CPU interface Jun 25 18:28:41.356006 kernel: CPU features: detected: Hardware dirty bit management Jun 25 18:28:41.356013 kernel: CPU features: detected: Spectre-BHB Jun 25 18:28:41.356020 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 25 18:28:41.356027 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 25 18:28:41.356038 kernel: CPU features: detected: ARM erratum 1418040 Jun 25 18:28:41.356045 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jun 25 18:28:41.356052 kernel: alternatives: applying boot alternatives Jun 25 18:28:41.356060 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f Jun 25 18:28:41.356068 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 25 18:28:41.356075 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:28:41.356082 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:28:41.356089 kernel: Fallback order for Node 0: 0 Jun 25 18:28:41.356096 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jun 25 18:28:41.356102 kernel: Policy zone: Normal Jun 25 18:28:41.356111 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:28:41.356118 kernel: software IO TLB: area num 2. Jun 25 18:28:41.356124 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB) Jun 25 18:28:41.356132 kernel: Memory: 3986332K/4194160K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 207828K reserved, 0K cma-reserved) Jun 25 18:28:41.356138 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:28:41.356145 kernel: trace event string verifier disabled Jun 25 18:28:41.356152 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:28:41.356159 kernel: rcu: RCU event tracing is enabled. Jun 25 18:28:41.356166 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:28:41.356173 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:28:41.356180 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:28:41.356187 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:28:41.356196 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:28:41.356203 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 18:28:41.356216 kernel: GICv3: 960 SPIs implemented Jun 25 18:28:41.356224 kernel: GICv3: 0 Extended SPIs implemented Jun 25 18:28:41.356231 kernel: Root IRQ handler: gic_handle_irq Jun 25 18:28:41.356237 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 25 18:28:41.356244 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jun 25 18:28:41.356251 kernel: ITS: No ITS available, not enabling LPIs Jun 25 18:28:41.356259 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:28:41.356266 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:28:41.356276 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 25 18:28:41.356287 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 25 18:28:41.356294 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 25 18:28:41.356301 kernel: Console: colour dummy device 80x25 Jun 25 18:28:41.356309 kernel: printk: console [tty1] enabled Jun 25 18:28:41.356316 kernel: ACPI: Core revision 20230628 Jun 25 18:28:41.356324 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 25 18:28:41.356331 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:28:41.356338 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:28:41.356348 kernel: SELinux: Initializing. Jun 25 18:28:41.356355 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:28:41.356364 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:28:41.356372 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:28:41.356379 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. 
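The Memory: line above is self-consistent: the 207828K reserved figure is exactly the difference between the 4194160K of physical memory and the 3986332K reported as available. A one-line check with the numbers copied from the log:

    total_k, available_k, reserved_k = 4194160, 3986332, 207828
    assert total_k - available_k == reserved_k                      # 207828K reserved
    print(f"reserved: {100 * reserved_k / total_k:.1f}% of RAM")    # ~5.0%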
Jun 25 18:28:41.356388 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jun 25 18:28:41.356395 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Jun 25 18:28:41.356402 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 18:28:41.356410 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:28:41.356424 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:28:41.356431 kernel: Remapping and enabling EFI services. Jun 25 18:28:41.356440 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:28:41.356448 kernel: Detected PIPT I-cache on CPU1 Jun 25 18:28:41.356457 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jun 25 18:28:41.356465 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:28:41.356472 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 25 18:28:41.356480 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:28:41.356487 kernel: SMP: Total of 2 processors activated. Jun 25 18:28:41.356496 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 18:28:41.356507 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jun 25 18:28:41.356514 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 25 18:28:41.356522 kernel: CPU features: detected: CRC32 instructions Jun 25 18:28:41.356529 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 25 18:28:41.356536 kernel: CPU features: detected: LSE atomic instructions Jun 25 18:28:41.356544 kernel: CPU features: detected: Privileged Access Never Jun 25 18:28:41.356551 kernel: CPU: All CPU(s) started at EL1 Jun 25 18:28:41.356559 kernel: alternatives: applying system-wide alternatives Jun 25 18:28:41.356570 kernel: devtmpfs: initialized Jun 25 18:28:41.356578 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:28:41.356585 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:28:41.356593 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:28:41.356600 kernel: SMBIOS 3.1.0 present. Jun 25 18:28:41.356610 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Jun 25 18:28:41.356618 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:28:41.356626 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 25 18:28:41.356633 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 25 18:28:41.356643 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 25 18:28:41.356650 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:28:41.356660 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1 Jun 25 18:28:41.356668 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:28:41.356676 kernel: cpuidle: using governor menu Jun 25 18:28:41.356683 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jun 25 18:28:41.356690 kernel: ASID allocator initialised with 32768 entries Jun 25 18:28:41.356698 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:28:41.356708 kernel: Serial: AMBA PL011 UART driver Jun 25 18:28:41.356717 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jun 25 18:28:41.356725 kernel: Modules: 0 pages in range for non-PLT usage Jun 25 18:28:41.356732 kernel: Modules: 509120 pages in range for PLT usage Jun 25 18:28:41.356740 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:28:41.356747 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:28:41.356755 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 25 18:28:41.356765 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 25 18:28:41.356772 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:28:41.356780 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:28:41.356796 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 25 18:28:41.356804 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 25 18:28:41.356815 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:28:41.356822 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:28:41.356830 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:28:41.356837 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:28:41.356844 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:28:41.356852 kernel: ACPI: Interpreter enabled Jun 25 18:28:41.356862 kernel: ACPI: Using GIC for interrupt routing Jun 25 18:28:41.356871 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jun 25 18:28:41.356879 kernel: printk: console [ttyAMA0] enabled Jun 25 18:28:41.356886 kernel: printk: bootconsole [pl11] disabled Jun 25 18:28:41.356894 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jun 25 18:28:41.356901 kernel: iommu: Default domain type: Translated Jun 25 18:28:41.356909 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 25 18:28:41.356917 kernel: efivars: Registered efivars operations Jun 25 18:28:41.356927 kernel: vgaarb: loaded Jun 25 18:28:41.356936 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 25 18:28:41.356945 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:28:41.356952 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:28:41.356960 kernel: pnp: PnP ACPI init Jun 25 18:28:41.356967 kernel: pnp: PnP ACPI: found 0 devices Jun 25 18:28:41.356974 kernel: NET: Registered PF_INET protocol family Jun 25 18:28:41.356985 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:28:41.356992 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 18:28:41.357000 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:28:41.357007 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 18:28:41.357016 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 18:28:41.357027 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 18:28:41.357034 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:28:41.357042 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:28:41.357049 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:28:41.357057 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:28:41.357064 kernel: kvm [1]: HYP mode not available Jun 25 18:28:41.357074 kernel: Initialise system trusted keyrings Jun 25 18:28:41.357083 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 18:28:41.357093 kernel: Key type asymmetric registered Jun 25 18:28:41.357100 kernel: Asymmetric key parser 'x509' registered Jun 25 18:28:41.357108 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 25 18:28:41.357115 kernel: io scheduler mq-deadline registered Jun 25 18:28:41.357125 kernel: io scheduler kyber registered Jun 25 18:28:41.357133 kernel: io scheduler bfq registered Jun 25 18:28:41.357140 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:28:41.357147 kernel: thunder_xcv, ver 1.0 Jun 25 18:28:41.357154 kernel: thunder_bgx, ver 1.0 Jun 25 18:28:41.357162 kernel: nicpf, ver 1.0 Jun 25 18:28:41.357174 kernel: nicvf, ver 1.0 Jun 25 18:28:41.357331 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 25 18:28:41.357417 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T18:28:40 UTC (1719340120) Jun 25 18:28:41.357428 kernel: efifb: probing for efifb Jun 25 18:28:41.357436 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 18:28:41.357444 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 18:28:41.357454 kernel: efifb: scrolling: redraw Jun 25 18:28:41.357464 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 18:28:41.357472 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:28:41.357479 kernel: fb0: EFI VGA frame buffer device Jun 25 18:28:41.357487 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jun 25 18:28:41.357494 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 18:28:41.357504 kernel: No ACPI PMU IRQ for CPU0 Jun 25 18:28:41.357511 kernel: No ACPI PMU IRQ for CPU1 Jun 25 18:28:41.357519 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jun 25 18:28:41.357526 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jun 25 18:28:41.357536 kernel: watchdog: Hard watchdog permanently disabled Jun 25 18:28:41.357543 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:28:41.357551 kernel: Segment Routing with IPv6 Jun 25 18:28:41.357558 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:28:41.357565 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:28:41.357573 kernel: Key type dns_resolver registered Jun 25 18:28:41.357580 kernel: registered taskstats version 1 Jun 25 18:28:41.357587 kernel: Loading compiled-in X.509 certificates Jun 25 18:28:41.357595 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 751918e575d02f96b0daadd44b8f442a8c39ecd3' Jun 25 18:28:41.357603 kernel: Key type .fscrypt registered Jun 25 18:28:41.357611 kernel: Key type fscrypt-provisioning registered Jun 25 18:28:41.357618 kernel: ima: No TPM chip found, activating TPM-bypass! 
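The rtc-efi line above prints the same instant twice, once as a calendar date and once as a Unix timestamp, and the two agree. A quick check of that conversion, using the values from the log:

    from datetime import datetime, timezone

    booted = datetime.fromtimestamp(1719340120, tz=timezone.utc)
    print(booted.isoformat())            # 2024-06-25T18:28:40+00:00
    assert booted == datetime(2024, 6, 25, 18, 28, 40, tzinfo=timezone.utc)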
Jun 25 18:28:41.357625 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:28:41.357633 kernel: ima: No architecture policies found Jun 25 18:28:41.357640 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 18:28:41.357647 kernel: clk: Disabling unused clocks Jun 25 18:28:41.357655 kernel: Freeing unused kernel memory: 39040K Jun 25 18:28:41.357662 kernel: Run /init as init process Jun 25 18:28:41.357671 kernel: with arguments: Jun 25 18:28:41.357678 kernel: /init Jun 25 18:28:41.357686 kernel: with environment: Jun 25 18:28:41.357693 kernel: HOME=/ Jun 25 18:28:41.357700 kernel: TERM=linux Jun 25 18:28:41.357707 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:28:41.357716 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:28:41.357729 systemd[1]: Detected virtualization microsoft. Jun 25 18:28:41.357739 systemd[1]: Detected architecture arm64. Jun 25 18:28:41.357746 systemd[1]: Running in initrd. Jun 25 18:28:41.357754 systemd[1]: No hostname configured, using default hostname. Jun 25 18:28:41.357761 systemd[1]: Hostname set to . Jun 25 18:28:41.357769 systemd[1]: Initializing machine ID from random generator. Jun 25 18:28:41.357777 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:28:41.357785 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:28:41.359831 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:28:41.359847 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:28:41.359856 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:28:41.359864 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:28:41.359872 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:28:41.359882 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:28:41.359890 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:28:41.359898 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:28:41.359908 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:28:41.359916 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:28:41.359924 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:28:41.359932 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:28:41.359940 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:28:41.359948 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:28:41.359956 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:28:41.359964 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:28:41.359973 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jun 25 18:28:41.359982 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:28:41.359990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:28:41.359998 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:28:41.360006 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:28:41.360013 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:28:41.360021 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:28:41.360029 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:28:41.360037 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:28:41.360047 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:28:41.360055 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:28:41.360086 systemd-journald[217]: Collecting audit messages is disabled. Jun 25 18:28:41.360106 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:28:41.360117 systemd-journald[217]: Journal started Jun 25 18:28:41.360135 systemd-journald[217]: Runtime Journal (/run/log/journal/77deb0f5dd154ec0b5423f2e06e15de7) is 8.0M, max 78.6M, 70.6M free. Jun 25 18:28:41.359248 systemd-modules-load[218]: Inserted module 'overlay' Jun 25 18:28:41.391350 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:28:41.391383 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:28:41.398896 kernel: Bridge firewalling registered Jun 25 18:28:41.401973 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:28:41.402769 systemd-modules-load[218]: Inserted module 'br_netfilter' Jun 25 18:28:41.422314 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:28:41.437446 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:28:41.442559 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:28:41.453002 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:28:41.476912 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:28:41.485933 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:28:41.506415 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:28:41.528940 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:28:41.546818 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:28:41.555924 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:28:41.570811 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:28:41.583503 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:28:41.612099 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:28:41.620961 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 25 18:28:41.645887 dracut-cmdline[249]: dracut-dracut-053 Jun 25 18:28:41.661549 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f Jun 25 18:28:41.650989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:28:41.696270 systemd-resolved[251]: Positive Trust Anchors: Jun 25 18:28:41.696281 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:28:41.696311 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:28:41.697293 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:28:41.698554 systemd-resolved[251]: Defaulting to hostname 'linux'. Jun 25 18:28:41.704509 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:28:41.711146 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:28:41.833830 kernel: SCSI subsystem initialized Jun 25 18:28:41.840825 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:28:41.850823 kernel: iscsi: registered transport (tcp) Jun 25 18:28:41.868616 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:28:41.868677 kernel: QLogic iSCSI HBA Driver Jun 25 18:28:41.901861 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:28:41.922053 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:28:41.956453 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:28:41.956520 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:28:41.962344 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:28:42.011820 kernel: raid6: neonx8 gen() 15722 MB/s Jun 25 18:28:42.031803 kernel: raid6: neonx4 gen() 15667 MB/s Jun 25 18:28:42.051799 kernel: raid6: neonx2 gen() 13277 MB/s Jun 25 18:28:42.072801 kernel: raid6: neonx1 gen() 10457 MB/s Jun 25 18:28:42.092803 kernel: raid6: int64x8 gen() 6952 MB/s Jun 25 18:28:42.112799 kernel: raid6: int64x4 gen() 7349 MB/s Jun 25 18:28:42.133805 kernel: raid6: int64x2 gen() 6127 MB/s Jun 25 18:28:42.157703 kernel: raid6: int64x1 gen() 5058 MB/s Jun 25 18:28:42.157741 kernel: raid6: using algorithm neonx8 gen() 15722 MB/s Jun 25 18:28:42.182522 kernel: raid6: .... 
xor() 11915 MB/s, rmw enabled Jun 25 18:28:42.182549 kernel: raid6: using neon recovery algorithm Jun 25 18:28:42.196345 kernel: xor: measuring software checksum speed Jun 25 18:28:42.196364 kernel: 8regs : 19873 MB/sec Jun 25 18:28:42.204692 kernel: 32regs : 19612 MB/sec Jun 25 18:28:42.204704 kernel: arm64_neon : 27143 MB/sec Jun 25 18:28:42.209460 kernel: xor: using function: arm64_neon (27143 MB/sec) Jun 25 18:28:42.261925 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:28:42.271957 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:28:42.287966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:28:42.312590 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jun 25 18:28:42.320824 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:28:42.338985 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:28:42.356493 dracut-pre-trigger[454]: rd.md=0: removing MD RAID activation Jun 25 18:28:42.384942 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:28:42.406936 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:28:42.441984 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:28:42.467408 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:28:42.490565 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:28:42.506007 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:28:42.521681 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:28:42.535405 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:28:42.558128 kernel: hv_vmbus: Vmbus version:5.3 Jun 25 18:28:42.566491 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:28:42.596267 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 18:28:42.596291 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 18:28:42.596301 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jun 25 18:28:42.592765 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:28:42.654822 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 18:28:42.654845 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 18:28:42.655007 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 18:28:42.655020 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jun 25 18:28:42.655030 kernel: PTP clock support registered Jun 25 18:28:42.592959 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:28:42.671984 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 18:28:42.672006 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 18:28:42.655071 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
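The dracut-cmdline hook above echoes the full kernel command line, which is where the initrd learns the root label, the dm-verity hash for /usr, and the flatcar.oem.id=azure hint that later steers Ignition to the Azure provider. A minimal sketch of splitting such a command line into bare flags and key=value options (the string below is abbreviated from the one logged above; in practice one would read /proc/cmdline):

    def parse_cmdline(cmdline):
        """Split a kernel command line into bare flags and key=value options."""
        flags, options = [], {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            if sep:
                options[key] = value
            else:
                flags.append(key)
        return flags, options

    # Abbreviated from the command line logged above.
    flags, options = parse_cmdline(
        "rd.driver.pre=btrfs root=LABEL=ROOT rootflags=rw mount.usrflags=ro "
        "flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin"
    )
    print(options["flatcar.oem.id"])   # azure
    print(flags)                       # ['flatcar.autologin']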
Jun 25 18:28:42.706892 kernel: scsi host0: storvsc_host_t Jun 25 18:28:42.707074 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 18:28:42.707097 kernel: scsi host1: storvsc_host_t Jun 25 18:28:42.707182 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 18:28:42.707193 kernel: hv_vmbus: registering driver hv_utils Jun 25 18:28:42.687307 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:28:42.923412 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 18:28:42.923446 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 18:28:42.923466 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 18:28:42.687541 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:28:42.720339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:28:42.944165 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 18:28:42.919595 systemd-resolved[251]: Clock change detected. Flushing caches. Jun 25 18:28:42.948759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:28:42.982962 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 18:28:42.983920 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:28:42.983934 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 18:28:42.965831 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:28:42.976302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:28:42.995843 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:28:43.040098 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 18:28:43.075988 kernel: hv_netvsc 002248b9-0c04-0022-48b9-0c04002248b9 eth0: VF slot 1 added Jun 25 18:28:43.076120 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 18:28:43.076214 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 18:28:43.076295 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 18:28:43.076416 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 18:28:43.076599 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:28:43.076610 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 18:28:42.996088 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:28:43.005457 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
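The storvsc probe above sizes the OS disk at 63737856 logical blocks of 512 bytes, which is where the "32.6 GB/30.4 GiB" in parentheses comes from (decimal gigabytes versus binary gibibytes). The arithmetic, with the figures from the log:

    blocks, block_size = 63_737_856, 512
    size_bytes = blocks * block_size
    print(f"{size_bytes / 1e9:.1f} GB")      # 32.6 GB  (decimal)
    print(f"{size_bytes / 2**30:.1f} GiB")   # 30.4 GiB (binary)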
Jun 25 18:28:43.112937 kernel: hv_vmbus: registering driver hv_pci Jun 25 18:28:43.112959 kernel: hv_pci 94e0a1eb-f902-4877-9fe0-d4aae4f8464f: PCI VMBus probing: Using version 0x10004 Jun 25 18:28:43.189546 kernel: hv_pci 94e0a1eb-f902-4877-9fe0-d4aae4f8464f: PCI host bridge to bus f902:00 Jun 25 18:28:43.189663 kernel: pci_bus f902:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jun 25 18:28:43.189764 kernel: pci_bus f902:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 18:28:43.189847 kernel: pci f902:00:02.0: [15b3:1018] type 00 class 0x020000 Jun 25 18:28:43.189956 kernel: pci f902:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 18:28:43.190050 kernel: pci f902:00:02.0: enabling Extended Tags Jun 25 18:28:43.190141 kernel: pci f902:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f902:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jun 25 18:28:43.190232 kernel: pci_bus f902:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 18:28:43.190335 kernel: pci f902:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 18:28:43.044742 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:28:43.080673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:28:43.131672 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:28:43.205047 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:28:43.237604 kernel: mlx5_core f902:00:02.0: enabling device (0000 -> 0002) Jun 25 18:28:43.449476 kernel: mlx5_core f902:00:02.0: firmware version: 16.30.1284 Jun 25 18:28:43.449605 kernel: hv_netvsc 002248b9-0c04-0022-48b9-0c04002248b9 eth0: VF registering: eth1 Jun 25 18:28:43.449697 kernel: mlx5_core f902:00:02.0 eth1: joined to eth0 Jun 25 18:28:43.449800 kernel: mlx5_core f902:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jun 25 18:28:43.457336 kernel: mlx5_core f902:00:02.0 enP63746s1: renamed from eth1 Jun 25 18:28:43.721294 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 18:28:43.745335 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (504) Jun 25 18:28:43.758939 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 18:28:43.780339 kernel: BTRFS: device fsid c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (491) Jun 25 18:28:43.793183 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 18:28:43.799842 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 18:28:43.828391 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 18:28:43.845462 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:28:43.868329 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:28:43.877214 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:28:44.878413 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:28:44.878469 disk-uuid[610]: The operation has completed successfully. Jun 25 18:28:44.938674 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jun 25 18:28:44.938770 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:28:44.971455 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:28:44.985501 sh[696]: Success Jun 25 18:28:45.014508 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 18:28:45.246231 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:28:45.252009 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:28:45.267451 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:28:45.303768 kernel: BTRFS info (device dm-0): first mount of filesystem c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 Jun 25 18:28:45.303818 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:28:45.310466 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:28:45.315395 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:28:45.319821 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:28:45.697798 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:28:45.702613 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:28:45.724507 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:28:45.732455 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:28:45.769576 kernel: BTRFS info (device sda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:28:45.769623 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:28:45.776052 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:28:45.798358 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:28:45.813857 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:28:45.819335 kernel: BTRFS info (device sda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:28:45.823483 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:28:45.843487 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:28:45.852205 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:28:45.871469 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:28:45.889679 systemd-networkd[878]: lo: Link UP Jun 25 18:28:45.889693 systemd-networkd[878]: lo: Gained carrier Jun 25 18:28:45.891275 systemd-networkd[878]: Enumeration completed Jun 25 18:28:45.893089 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:28:45.893805 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:28:45.893809 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:28:45.903083 systemd[1]: Reached target network.target - Network. 
Jun 25 18:28:45.975334 kernel: mlx5_core f902:00:02.0 enP63746s1: Link up Jun 25 18:28:46.020275 systemd-networkd[878]: enP63746s1: Link UP Jun 25 18:28:46.024030 kernel: hv_netvsc 002248b9-0c04-0022-48b9-0c04002248b9 eth0: Data path switched to VF: enP63746s1 Jun 25 18:28:46.020526 systemd-networkd[878]: eth0: Link UP Jun 25 18:28:46.020941 systemd-networkd[878]: eth0: Gained carrier Jun 25 18:28:46.020951 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:28:46.043850 systemd-networkd[878]: enP63746s1: Gained carrier Jun 25 18:28:46.058350 systemd-networkd[878]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 18:28:47.042564 ignition[880]: Ignition 2.19.0 Jun 25 18:28:47.042580 ignition[880]: Stage: fetch-offline Jun 25 18:28:47.047346 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:28:47.042640 ignition[880]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:47.065543 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 18:28:47.042650 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:47.042757 ignition[880]: parsed url from cmdline: "" Jun 25 18:28:47.042760 ignition[880]: no config URL provided Jun 25 18:28:47.042764 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:28:47.042771 ignition[880]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:28:47.042776 ignition[880]: failed to fetch config: resource requires networking Jun 25 18:28:47.042967 ignition[880]: Ignition finished successfully Jun 25 18:28:47.090714 ignition[891]: Ignition 2.19.0 Jun 25 18:28:47.090721 ignition[891]: Stage: fetch Jun 25 18:28:47.090926 ignition[891]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:47.090935 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:47.091034 ignition[891]: parsed url from cmdline: "" Jun 25 18:28:47.091037 ignition[891]: no config URL provided Jun 25 18:28:47.091043 ignition[891]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:28:47.091052 ignition[891]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:28:47.091073 ignition[891]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 18:28:47.187164 ignition[891]: GET result: OK Jun 25 18:28:47.187252 ignition[891]: config has been read from IMDS userdata Jun 25 18:28:47.187320 ignition[891]: parsing config with SHA512: 9263e91e558859bef51c1acf067b44d3e66f4eeb27a49a2200f7093f3df65438954ef98cdfe86be8c8aadf960fd72d5da2d6f61173696a4086e99e3e5aaa3818 Jun 25 18:28:47.191170 unknown[891]: fetched base config from "system" Jun 25 18:28:47.191582 ignition[891]: fetch: fetch complete Jun 25 18:28:47.191178 unknown[891]: fetched base config from "system" Jun 25 18:28:47.191586 ignition[891]: fetch: fetch passed Jun 25 18:28:47.191184 unknown[891]: fetched user config from "azure" Jun 25 18:28:47.191630 ignition[891]: Ignition finished successfully Jun 25 18:28:47.201559 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:28:47.231647 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
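The fetch stage above finds no config on the kernel command line, so it falls back to the Azure instance metadata service (IMDS) userData endpoint and then hashes the document it retrieved. A rough sketch of the same request (the endpoint URL and api-version are the ones in the log; the Metadata header and the base64 encoding of userData are Azure IMDS conventions, and this is not Ignition's actual code):

    import base64
    import hashlib
    import urllib.request

    IMDS_USERDATA = ("http://169.254.169.254/metadata/instance/compute/userData"
                     "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(IMDS_USERDATA, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        config = base64.b64decode(resp.read())   # userData is served base64-encoded

    # Ignition logs a SHA512 of the config it is about to parse; same idea here.
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())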
Jun 25 18:28:47.250871 ignition[899]: Ignition 2.19.0 Jun 25 18:28:47.250886 ignition[899]: Stage: kargs Jun 25 18:28:47.251075 ignition[899]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:47.258121 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:28:47.251087 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:47.251986 ignition[899]: kargs: kargs passed Jun 25 18:28:47.279480 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:28:47.252031 ignition[899]: Ignition finished successfully Jun 25 18:28:47.306668 ignition[907]: Ignition 2.19.0 Jun 25 18:28:47.306683 ignition[907]: Stage: disks Jun 25 18:28:47.311498 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:28:47.306892 ignition[907]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:47.317993 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:28:47.306903 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:47.327680 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:28:47.307893 ignition[907]: disks: disks passed Jun 25 18:28:47.341122 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:28:47.307941 ignition[907]: Ignition finished successfully Jun 25 18:28:47.352265 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:28:47.365011 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:28:47.393599 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:28:47.461354 systemd-fsck[917]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 18:28:47.470354 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:28:47.487565 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:28:47.543334 kernel: EXT4-fs (sda9): mounted filesystem 91548e21-ce72-437e-94b9-d3fed380163a r/w with ordered data mode. Quota mode: none. Jun 25 18:28:47.543744 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:28:47.549445 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:28:47.603436 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:28:47.610412 systemd-networkd[878]: enP63746s1: Gained IPv6LL Jun 25 18:28:47.613794 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:28:47.631454 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 18:28:47.646699 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:28:47.646745 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:28:47.669141 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jun 25 18:28:47.714927 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (928) Jun 25 18:28:47.714957 kernel: BTRFS info (device sda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:28:47.714987 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:28:47.714998 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:28:47.727394 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:28:47.721645 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:28:47.729680 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:28:47.738662 systemd-networkd[878]: eth0: Gained IPv6LL Jun 25 18:28:48.180217 coreos-metadata[930]: Jun 25 18:28:48.180 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:28:48.190081 coreos-metadata[930]: Jun 25 18:28:48.190 INFO Fetch successful Jun 25 18:28:48.195319 coreos-metadata[930]: Jun 25 18:28:48.190 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:28:48.206107 coreos-metadata[930]: Jun 25 18:28:48.204 INFO Fetch successful Jun 25 18:28:48.218361 coreos-metadata[930]: Jun 25 18:28:48.218 INFO wrote hostname ci-4012.0.0-a-5284b277fa to /sysroot/etc/hostname Jun 25 18:28:48.227535 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:28:48.631356 initrd-setup-root[957]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:28:48.680203 initrd-setup-root[964]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:28:48.689278 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:28:48.698811 initrd-setup-root[978]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:28:49.841909 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:28:49.859592 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:28:49.873610 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:28:49.893781 kernel: BTRFS info (device sda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:28:49.887276 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:28:49.917708 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:28:49.932018 ignition[1046]: INFO : Ignition 2.19.0 Jun 25 18:28:49.932018 ignition[1046]: INFO : Stage: mount Jun 25 18:28:49.941224 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:49.941224 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:49.941224 ignition[1046]: INFO : mount: mount passed Jun 25 18:28:49.941224 ignition[1046]: INFO : Ignition finished successfully Jun 25 18:28:49.938118 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:28:49.955569 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:28:50.000902 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
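Here flatcar-metadata-hostname.service asks IMDS for the instance name and writes it to /sysroot/etc/hostname. A minimal sketch of the equivalent steps, not the agent's actual implementation:

```python
#!/usr/bin/env python3
"""Sketch: fetch the instance name from IMDS and persist it as the hostname."""
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/name"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=10) as resp:
    name = resp.read().decode().strip()

# The initrd writes into the future root at /sysroot, as the log entry shows.
with open("/sysroot/etc/hostname", "w") as f:
    f.write(name + "\n")
```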
Jun 25 18:28:50.028948 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1058) Jun 25 18:28:50.028994 kernel: BTRFS info (device sda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:28:50.034922 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:28:50.039130 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:28:50.046332 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:28:50.047931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:28:50.078247 ignition[1075]: INFO : Ignition 2.19.0 Jun 25 18:28:50.078247 ignition[1075]: INFO : Stage: files Jun 25 18:28:50.086756 ignition[1075]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:50.086756 ignition[1075]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:50.086756 ignition[1075]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:28:50.086756 ignition[1075]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:28:50.086756 ignition[1075]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:28:50.157824 ignition[1075]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:28:50.165493 ignition[1075]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:28:50.165493 ignition[1075]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:28:50.158816 unknown[1075]: wrote ssh authorized keys file for user: core Jun 25 18:28:50.215231 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 18:28:50.225937 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 18:28:50.521016 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:28:50.719293 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 18:28:50.729986 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jun 25 18:28:51.167742 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:28:51.377699 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 18:28:51.377699 ignition[1075]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:28:51.409029 ignition[1075]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:28:51.420324 ignition[1075]: INFO : files: files passed Jun 25 18:28:51.420324 ignition[1075]: INFO : Ignition finished successfully Jun 25 18:28:51.433161 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:28:51.474629 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:28:51.494512 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:28:51.510897 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:28:51.511008 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
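The files stage above writes an SSH key for the "core" user, downloads the helm tarball, links the kubernetes sysext image, and enables prepare-helm.service. The following is a hypothetical Ignition-style config fragment that would request operations like these; the spec version, key material, and unit body are placeholders, and this is not the config this node actually received:

```python
#!/usr/bin/env python3
"""Illustrative Ignition-style config sketch; field names follow the Ignition v3 spec."""
import json

config = {
    "ignition": {"version": "3.3.0"},  # assumed spec version
    "passwd": {
        "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core"]}]
    },
    "storage": {
        "files": [{
            "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
            "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"},
        }],
    },
    "systemd": {
        "units": [{
            "name": "prepare-helm.service",
            "enabled": True,
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n...",  # placeholder body
        }],
    },
}

print(json.dumps(config, indent=2))
```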
Jun 25 18:28:51.552829 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:28:51.552829 initrd-setup-root-after-ignition[1104]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:28:51.571771 initrd-setup-root-after-ignition[1108]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:28:51.572866 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:28:51.587789 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:28:51.612589 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:28:51.643962 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:28:51.644078 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:28:51.657438 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:28:51.671161 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:28:51.684422 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:28:51.705803 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:28:51.730508 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:28:51.751829 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:28:51.772606 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:28:51.780487 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:28:51.795521 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:28:51.809016 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:28:51.809145 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:28:51.828300 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:28:51.835269 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:28:51.848868 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:28:51.862159 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:28:51.876317 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:28:51.890126 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:28:51.904630 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:28:51.920332 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:28:51.933098 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:28:51.945959 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:28:51.957953 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:28:51.958075 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:28:51.976530 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:28:51.984419 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:28:51.998810 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jun 25 18:28:52.005184 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:28:52.013710 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:28:52.013829 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:28:52.034477 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:28:52.034607 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:28:52.043158 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:28:52.043251 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:28:52.055381 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 18:28:52.137550 ignition[1128]: INFO : Ignition 2.19.0 Jun 25 18:28:52.055485 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:28:52.167521 ignition[1128]: INFO : Stage: umount Jun 25 18:28:52.167521 ignition[1128]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:28:52.167521 ignition[1128]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:28:52.167521 ignition[1128]: INFO : umount: umount passed Jun 25 18:28:52.167521 ignition[1128]: INFO : Ignition finished successfully Jun 25 18:28:52.092630 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:28:52.115665 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:28:52.131033 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:28:52.131198 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:28:52.146576 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:28:52.146742 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:28:52.169629 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:28:52.169727 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:28:52.182067 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:28:52.182173 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:28:52.193859 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:28:52.193910 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:28:52.208495 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:28:52.208556 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:28:52.220134 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:28:52.220178 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 18:28:52.234177 systemd[1]: Stopped target network.target - Network. Jun 25 18:28:52.248442 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:28:52.248506 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:28:52.265693 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:28:52.289787 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:28:52.301352 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:28:52.313515 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:28:52.320752 systemd[1]: Stopped target sockets.target - Socket Units. 
Jun 25 18:28:52.333339 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:28:52.333390 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:28:52.346461 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:28:52.346500 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:28:52.360146 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:28:52.360203 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:28:52.381036 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:28:52.381102 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:28:52.395741 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:28:52.408486 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:28:52.415356 systemd-networkd[878]: eth0: DHCPv6 lease lost Jun 25 18:28:52.430356 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:28:52.430937 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:28:52.431053 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:28:52.444761 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:28:52.689155 kernel: hv_netvsc 002248b9-0c04-0022-48b9-0c04002248b9 eth0: Data path switched from VF: enP63746s1 Jun 25 18:28:52.445332 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:28:52.459345 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:28:52.459403 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:28:52.502545 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:28:52.514357 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:28:52.514438 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:28:52.529014 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:28:52.529069 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:28:52.541725 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:28:52.541775 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:28:52.554999 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:28:52.555048 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:28:52.569221 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:28:52.619546 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:28:52.619746 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:28:52.634702 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:28:52.634754 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:28:52.647338 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:28:52.647379 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:28:52.660183 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:28:52.660243 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:28:52.703560 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jun 25 18:28:52.703632 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:28:52.723222 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:28:52.723284 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:28:52.768535 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:28:52.787228 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:28:52.787300 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:28:52.802550 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:28:52.802607 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:28:52.816737 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:28:52.816788 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:28:52.832897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:28:52.832945 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:28:52.848666 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:28:52.848765 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:28:52.861537 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:28:52.861624 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:28:52.916522 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:28:52.916904 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:28:52.934808 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:28:52.948090 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:28:52.948163 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:28:53.110721 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jun 25 18:28:52.987579 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:28:53.015279 systemd[1]: Switching root. Jun 25 18:28:53.121082 systemd-journald[217]: Journal stopped Jun 25 18:29:00.233878 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:29:00.233905 kernel: SELinux: policy capability open_perms=1 Jun 25 18:29:00.233915 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:29:00.233925 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:29:00.233933 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:29:00.233941 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:29:00.233950 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:29:00.233958 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:29:00.233966 kernel: audit: type=1403 audit(1719340135.223:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:29:00.233975 systemd[1]: Successfully loaded SELinux policy in 368.775ms. Jun 25 18:29:00.233987 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.942ms. 
Jun 25 18:29:00.233997 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:29:00.234006 systemd[1]: Detected virtualization microsoft. Jun 25 18:29:00.234015 systemd[1]: Detected architecture arm64. Jun 25 18:29:00.234024 systemd[1]: Detected first boot. Jun 25 18:29:00.234035 systemd[1]: Hostname set to . Jun 25 18:29:00.234044 systemd[1]: Initializing machine ID from random generator. Jun 25 18:29:00.234054 zram_generator::config[1169]: No configuration found. Jun 25 18:29:00.234063 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:29:00.234073 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:29:00.234082 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 18:29:00.234094 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:29:00.234104 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:29:00.234114 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:29:00.234123 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:29:00.234133 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:29:00.234142 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:29:00.234151 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:29:00.234162 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:29:00.234172 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:29:00.234181 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:29:00.234190 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:29:00.234200 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:29:00.234209 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:29:00.234219 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 18:29:00.234228 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:29:00.234237 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jun 25 18:29:00.234248 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:29:00.234258 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 18:29:00.234267 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:29:00.234279 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:29:00.234289 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:29:00.234300 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:29:00.235384 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:29:00.235419 systemd[1]: Reached target slices.target - Slice Units. 
Jun 25 18:29:00.235430 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:29:00.235439 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:29:00.235449 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:29:00.235459 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:29:00.235469 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:29:00.235481 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:29:00.235491 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:29:00.235504 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:29:00.235515 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:29:00.235525 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:29:00.235535 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:29:00.235545 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:29:00.235556 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:29:00.235567 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:29:00.235577 systemd[1]: Reached target machines.target - Containers. Jun 25 18:29:00.235587 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:29:00.235597 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:29:00.235607 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:29:00.235617 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:29:00.235627 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:29:00.235636 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:29:00.235648 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:29:00.235657 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:29:00.235668 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:29:00.235678 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:29:00.235688 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:29:00.235697 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:29:00.235707 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:29:00.235716 kernel: fuse: init (API version 7.39) Jun 25 18:29:00.235728 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:29:00.235737 kernel: loop: module loaded Jun 25 18:29:00.235746 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:29:00.235756 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:29:00.235766 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:29:00.235801 systemd-journald[1271]: Collecting audit messages is disabled. 
Jun 25 18:29:00.235826 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:29:00.235837 systemd-journald[1271]: Journal started Jun 25 18:29:00.235858 systemd-journald[1271]: Runtime Journal (/run/log/journal/16f3f7abde1d4579b881341f0803d029) is 8.0M, max 78.6M, 70.6M free. Jun 25 18:28:59.052891 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:28:59.209081 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 25 18:28:59.209456 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 18:28:59.209773 systemd[1]: systemd-journald.service: Consumed 3.499s CPU time. Jun 25 18:29:00.252979 kernel: ACPI: bus type drm_connector registered Jun 25 18:29:00.279048 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:29:00.290236 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:29:00.290290 systemd[1]: Stopped verity-setup.service. Jun 25 18:29:00.314731 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:29:00.315007 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:29:00.321925 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:29:00.329911 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:29:00.336765 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:29:00.344666 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:29:00.352472 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:29:00.361502 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:29:00.371346 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:29:00.380813 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:29:00.380964 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:29:00.389978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:29:00.390108 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:29:00.398142 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:29:00.398283 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:29:00.406687 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:29:00.406812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:29:00.415975 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:29:00.417373 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:29:00.425281 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:29:00.425441 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:29:00.433069 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:29:00.441505 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:29:00.450961 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:29:00.459819 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:29:00.481069 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jun 25 18:29:00.498452 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:29:00.506358 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:29:00.514841 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:29:00.514886 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:29:00.522100 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:29:00.531669 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:29:00.541289 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:29:00.552506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:29:00.593509 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:29:00.603009 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:29:00.609798 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:29:00.612524 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:29:00.622827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:29:00.623914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:29:00.634598 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:29:00.644830 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:29:00.658464 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:29:00.672758 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:29:00.681152 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:29:00.692378 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:29:00.701996 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:29:00.708359 systemd-journald[1271]: Time spent on flushing to /var/log/journal/16f3f7abde1d4579b881341f0803d029 is 28.591ms for 905 entries. Jun 25 18:29:00.708359 systemd-journald[1271]: System Journal (/var/log/journal/16f3f7abde1d4579b881341f0803d029) is 8.0M, max 2.6G, 2.6G free. Jun 25 18:29:00.770235 kernel: loop0: detected capacity change from 0 to 194096 Jun 25 18:29:00.770264 systemd-journald[1271]: Received client request to flush runtime journal. Jun 25 18:29:00.770287 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:29:00.738005 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:29:00.756136 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:29:00.766364 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 18:29:00.772714 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jun 25 18:29:00.820344 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:29:00.821380 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Jun 25 18:29:00.821400 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Jun 25 18:29:00.825748 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:29:00.838550 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:29:00.883354 kernel: loop1: detected capacity change from 0 to 59688 Jun 25 18:29:00.888466 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:29:00.890440 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:29:00.987605 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:29:01.110275 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:29:01.123498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:29:01.140101 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Jun 25 18:29:01.140120 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Jun 25 18:29:01.144620 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:29:01.434341 kernel: loop2: detected capacity change from 0 to 113712 Jun 25 18:29:01.768335 kernel: loop3: detected capacity change from 0 to 62152 Jun 25 18:29:02.144988 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:29:02.156512 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:29:02.184701 systemd-udevd[1329]: Using default interface naming scheme 'v255'. Jun 25 18:29:02.270349 kernel: loop4: detected capacity change from 0 to 194096 Jun 25 18:29:02.281338 kernel: loop5: detected capacity change from 0 to 59688 Jun 25 18:29:02.291353 kernel: loop6: detected capacity change from 0 to 113712 Jun 25 18:29:02.298262 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:29:02.309087 kernel: loop7: detected capacity change from 0 to 62152 Jun 25 18:29:02.317748 (sd-merge)[1331]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 25 18:29:02.318194 (sd-merge)[1331]: Merged extensions into '/usr'. Jun 25 18:29:02.323558 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:29:02.363676 systemd[1]: Reloading requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:29:02.363692 systemd[1]: Reloading... Jun 25 18:29:02.416359 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1345) Jun 25 18:29:02.468004 zram_generator::config[1385]: No configuration found. Jun 25 18:29:02.498351 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 18:29:02.686412 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1346) Jun 25 18:29:02.724171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 18:29:02.744584 kernel: hv_vmbus: registering driver hv_balloon Jun 25 18:29:02.749639 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 25 18:29:02.758094 kernel: hv_vmbus: registering driver hyperv_fb Jun 25 18:29:02.758180 kernel: hv_balloon: Memory hot add disabled on ARM64 Jun 25 18:29:02.771342 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 25 18:29:02.771437 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 25 18:29:02.776751 kernel: Console: switching to colour dummy device 80x25 Jun 25 18:29:02.785616 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:29:02.805801 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 25 18:29:02.806065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 18:29:02.813101 systemd[1]: Reloading finished in 449 ms. Jun 25 18:29:02.839598 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:29:02.884596 systemd[1]: Starting ensure-sysext.service... Jun 25 18:29:02.890124 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:29:02.897856 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:29:02.908657 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:29:02.921883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:29:02.942422 systemd[1]: Reloading requested from client PID 1484 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:29:02.942441 systemd[1]: Reloading... Jun 25 18:29:02.949814 systemd-tmpfiles[1486]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:29:02.950097 systemd-tmpfiles[1486]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:29:02.952065 systemd-tmpfiles[1486]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:29:02.952297 systemd-tmpfiles[1486]: ACLs are not supported, ignoring. Jun 25 18:29:02.954147 systemd-tmpfiles[1486]: ACLs are not supported, ignoring. Jun 25 18:29:02.960270 systemd-tmpfiles[1486]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:29:02.960285 systemd-tmpfiles[1486]: Skipping /boot Jun 25 18:29:02.977441 systemd-tmpfiles[1486]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:29:02.977457 systemd-tmpfiles[1486]: Skipping /boot Jun 25 18:29:03.047349 zram_generator::config[1519]: No configuration found. Jun 25 18:29:03.162835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:29:03.242179 systemd[1]: Reloading finished in 299 ms. Jun 25 18:29:03.261299 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:29:03.301885 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:29:03.318806 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:29:03.329360 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
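At this point systemd-sysext has merged the containerd-flatcar, docker-flatcar, kubernetes, and oem-azure extension images into /usr, triggering the reload logged above. A small, illustrative sketch of how that merged state can be inspected after boot, assuming the /etc/extensions layout seen earlier in this log:

```python
#!/usr/bin/env python3
"""Sketch: list extension images and show what systemd-sysext currently has merged."""
import pathlib
import subprocess

# Extension images dropped in place during the files stage, e.g. /etc/extensions/kubernetes.raw.
for img in sorted(pathlib.Path("/etc/extensions").glob("*")):
    print("extension image:", img)

# Re-evaluate the extension set and report what is overlaid onto /usr and /opt.
subprocess.run(["systemd-sysext", "refresh"], check=True)
subprocess.run(["systemd-sysext", "status"], check=True)
```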
Jun 25 18:29:03.349582 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:29:03.360148 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:29:03.368847 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:29:03.379474 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 18:29:03.394288 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:29:03.401757 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:29:03.411090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:29:03.421685 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:29:03.432144 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:29:03.452727 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:29:03.460389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:29:03.463719 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:29:03.465375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:29:03.472562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:29:03.472742 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:29:03.482127 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:29:03.482529 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:29:03.496556 lvm[1587]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:29:03.503231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:29:03.507659 systemd-resolved[1594]: Positive Trust Anchors: Jun 25 18:29:03.507835 systemd-resolved[1594]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:29:03.507867 systemd-resolved[1594]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:29:03.511038 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:29:03.527074 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:29:03.536040 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:29:03.542546 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:29:03.543555 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:29:03.552805 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jun 25 18:29:03.561703 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:29:03.574174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:29:03.574361 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:29:03.581969 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:29:03.582117 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:29:03.590274 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:29:03.590444 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:29:03.603197 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:29:03.615599 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:29:03.623015 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:29:03.623172 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:29:03.623579 systemd-resolved[1594]: Using system hostname 'ci-4012.0.0-a-5284b277fa'. Jun 25 18:29:03.631473 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:29:03.641877 lvm[1620]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:29:03.643098 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:29:03.651410 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:29:03.659223 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:29:03.668720 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:29:03.679540 augenrules[1621]: No rules Jun 25 18:29:03.680582 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:29:03.689446 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:29:03.695685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:29:03.695869 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:29:03.704365 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:29:03.712556 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:29:03.721669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:29:03.721811 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:29:03.730046 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:29:03.730187 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:29:03.738269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:29:03.738599 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:29:03.748161 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:29:03.748411 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:29:03.758672 systemd[1]: Finished ensure-sysext.service. 
Jun 25 18:29:03.769155 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:29:03.769223 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:29:03.860923 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:29:03.894862 systemd-networkd[1347]: lo: Link UP Jun 25 18:29:03.894870 systemd-networkd[1347]: lo: Gained carrier Jun 25 18:29:03.897167 systemd-networkd[1347]: Enumeration completed Jun 25 18:29:03.897292 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:29:03.897684 systemd-networkd[1347]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:29:03.897691 systemd-networkd[1347]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:29:03.904674 systemd[1]: Reached target network.target - Network. Jun 25 18:29:03.916524 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:29:03.963339 kernel: mlx5_core f902:00:02.0 enP63746s1: Link up Jun 25 18:29:03.989344 kernel: hv_netvsc 002248b9-0c04-0022-48b9-0c04002248b9 eth0: Data path switched to VF: enP63746s1 Jun 25 18:29:03.990140 systemd-networkd[1347]: enP63746s1: Link UP Jun 25 18:29:03.990237 systemd-networkd[1347]: eth0: Link UP Jun 25 18:29:03.990240 systemd-networkd[1347]: eth0: Gained carrier Jun 25 18:29:03.990254 systemd-networkd[1347]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:29:03.995671 systemd-networkd[1347]: enP63746s1: Gained carrier Jun 25 18:29:04.008365 systemd-networkd[1347]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 18:29:04.046705 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:29:04.054974 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:29:05.018458 systemd-networkd[1347]: eth0: Gained IPv6LL Jun 25 18:29:05.023370 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:29:05.031350 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:29:05.274506 systemd-networkd[1347]: enP63746s1: Gained IPv6LL Jun 25 18:29:08.295928 ldconfig[1297]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:29:08.337784 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:29:08.350472 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:29:08.359867 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:29:08.366810 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:29:08.373196 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:29:08.380106 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:29:08.387677 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
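eth0 is matched by /usr/lib/systemd/network/zz-default.network and obtains 10.200.20.36/24 via DHCP from 168.63.129.16. The contents of that unit are not shown in this log, so the following is only an assumed sketch of what such a catch-all DHCP unit typically contains:

```python
#!/usr/bin/env python3
"""Sketch: write a catch-all DHCP .network unit of the kind matched in the log above."""
import pathlib

unit = """\
[Match]
Name=*

[Network]
DHCP=yes
"""

path = pathlib.Path("/etc/systemd/network/zz-default.network")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(unit)
# systemd-networkd would then configure matching links via DHCP, as the
# lease for 10.200.20.36/24 acquired above shows.
```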
Jun 25 18:29:08.393771 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:29:08.401304 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:29:08.409781 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:29:08.409819 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:29:08.415788 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:29:08.440390 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:29:08.448299 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:29:08.461456 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:29:08.468048 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:29:08.474243 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:29:08.479492 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:29:08.484794 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:29:08.484825 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:29:08.494417 systemd[1]: Starting chronyd.service - NTP client/server... Jun 25 18:29:08.502474 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:29:08.515536 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 18:29:08.522826 (chronyd)[1649]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 25 18:29:08.527517 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:29:08.534685 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:29:08.545265 jq[1655]: false Jun 25 18:29:08.545536 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:29:08.552398 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:29:08.555481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:29:08.564600 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:29:08.576549 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jun 25 18:29:08.585550 chronyd[1663]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 25 18:29:08.592374 extend-filesystems[1656]: Found loop4 Jun 25 18:29:08.592374 extend-filesystems[1656]: Found loop5 Jun 25 18:29:08.592374 extend-filesystems[1656]: Found loop6 Jun 25 18:29:08.592374 extend-filesystems[1656]: Found loop7 Jun 25 18:29:08.592374 extend-filesystems[1656]: Found sda Jun 25 18:29:08.592374 extend-filesystems[1656]: Found sda1 Jun 25 18:29:08.592374 extend-filesystems[1656]: Found sda2 Jun 25 18:29:08.592374 extend-filesystems[1656]: Found sda3 Jun 25 18:29:08.592374 extend-filesystems[1656]: Found usr Jun 25 18:29:08.592374 extend-filesystems[1656]: Found sda4 Jun 25 18:29:08.592374 extend-filesystems[1656]: Found sda6 Jun 25 18:29:08.592374 extend-filesystems[1656]: Found sda7 Jun 25 18:29:08.592374 extend-filesystems[1656]: Found sda9 Jun 25 18:29:08.592374 extend-filesystems[1656]: Checking size of /dev/sda9 Jun 25 18:29:08.740962 extend-filesystems[1656]: Old size kept for /dev/sda9 Jun 25 18:29:08.740962 extend-filesystems[1656]: Found sr0 Jun 25 18:29:08.592881 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:29:08.607287 chronyd[1663]: Timezone right/UTC failed leap second check, ignoring Jun 25 18:29:08.609017 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:29:08.607504 chronyd[1663]: Loaded seccomp filter (level 2) Jun 25 18:29:08.635217 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:29:08.688051 dbus-daemon[1652]: [system] SELinux support is enabled Jun 25 18:29:08.788612 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1696) Jun 25 18:29:08.666937 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:29:08.788980 coreos-metadata[1651]: Jun 25 18:29:08.774 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:29:08.788980 coreos-metadata[1651]: Jun 25 18:29:08.776 INFO Fetch successful Jun 25 18:29:08.788980 coreos-metadata[1651]: Jun 25 18:29:08.776 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 25 18:29:08.788980 coreos-metadata[1651]: Jun 25 18:29:08.780 INFO Fetch successful Jun 25 18:29:08.788980 coreos-metadata[1651]: Jun 25 18:29:08.788 INFO Fetching http://168.63.129.16/machine/0dc1a4a2-47a9-4c09-9259-d33c21c58253/0b5d017d%2Dd87f%2D4db7%2Db882%2Db4579222a8cd.%5Fci%2D4012.0.0%2Da%2D5284b277fa?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 25 18:29:08.788980 coreos-metadata[1651]: Jun 25 18:29:08.788 INFO Fetch successful Jun 25 18:29:08.788980 coreos-metadata[1651]: Jun 25 18:29:08.788 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:29:08.680870 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:29:08.681440 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:29:08.695778 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:29:08.795018 jq[1691]: true Jun 25 18:29:08.712762 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jun 25 18:29:08.722362 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:29:08.746625 systemd[1]: Started chronyd.service - NTP client/server. Jun 25 18:29:08.781188 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:29:08.781403 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:29:08.781739 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:29:08.781898 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:29:08.797230 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:29:08.798564 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:29:08.806300 coreos-metadata[1651]: Jun 25 18:29:08.805 INFO Fetch successful Jun 25 18:29:08.817187 update_engine[1686]: I0625 18:29:08.817082 1686 main.cc:92] Flatcar Update Engine starting Jun 25 18:29:08.839049 update_engine[1686]: I0625 18:29:08.818418 1686 update_check_scheduler.cc:74] Next update check in 7m22s Jun 25 18:29:08.822381 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:29:08.838768 systemd-logind[1683]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jun 25 18:29:08.839629 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 18:29:08.839874 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 18:29:08.840440 systemd-logind[1683]: New seat seat0. Jun 25 18:29:08.854539 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:29:08.884978 jq[1729]: true Jun 25 18:29:08.894412 (ntainerd)[1730]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:29:08.898413 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 18:29:08.920911 dbus-daemon[1652]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 25 18:29:08.926628 tar[1726]: linux-arm64/helm Jun 25 18:29:08.935900 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:29:08.949542 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:29:08.949778 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:29:08.949906 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:29:08.963136 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:29:08.963251 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:29:08.984631 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:29:09.024134 bash[1760]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:29:09.030545 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:29:09.045476 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jun 25 18:29:09.263191 locksmithd[1764]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:29:09.608516 tar[1726]: linux-arm64/LICENSE Jun 25 18:29:09.608516 tar[1726]: linux-arm64/README.md Jun 25 18:29:09.625808 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:29:09.646755 sshd_keygen[1694]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:29:09.667888 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:29:09.683912 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:29:09.692435 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 25 18:29:09.706964 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:29:09.710368 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:29:09.718520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:29:09.731098 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:29:09.732504 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 25 18:29:09.746567 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:29:09.771775 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:29:09.786718 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:29:09.797082 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 25 18:29:09.806344 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:29:09.828790 containerd[1730]: time="2024-06-25T18:29:09.828612800Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:29:09.862408 containerd[1730]: time="2024-06-25T18:29:09.861281160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:29:09.862408 containerd[1730]: time="2024-06-25T18:29:09.861745840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:29:09.864499 containerd[1730]: time="2024-06-25T18:29:09.864444160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:29:09.864499 containerd[1730]: time="2024-06-25T18:29:09.864489160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:29:09.864739 containerd[1730]: time="2024-06-25T18:29:09.864709960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:29:09.864739 containerd[1730]: time="2024-06-25T18:29:09.864734960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:29:09.864829 containerd[1730]: time="2024-06-25T18:29:09.864809520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jun 25 18:29:09.865084 containerd[1730]: time="2024-06-25T18:29:09.864865720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:29:09.865084 containerd[1730]: time="2024-06-25T18:29:09.864884360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:29:09.865084 containerd[1730]: time="2024-06-25T18:29:09.864945560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:29:09.865168 containerd[1730]: time="2024-06-25T18:29:09.865121760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:29:09.865168 containerd[1730]: time="2024-06-25T18:29:09.865139760Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:29:09.865168 containerd[1730]: time="2024-06-25T18:29:09.865149600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:29:09.865270 containerd[1730]: time="2024-06-25T18:29:09.865241120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:29:09.865270 containerd[1730]: time="2024-06-25T18:29:09.865263440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:29:09.866095 containerd[1730]: time="2024-06-25T18:29:09.865346240Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:29:09.866095 containerd[1730]: time="2024-06-25T18:29:09.865363440Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:29:09.878677 containerd[1730]: time="2024-06-25T18:29:09.878626000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:29:09.878677 containerd[1730]: time="2024-06-25T18:29:09.878677680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:29:09.878825 containerd[1730]: time="2024-06-25T18:29:09.878697480Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:29:09.878825 containerd[1730]: time="2024-06-25T18:29:09.878731240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:29:09.878825 containerd[1730]: time="2024-06-25T18:29:09.878746320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:29:09.878825 containerd[1730]: time="2024-06-25T18:29:09.878757160Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:29:09.878825 containerd[1730]: time="2024-06-25T18:29:09.878774440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:29:09.878951 containerd[1730]: time="2024-06-25T18:29:09.878924240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jun 25 18:29:09.878984 containerd[1730]: time="2024-06-25T18:29:09.878949440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:29:09.878984 containerd[1730]: time="2024-06-25T18:29:09.878965120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:29:09.878984 containerd[1730]: time="2024-06-25T18:29:09.878979760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:29:09.879037 containerd[1730]: time="2024-06-25T18:29:09.878994480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:29:09.879037 containerd[1730]: time="2024-06-25T18:29:09.879012080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:29:09.879037 containerd[1730]: time="2024-06-25T18:29:09.879024760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:29:09.879088 containerd[1730]: time="2024-06-25T18:29:09.879036960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:29:09.879088 containerd[1730]: time="2024-06-25T18:29:09.879051640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 18:29:09.879088 containerd[1730]: time="2024-06-25T18:29:09.879064720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:29:09.879088 containerd[1730]: time="2024-06-25T18:29:09.879076560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:29:09.879153 containerd[1730]: time="2024-06-25T18:29:09.879088400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:29:09.879201 containerd[1730]: time="2024-06-25T18:29:09.879179640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:29:09.879468 containerd[1730]: time="2024-06-25T18:29:09.879443960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:29:09.879504 containerd[1730]: time="2024-06-25T18:29:09.879477000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.879504 containerd[1730]: time="2024-06-25T18:29:09.879492720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:29:09.879559 containerd[1730]: time="2024-06-25T18:29:09.879514720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:29:09.879582 containerd[1730]: time="2024-06-25T18:29:09.879567640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.879601 containerd[1730]: time="2024-06-25T18:29:09.879582360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.879601 containerd[1730]: time="2024-06-25T18:29:09.879595120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jun 25 18:29:09.879638 containerd[1730]: time="2024-06-25T18:29:09.879606040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.879638 containerd[1730]: time="2024-06-25T18:29:09.879618800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.879702 containerd[1730]: time="2024-06-25T18:29:09.879631880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.879702 containerd[1730]: time="2024-06-25T18:29:09.879661280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.879702 containerd[1730]: time="2024-06-25T18:29:09.879673600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.880079 containerd[1730]: time="2024-06-25T18:29:09.880037760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:29:09.880249 containerd[1730]: time="2024-06-25T18:29:09.880215040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.880282 containerd[1730]: time="2024-06-25T18:29:09.880249200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.880282 containerd[1730]: time="2024-06-25T18:29:09.880267720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.880351 containerd[1730]: time="2024-06-25T18:29:09.880331880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.880372 containerd[1730]: time="2024-06-25T18:29:09.880351280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.880391 containerd[1730]: time="2024-06-25T18:29:09.880373200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.880418 containerd[1730]: time="2024-06-25T18:29:09.880389720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:29:09.880418 containerd[1730]: time="2024-06-25T18:29:09.880404800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 18:29:09.880942 containerd[1730]: time="2024-06-25T18:29:09.880864160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:29:09.881739 containerd[1730]: time="2024-06-25T18:29:09.881685280Z" level=info msg="Connect containerd service" Jun 25 18:29:09.881789 containerd[1730]: time="2024-06-25T18:29:09.881760000Z" level=info msg="using legacy CRI server" Jun 25 18:29:09.881789 containerd[1730]: time="2024-06-25T18:29:09.881773000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:29:09.882376 containerd[1730]: time="2024-06-25T18:29:09.881871040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:29:09.882711 containerd[1730]: time="2024-06-25T18:29:09.882680680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:29:09.883155 
containerd[1730]: time="2024-06-25T18:29:09.883107280Z" level=info msg="Start subscribing containerd event" Jun 25 18:29:09.883214 containerd[1730]: time="2024-06-25T18:29:09.883166560Z" level=info msg="Start recovering state" Jun 25 18:29:09.883244 containerd[1730]: time="2024-06-25T18:29:09.883236240Z" level=info msg="Start event monitor" Jun 25 18:29:09.883263 containerd[1730]: time="2024-06-25T18:29:09.883246760Z" level=info msg="Start snapshots syncer" Jun 25 18:29:09.883263 containerd[1730]: time="2024-06-25T18:29:09.883256160Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:29:09.883263 containerd[1730]: time="2024-06-25T18:29:09.883263080Z" level=info msg="Start streaming server" Jun 25 18:29:09.883411 containerd[1730]: time="2024-06-25T18:29:09.883384000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:29:09.886733 containerd[1730]: time="2024-06-25T18:29:09.883773600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:29:09.886733 containerd[1730]: time="2024-06-25T18:29:09.883796400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:29:09.886733 containerd[1730]: time="2024-06-25T18:29:09.883811400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:29:09.886733 containerd[1730]: time="2024-06-25T18:29:09.883999040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:29:09.886733 containerd[1730]: time="2024-06-25T18:29:09.884032560Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:29:09.886733 containerd[1730]: time="2024-06-25T18:29:09.884095240Z" level=info msg="containerd successfully booted in 0.056950s" Jun 25 18:29:09.884205 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 18:29:09.893448 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:29:09.899818 systemd[1]: Startup finished in 668ms (kernel) + 13.904s (initrd) + 15.043s (userspace) = 29.616s. Jun 25 18:29:10.239962 login[1806]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:29:10.241013 login[1807]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:29:10.255049 systemd-logind[1683]: New session 2 of user core. Jun 25 18:29:10.256136 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:29:10.261643 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:29:10.265956 systemd-logind[1683]: New session 1 of user core. Jun 25 18:29:10.279651 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:29:10.287722 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jun 25 18:29:10.294299 (systemd)[1822]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:29:10.339445 kubelet[1796]: E0625 18:29:10.339393 1796 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:29:10.341876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:29:10.342024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:29:10.432967 systemd[1822]: Queued start job for default target default.target. Jun 25 18:29:10.447869 systemd[1822]: Created slice app.slice - User Application Slice. Jun 25 18:29:10.447908 systemd[1822]: Reached target paths.target - Paths. Jun 25 18:29:10.447921 systemd[1822]: Reached target timers.target - Timers. Jun 25 18:29:10.449286 systemd[1822]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:29:10.462518 systemd[1822]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:29:10.462793 systemd[1822]: Reached target sockets.target - Sockets. Jun 25 18:29:10.462888 systemd[1822]: Reached target basic.target - Basic System. Jun 25 18:29:10.462937 systemd[1822]: Reached target default.target - Main User Target. Jun 25 18:29:10.462964 systemd[1822]: Startup finished in 161ms. Jun 25 18:29:10.463223 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:29:10.473514 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:29:10.474516 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 25 18:29:11.977333 waagent[1799]: 2024-06-25T18:29:11.976690Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 25 18:29:11.983006 waagent[1799]: 2024-06-25T18:29:11.982929Z INFO Daemon Daemon OS: flatcar 4012.0.0 Jun 25 18:29:11.987699 waagent[1799]: 2024-06-25T18:29:11.987641Z INFO Daemon Daemon Python: 3.11.9 Jun 25 18:29:11.992329 waagent[1799]: 2024-06-25T18:29:11.992219Z INFO Daemon Daemon Run daemon Jun 25 18:29:11.996405 waagent[1799]: 2024-06-25T18:29:11.996351Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4012.0.0' Jun 25 18:29:12.005204 waagent[1799]: 2024-06-25T18:29:12.005131Z INFO Daemon Daemon Using waagent for provisioning Jun 25 18:29:12.010671 waagent[1799]: 2024-06-25T18:29:12.010620Z INFO Daemon Daemon Activate resource disk Jun 25 18:29:12.015373 waagent[1799]: 2024-06-25T18:29:12.015302Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 25 18:29:12.026083 waagent[1799]: 2024-06-25T18:29:12.026014Z INFO Daemon Daemon Found device: None Jun 25 18:29:12.030500 waagent[1799]: 2024-06-25T18:29:12.030447Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 25 18:29:12.039978 waagent[1799]: 2024-06-25T18:29:12.039922Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 25 18:29:12.054307 waagent[1799]: 2024-06-25T18:29:12.054247Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 18:29:12.061069 waagent[1799]: 2024-06-25T18:29:12.061009Z INFO Daemon Daemon Running default provisioning handler Jun 25 18:29:12.073382 waagent[1799]: 2024-06-25T18:29:12.072824Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 25 18:29:12.088733 waagent[1799]: 2024-06-25T18:29:12.088664Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 25 18:29:12.098526 waagent[1799]: 2024-06-25T18:29:12.098463Z INFO Daemon Daemon cloud-init is enabled: False Jun 25 18:29:12.103671 waagent[1799]: 2024-06-25T18:29:12.103617Z INFO Daemon Daemon Copying ovf-env.xml Jun 25 18:29:12.200187 waagent[1799]: 2024-06-25T18:29:12.199553Z INFO Daemon Daemon Successfully mounted dvd Jun 25 18:29:12.234851 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 25 18:29:12.237743 waagent[1799]: 2024-06-25T18:29:12.237654Z INFO Daemon Daemon Detect protocol endpoint Jun 25 18:29:12.242527 waagent[1799]: 2024-06-25T18:29:12.242469Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 18:29:12.248443 waagent[1799]: 2024-06-25T18:29:12.248392Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 25 18:29:12.254748 waagent[1799]: 2024-06-25T18:29:12.254696Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 25 18:29:12.260291 waagent[1799]: 2024-06-25T18:29:12.260241Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 25 18:29:12.265649 waagent[1799]: 2024-06-25T18:29:12.265599Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 25 18:29:12.299126 waagent[1799]: 2024-06-25T18:29:12.299083Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 25 18:29:12.305724 waagent[1799]: 2024-06-25T18:29:12.305694Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 25 18:29:12.311280 waagent[1799]: 2024-06-25T18:29:12.311227Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 25 18:29:12.498404 waagent[1799]: 2024-06-25T18:29:12.498238Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 25 18:29:12.504716 waagent[1799]: 2024-06-25T18:29:12.504649Z INFO Daemon Daemon Forcing an update of the goal state. Jun 25 18:29:12.526428 waagent[1799]: 2024-06-25T18:29:12.526375Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 18:29:12.546789 waagent[1799]: 2024-06-25T18:29:12.546743Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jun 25 18:29:12.552623 waagent[1799]: 2024-06-25T18:29:12.552574Z INFO Daemon Jun 25 18:29:12.555487 waagent[1799]: 2024-06-25T18:29:12.555441Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: bf7a1b8f-26a0-4131-9f3b-1ac7d80a3415 eTag: 15751868370831741721 source: Fabric] Jun 25 18:29:12.567032 waagent[1799]: 2024-06-25T18:29:12.566984Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 25 18:29:12.574818 waagent[1799]: 2024-06-25T18:29:12.574765Z INFO Daemon Jun 25 18:29:12.577611 waagent[1799]: 2024-06-25T18:29:12.577567Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 25 18:29:12.589192 waagent[1799]: 2024-06-25T18:29:12.589156Z INFO Daemon Daemon Downloading artifacts profile blob Jun 25 18:29:12.675000 waagent[1799]: 2024-06-25T18:29:12.674899Z INFO Daemon Downloaded certificate {'thumbprint': 'D3FE67FBB9C8A35290138B1BCA70D35CE7834598', 'hasPrivateKey': True} Jun 25 18:29:12.685658 waagent[1799]: 2024-06-25T18:29:12.685608Z INFO Daemon Downloaded certificate {'thumbprint': '663C73007D8A0830518E5A9E7979E82D7B6FE1DD', 'hasPrivateKey': False} Jun 25 18:29:12.696082 waagent[1799]: 2024-06-25T18:29:12.696028Z INFO Daemon Fetch goal state completed Jun 25 18:29:12.706877 waagent[1799]: 2024-06-25T18:29:12.706831Z INFO Daemon Daemon Starting provisioning Jun 25 18:29:12.711890 waagent[1799]: 2024-06-25T18:29:12.711827Z INFO Daemon Daemon Handle ovf-env.xml. Jun 25 18:29:12.716709 waagent[1799]: 2024-06-25T18:29:12.716662Z INFO Daemon Daemon Set hostname [ci-4012.0.0-a-5284b277fa] Jun 25 18:29:12.738639 waagent[1799]: 2024-06-25T18:29:12.738558Z INFO Daemon Daemon Publish hostname [ci-4012.0.0-a-5284b277fa] Jun 25 18:29:12.745364 waagent[1799]: 2024-06-25T18:29:12.745280Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 25 18:29:12.751652 waagent[1799]: 2024-06-25T18:29:12.751560Z INFO Daemon Daemon Primary interface is [eth0] Jun 25 18:29:12.801085 systemd-networkd[1347]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:29:12.801094 systemd-networkd[1347]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 25 18:29:12.801121 systemd-networkd[1347]: eth0: DHCP lease lost Jun 25 18:29:12.802421 waagent[1799]: 2024-06-25T18:29:12.802338Z INFO Daemon Daemon Create user account if not exists Jun 25 18:29:12.808177 waagent[1799]: 2024-06-25T18:29:12.808117Z INFO Daemon Daemon User core already exists, skip useradd Jun 25 18:29:12.813802 waagent[1799]: 2024-06-25T18:29:12.813743Z INFO Daemon Daemon Configure sudoer Jun 25 18:29:12.813879 systemd-networkd[1347]: eth0: DHCPv6 lease lost Jun 25 18:29:12.818667 waagent[1799]: 2024-06-25T18:29:12.818571Z INFO Daemon Daemon Configure sshd Jun 25 18:29:12.822877 waagent[1799]: 2024-06-25T18:29:12.822820Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 25 18:29:12.835549 waagent[1799]: 2024-06-25T18:29:12.835254Z INFO Daemon Daemon Deploy ssh public key. Jun 25 18:29:12.846173 systemd-networkd[1347]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 18:29:14.161381 waagent[1799]: 2024-06-25T18:29:14.161297Z INFO Daemon Daemon Provisioning complete Jun 25 18:29:14.180901 waagent[1799]: 2024-06-25T18:29:14.180851Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 25 18:29:14.187193 waagent[1799]: 2024-06-25T18:29:14.187132Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 25 18:29:14.197859 waagent[1799]: 2024-06-25T18:29:14.197802Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 25 18:29:14.337338 waagent[1872]: 2024-06-25T18:29:14.337238Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 25 18:29:14.337666 waagent[1872]: 2024-06-25T18:29:14.337416Z INFO ExtHandler ExtHandler OS: flatcar 4012.0.0 Jun 25 18:29:14.337666 waagent[1872]: 2024-06-25T18:29:14.337477Z INFO ExtHandler ExtHandler Python: 3.11.9 Jun 25 18:29:15.540288 waagent[1872]: 2024-06-25T18:29:15.540184Z INFO ExtHandler ExtHandler Distro: flatcar-4012.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 25 18:29:15.540697 waagent[1872]: 2024-06-25T18:29:15.540492Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:29:15.540697 waagent[1872]: 2024-06-25T18:29:15.540569Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:29:15.549267 waagent[1872]: 2024-06-25T18:29:15.549186Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 18:29:15.555259 waagent[1872]: 2024-06-25T18:29:15.555212Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jun 25 18:29:15.555814 waagent[1872]: 2024-06-25T18:29:15.555768Z INFO ExtHandler Jun 25 18:29:15.555890 waagent[1872]: 2024-06-25T18:29:15.555859Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9008e7bb-effa-466b-924c-8fcea7e1e5cb eTag: 15751868370831741721 source: Fabric] Jun 25 18:29:15.556197 waagent[1872]: 2024-06-25T18:29:15.556156Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 25 18:29:15.558404 waagent[1872]: 2024-06-25T18:29:15.558354Z INFO ExtHandler Jun 25 18:29:15.558501 waagent[1872]: 2024-06-25T18:29:15.558468Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 25 18:29:15.564417 waagent[1872]: 2024-06-25T18:29:15.564377Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 25 18:29:15.685687 waagent[1872]: 2024-06-25T18:29:15.685589Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D3FE67FBB9C8A35290138B1BCA70D35CE7834598', 'hasPrivateKey': True} Jun 25 18:29:15.686121 waagent[1872]: 2024-06-25T18:29:15.686075Z INFO ExtHandler Downloaded certificate {'thumbprint': '663C73007D8A0830518E5A9E7979E82D7B6FE1DD', 'hasPrivateKey': False} Jun 25 18:29:15.686615 waagent[1872]: 2024-06-25T18:29:15.686569Z INFO ExtHandler Fetch goal state completed Jun 25 18:29:15.703915 waagent[1872]: 2024-06-25T18:29:15.703854Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1872 Jun 25 18:29:15.704074 waagent[1872]: 2024-06-25T18:29:15.704038Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 25 18:29:15.705842 waagent[1872]: 2024-06-25T18:29:15.705794Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4012.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 25 18:29:15.706253 waagent[1872]: 2024-06-25T18:29:15.706215Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 25 18:29:15.760801 waagent[1872]: 2024-06-25T18:29:15.760753Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 25 18:29:15.760997 waagent[1872]: 2024-06-25T18:29:15.760956Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 25 18:29:15.767114 waagent[1872]: 2024-06-25T18:29:15.767053Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 25 18:29:15.773944 systemd[1]: Reloading requested from client PID 1890 ('systemctl') (unit waagent.service)... Jun 25 18:29:15.773961 systemd[1]: Reloading... Jun 25 18:29:15.852362 zram_generator::config[1918]: No configuration found. Jun 25 18:29:15.954726 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:29:16.029328 systemd[1]: Reloading finished in 255 ms. Jun 25 18:29:16.059352 waagent[1872]: 2024-06-25T18:29:16.055731Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 25 18:29:16.062829 systemd[1]: Reloading requested from client PID 1974 ('systemctl') (unit waagent.service)... Jun 25 18:29:16.062970 systemd[1]: Reloading... Jun 25 18:29:16.135378 zram_generator::config[2003]: No configuration found. Jun 25 18:29:16.240710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:29:16.315699 systemd[1]: Reloading finished in 252 ms. 
Jun 25 18:29:16.342332 waagent[1872]: 2024-06-25T18:29:16.340597Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 25 18:29:16.342332 waagent[1872]: 2024-06-25T18:29:16.340787Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 25 18:29:16.797353 waagent[1872]: 2024-06-25T18:29:16.796298Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 25 18:29:16.797353 waagent[1872]: 2024-06-25T18:29:16.796969Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 25 18:29:16.798331 waagent[1872]: 2024-06-25T18:29:16.798256Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 25 18:29:16.798463 waagent[1872]: 2024-06-25T18:29:16.798407Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:29:16.798711 waagent[1872]: 2024-06-25T18:29:16.798670Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:29:16.799121 waagent[1872]: 2024-06-25T18:29:16.799067Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 25 18:29:16.799419 waagent[1872]: 2024-06-25T18:29:16.799333Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 25 18:29:16.799825 waagent[1872]: 2024-06-25T18:29:16.799774Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 25 18:29:16.800002 waagent[1872]: 2024-06-25T18:29:16.799962Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:29:16.800072 waagent[1872]: 2024-06-25T18:29:16.800040Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:29:16.800215 waagent[1872]: 2024-06-25T18:29:16.800176Z INFO EnvHandler ExtHandler Configure routes Jun 25 18:29:16.800279 waagent[1872]: 2024-06-25T18:29:16.800248Z INFO EnvHandler ExtHandler Gateway:None Jun 25 18:29:16.800360 waagent[1872]: 2024-06-25T18:29:16.800304Z INFO EnvHandler ExtHandler Routes:None Jun 25 18:29:16.800689 waagent[1872]: 2024-06-25T18:29:16.800542Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 25 18:29:16.801223 waagent[1872]: 2024-06-25T18:29:16.801168Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 25 18:29:16.801475 waagent[1872]: 2024-06-25T18:29:16.801431Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 25 18:29:16.801475 waagent[1872]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 25 18:29:16.801475 waagent[1872]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jun 25 18:29:16.801475 waagent[1872]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 25 18:29:16.801475 waagent[1872]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:29:16.801475 waagent[1872]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:29:16.801475 waagent[1872]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:29:16.801914 waagent[1872]: 2024-06-25T18:29:16.801790Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 25 18:29:16.802077 waagent[1872]: 2024-06-25T18:29:16.802015Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jun 25 18:29:16.809472 waagent[1872]: 2024-06-25T18:29:16.809399Z INFO ExtHandler ExtHandler Jun 25 18:29:16.809983 waagent[1872]: 2024-06-25T18:29:16.809932Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3af24917-0c1d-428a-8c41-ac1feae35e97 correlation 68ca4473-3d86-4904-8b24-43ed95224231 created: 2024-06-25T18:27:50.224167Z] Jun 25 18:29:16.811329 waagent[1872]: 2024-06-25T18:29:16.810479Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 25 18:29:16.811329 waagent[1872]: 2024-06-25T18:29:16.811122Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jun 25 18:29:16.844083 waagent[1872]: 2024-06-25T18:29:16.844006Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6A939B05-B50D-44DB-94B0-146CD3E3AEB1;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 25 18:29:16.895963 waagent[1872]: 2024-06-25T18:29:16.895495Z INFO MonitorHandler ExtHandler Network interfaces: Jun 25 18:29:16.895963 waagent[1872]: Executing ['ip', '-a', '-o', 'link']: Jun 25 18:29:16.895963 waagent[1872]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 25 18:29:16.895963 waagent[1872]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:0c:04 brd ff:ff:ff:ff:ff:ff Jun 25 18:29:16.895963 waagent[1872]: 3: enP63746s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:0c:04 brd ff:ff:ff:ff:ff:ff\ altname enP63746p0s2 Jun 25 18:29:16.895963 waagent[1872]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 25 18:29:16.895963 waagent[1872]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 25 18:29:16.895963 waagent[1872]: 2: eth0 inet 10.200.20.36/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 25 18:29:16.895963 waagent[1872]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 25 18:29:16.895963 waagent[1872]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 25 18:29:16.895963 waagent[1872]: 2: eth0 inet6 fe80::222:48ff:feb9:c04/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 18:29:16.895963 waagent[1872]: 3: enP63746s1 inet6 fe80::222:48ff:feb9:c04/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 18:29:16.942344 waagent[1872]: 2024-06-25T18:29:16.941848Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jun 25 18:29:16.942344 waagent[1872]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:29:16.942344 waagent[1872]: pkts bytes target prot opt in out source destination Jun 25 18:29:16.942344 waagent[1872]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:29:16.942344 waagent[1872]: pkts bytes target prot opt in out source destination Jun 25 18:29:16.942344 waagent[1872]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:29:16.942344 waagent[1872]: pkts bytes target prot opt in out source destination Jun 25 18:29:16.942344 waagent[1872]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 18:29:16.942344 waagent[1872]: 9 1050 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 18:29:16.942344 waagent[1872]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 18:29:16.945664 waagent[1872]: 2024-06-25T18:29:16.945595Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 25 18:29:16.945664 waagent[1872]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:29:16.945664 waagent[1872]: pkts bytes target prot opt in out source destination Jun 25 18:29:16.945664 waagent[1872]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:29:16.945664 waagent[1872]: pkts bytes target prot opt in out source destination Jun 25 18:29:16.945664 waagent[1872]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:29:16.945664 waagent[1872]: pkts bytes target prot opt in out source destination Jun 25 18:29:16.945664 waagent[1872]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 18:29:16.945664 waagent[1872]: 11 1162 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 18:29:16.945664 waagent[1872]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 18:29:16.945930 waagent[1872]: 2024-06-25T18:29:16.945889Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 25 18:29:20.592730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:29:20.600516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:29:20.689028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:29:20.700695 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:29:21.263997 kubelet[2099]: E0625 18:29:21.263930 2099 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:29:21.268118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:29:21.268270 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:29:31.392013 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:29:31.397508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:29:31.487999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 18:29:31.492532 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:29:32.067089 kubelet[2115]: E0625 18:29:32.067024 2115 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:29:32.069757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:29:32.069911 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:29:32.395915 chronyd[1663]: Selected source PHC0 Jun 25 18:29:42.141985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 18:29:42.148496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:29:42.237118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:29:42.241506 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:29:42.775432 kubelet[2131]: E0625 18:29:42.775375 2131 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:29:42.778120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:29:42.778422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:29:50.892663 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jun 25 18:29:52.891926 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 18:29:52.897525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:29:52.995518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:29:53.006589 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:29:53.561387 kubelet[2150]: E0625 18:29:53.561304 2150 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:29:53.563853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:29:53.563996 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:29:54.325993 update_engine[1686]: I0625 18:29:54.325439 1686 update_attempter.cc:509] Updating boot flags... Jun 25 18:29:54.376397 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2171) Jun 25 18:29:54.461698 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2170) Jun 25 18:29:57.154460 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jun 25 18:29:57.160592 systemd[1]: Started sshd@0-10.200.20.36:22-10.200.16.10:48918.service - OpenSSH per-connection server daemon (10.200.16.10:48918). Jun 25 18:29:57.712483 sshd[2226]: Accepted publickey for core from 10.200.16.10 port 48918 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:29:57.713882 sshd[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:29:57.718348 systemd-logind[1683]: New session 3 of user core. Jun 25 18:29:57.727479 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:29:58.139584 systemd[1]: Started sshd@1-10.200.20.36:22-10.200.16.10:48928.service - OpenSSH per-connection server daemon (10.200.16.10:48928). Jun 25 18:29:58.579377 sshd[2231]: Accepted publickey for core from 10.200.16.10 port 48928 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:29:58.580742 sshd[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:29:58.585624 systemd-logind[1683]: New session 4 of user core. Jun 25 18:29:58.595539 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:29:58.903616 sshd[2231]: pam_unix(sshd:session): session closed for user core Jun 25 18:29:58.906382 systemd[1]: sshd@1-10.200.20.36:22-10.200.16.10:48928.service: Deactivated successfully. Jun 25 18:29:58.908229 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:29:58.910001 systemd-logind[1683]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:29:58.910878 systemd-logind[1683]: Removed session 4. Jun 25 18:29:58.988295 systemd[1]: Started sshd@2-10.200.20.36:22-10.200.16.10:48934.service - OpenSSH per-connection server daemon (10.200.16.10:48934). Jun 25 18:29:59.472824 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 48934 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:29:59.474176 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:29:59.478394 systemd-logind[1683]: New session 5 of user core. Jun 25 18:29:59.488475 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:29:59.828987 sshd[2238]: pam_unix(sshd:session): session closed for user core Jun 25 18:29:59.832659 systemd[1]: sshd@2-10.200.20.36:22-10.200.16.10:48934.service: Deactivated successfully. Jun 25 18:29:59.834229 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:29:59.835942 systemd-logind[1683]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:29:59.836939 systemd-logind[1683]: Removed session 5. Jun 25 18:29:59.908849 systemd[1]: Started sshd@3-10.200.20.36:22-10.200.16.10:48946.service - OpenSSH per-connection server daemon (10.200.16.10:48946). Jun 25 18:30:00.350757 sshd[2245]: Accepted publickey for core from 10.200.16.10 port 48946 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:30:00.352084 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:30:00.357272 systemd-logind[1683]: New session 6 of user core. Jun 25 18:30:00.367604 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:30:00.675390 sshd[2245]: pam_unix(sshd:session): session closed for user core Jun 25 18:30:00.678916 systemd[1]: sshd@3-10.200.20.36:22-10.200.16.10:48946.service: Deactivated successfully. Jun 25 18:30:00.681536 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:30:00.682460 systemd-logind[1683]: Session 6 logged out. Waiting for processes to exit. 
Jun 25 18:30:00.683638 systemd-logind[1683]: Removed session 6. Jun 25 18:30:00.755424 systemd[1]: Started sshd@4-10.200.20.36:22-10.200.16.10:48956.service - OpenSSH per-connection server daemon (10.200.16.10:48956). Jun 25 18:30:01.197626 sshd[2252]: Accepted publickey for core from 10.200.16.10 port 48956 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:30:01.199001 sshd[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:30:01.202929 systemd-logind[1683]: New session 7 of user core. Jun 25 18:30:01.213484 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:30:01.626697 sudo[2255]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:30:01.626962 sudo[2255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:30:01.665219 sudo[2255]: pam_unix(sudo:session): session closed for user root Jun 25 18:30:01.736049 sshd[2252]: pam_unix(sshd:session): session closed for user core Jun 25 18:30:01.739102 systemd[1]: sshd@4-10.200.20.36:22-10.200.16.10:48956.service: Deactivated successfully. Jun 25 18:30:01.740978 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:30:01.742746 systemd-logind[1683]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:30:01.743987 systemd-logind[1683]: Removed session 7. Jun 25 18:30:01.822605 systemd[1]: Started sshd@5-10.200.20.36:22-10.200.16.10:48958.service - OpenSSH per-connection server daemon (10.200.16.10:48958). Jun 25 18:30:02.300006 sshd[2260]: Accepted publickey for core from 10.200.16.10 port 48958 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:30:02.301452 sshd[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:30:02.305801 systemd-logind[1683]: New session 8 of user core. Jun 25 18:30:02.314469 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:30:02.570251 sudo[2264]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:30:02.570769 sudo[2264]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:30:02.574024 sudo[2264]: pam_unix(sudo:session): session closed for user root Jun 25 18:30:02.579010 sudo[2263]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:30:02.579240 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:30:02.596658 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:30:02.598640 auditctl[2267]: No rules Jun 25 18:30:02.598956 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:30:02.599177 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:30:02.602246 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:30:02.625919 augenrules[2285]: No rules Jun 25 18:30:02.628366 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:30:02.630858 sudo[2263]: pam_unix(sudo:session): session closed for user root Jun 25 18:30:02.720920 sshd[2260]: pam_unix(sshd:session): session closed for user core Jun 25 18:30:02.723653 systemd[1]: sshd@5-10.200.20.36:22-10.200.16.10:48958.service: Deactivated successfully. Jun 25 18:30:02.725279 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:30:02.726687 systemd-logind[1683]: Session 8 logged out. 
Waiting for processes to exit. Jun 25 18:30:02.727696 systemd-logind[1683]: Removed session 8. Jun 25 18:30:02.808523 systemd[1]: Started sshd@6-10.200.20.36:22-10.200.16.10:48960.service - OpenSSH per-connection server daemon (10.200.16.10:48960). Jun 25 18:30:03.285622 sshd[2293]: Accepted publickey for core from 10.200.16.10 port 48960 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:30:03.286959 sshd[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:30:03.290881 systemd-logind[1683]: New session 9 of user core. Jun 25 18:30:03.298545 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:30:03.555769 sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:30:03.556026 sudo[2296]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:30:03.641806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 18:30:03.651522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:03.834593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:03.838937 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:30:03.877963 kubelet[2309]: E0625 18:30:03.877906 2309 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:30:03.880489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:30:03.880638 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:30:04.944588 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:30:04.945549 (dockerd)[2322]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:30:05.705041 dockerd[2322]: time="2024-06-25T18:30:05.704754671Z" level=info msg="Starting up" Jun 25 18:30:05.856667 dockerd[2322]: time="2024-06-25T18:30:05.856602909Z" level=info msg="Loading containers: start." Jun 25 18:30:06.068335 kernel: Initializing XFRM netlink socket Jun 25 18:30:06.229788 systemd-networkd[1347]: docker0: Link UP Jun 25 18:30:06.247227 dockerd[2322]: time="2024-06-25T18:30:06.247139111Z" level=info msg="Loading containers: done." Jun 25 18:30:06.625379 dockerd[2322]: time="2024-06-25T18:30:06.625265044Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:30:06.625528 dockerd[2322]: time="2024-06-25T18:30:06.625500644Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:30:06.625846 dockerd[2322]: time="2024-06-25T18:30:06.625625365Z" level=info msg="Daemon has completed initialization" Jun 25 18:30:06.669814 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jun 25 18:30:06.671078 dockerd[2322]: time="2024-06-25T18:30:06.670257110Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:30:08.218860 containerd[1730]: time="2024-06-25T18:30:08.218818086Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jun 25 18:30:09.224396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount99260335.mount: Deactivated successfully. Jun 25 18:30:13.891811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jun 25 18:30:13.902530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:13.999117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:14.004445 (kubelet)[2498]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:30:14.043766 kubelet[2498]: E0625 18:30:14.043718 2498 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:30:14.045869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:30:14.046003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:30:18.033357 containerd[1730]: time="2024-06-25T18:30:18.032746890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:18.036704 containerd[1730]: time="2024-06-25T18:30:18.036469458Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=29940430" Jun 25 18:30:18.039551 containerd[1730]: time="2024-06-25T18:30:18.039494985Z" level=info msg="ImageCreate event name:\"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:18.044439 containerd[1730]: time="2024-06-25T18:30:18.044380715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:18.045958 containerd[1730]: time="2024-06-25T18:30:18.045774078Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"29937230\" in 9.826515751s" Jun 25 18:30:18.045958 containerd[1730]: time="2024-06-25T18:30:18.045813598Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\"" Jun 25 18:30:18.066851 containerd[1730]: time="2024-06-25T18:30:18.066730164Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jun 25 18:30:22.076796 containerd[1730]: time="2024-06-25T18:30:22.076723127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:22.079684 containerd[1730]: 
time="2024-06-25T18:30:22.079487414Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=26881371" Jun 25 18:30:22.082805 containerd[1730]: time="2024-06-25T18:30:22.082778062Z" level=info msg="ImageCreate event name:\"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:22.088416 containerd[1730]: time="2024-06-25T18:30:22.088375715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:22.089887 containerd[1730]: time="2024-06-25T18:30:22.089538438Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"28368865\" in 4.022763314s" Jun 25 18:30:22.089887 containerd[1730]: time="2024-06-25T18:30:22.089572078Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\"" Jun 25 18:30:22.109934 containerd[1730]: time="2024-06-25T18:30:22.109879687Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jun 25 18:30:24.141941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jun 25 18:30:24.150569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:25.540352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:25.544096 (kubelet)[2548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:30:25.580926 kubelet[2548]: E0625 18:30:25.580836 2548 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:30:25.583493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:30:25.583637 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 18:30:27.018745 containerd[1730]: time="2024-06-25T18:30:27.018687898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:27.021148 containerd[1730]: time="2024-06-25T18:30:27.020953023Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=16155688" Jun 25 18:30:27.024921 containerd[1730]: time="2024-06-25T18:30:27.024874512Z" level=info msg="ImageCreate event name:\"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:27.030638 containerd[1730]: time="2024-06-25T18:30:27.030375886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:27.031499 containerd[1730]: time="2024-06-25T18:30:27.031467208Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"17643200\" in 4.921367681s" Jun 25 18:30:27.031547 containerd[1730]: time="2024-06-25T18:30:27.031502968Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\"" Jun 25 18:30:27.050801 containerd[1730]: time="2024-06-25T18:30:27.050752894Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jun 25 18:30:28.236489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654919533.mount: Deactivated successfully. 
Jun 25 18:30:28.585800 containerd[1730]: time="2024-06-25T18:30:28.585672670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:28.588152 containerd[1730]: time="2024-06-25T18:30:28.588118756Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=25634092" Jun 25 18:30:28.591976 containerd[1730]: time="2024-06-25T18:30:28.591947686Z" level=info msg="ImageCreate event name:\"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:28.596021 containerd[1730]: time="2024-06-25T18:30:28.595944096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:28.596931 containerd[1730]: time="2024-06-25T18:30:28.596499978Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"25633111\" in 1.545705364s" Jun 25 18:30:28.596931 containerd[1730]: time="2024-06-25T18:30:28.596535698Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\"" Jun 25 18:30:28.615816 containerd[1730]: time="2024-06-25T18:30:28.615773267Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 18:30:29.898812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667172676.mount: Deactivated successfully. 
Jun 25 18:30:31.009868 containerd[1730]: time="2024-06-25T18:30:31.009812461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:31.013030 containerd[1730]: time="2024-06-25T18:30:31.012987149Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jun 25 18:30:31.016504 containerd[1730]: time="2024-06-25T18:30:31.016457877Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:31.021523 containerd[1730]: time="2024-06-25T18:30:31.021474490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:31.022761 containerd[1730]: time="2024-06-25T18:30:31.022617893Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.406639106s" Jun 25 18:30:31.022761 containerd[1730]: time="2024-06-25T18:30:31.022654773Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jun 25 18:30:31.043035 containerd[1730]: time="2024-06-25T18:30:31.042991705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:30:31.668410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2662764044.mount: Deactivated successfully. 
Jun 25 18:30:31.693352 containerd[1730]: time="2024-06-25T18:30:31.693293275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:31.698163 containerd[1730]: time="2024-06-25T18:30:31.698113047Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jun 25 18:30:31.702162 containerd[1730]: time="2024-06-25T18:30:31.702111657Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:31.706900 containerd[1730]: time="2024-06-25T18:30:31.706850389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:31.707588 containerd[1730]: time="2024-06-25T18:30:31.707551951Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 664.522926ms" Jun 25 18:30:31.707588 containerd[1730]: time="2024-06-25T18:30:31.707584831Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 18:30:31.727034 containerd[1730]: time="2024-06-25T18:30:31.726827440Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jun 25 18:30:32.491834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257545893.mount: Deactivated successfully. Jun 25 18:30:35.641806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jun 25 18:30:35.652474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:37.143870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:37.154606 (kubelet)[2683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:30:37.459378 kubelet[2683]: E0625 18:30:37.456830 2683 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:30:37.459892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:30:37.460147 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 18:30:37.745234 containerd[1730]: time="2024-06-25T18:30:37.744583264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:37.747455 containerd[1730]: time="2024-06-25T18:30:37.747154591Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jun 25 18:30:37.750562 containerd[1730]: time="2024-06-25T18:30:37.750518360Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:37.755767 containerd[1730]: time="2024-06-25T18:30:37.755718854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:37.757031 containerd[1730]: time="2024-06-25T18:30:37.756898817Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 6.030034097s" Jun 25 18:30:37.757031 containerd[1730]: time="2024-06-25T18:30:37.756938937Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jun 25 18:30:42.221794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:42.233474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:42.244952 systemd[1]: Reloading requested from client PID 2758 ('systemctl') (unit session-9.scope)... Jun 25 18:30:42.245003 systemd[1]: Reloading... Jun 25 18:30:42.346493 zram_generator::config[2795]: No configuration found. Jun 25 18:30:42.458497 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:30:42.535988 systemd[1]: Reloading finished in 290 ms. Jun 25 18:30:42.588512 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:42.590891 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:30:42.591227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:42.593558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:43.752885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:43.763835 (kubelet)[2864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:30:43.799978 kubelet[2864]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:30:43.800289 kubelet[2864]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jun 25 18:30:43.800360 kubelet[2864]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:30:43.800498 kubelet[2864]: I0625 18:30:43.800463 2864 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:30:44.577043 kubelet[2864]: I0625 18:30:44.577009 2864 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 18:30:44.577203 kubelet[2864]: I0625 18:30:44.577193 2864 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:30:44.577487 kubelet[2864]: I0625 18:30:44.577473 2864 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 18:30:44.590543 kubelet[2864]: E0625 18:30:44.590499 2864 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:44.591082 kubelet[2864]: I0625 18:30:44.591057 2864 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:30:44.598756 kubelet[2864]: I0625 18:30:44.598732 2864 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:30:44.599307 kubelet[2864]: I0625 18:30:44.599275 2864 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:30:44.601034 kubelet[2864]: I0625 18:30:44.599403 2864 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4012.0.0-a-5284b277fa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:30:44.601150 kubelet[2864]: I0625 18:30:44.601043 2864 topology_manager.go:138] "Creating 
topology manager with none policy" Jun 25 18:30:44.601150 kubelet[2864]: I0625 18:30:44.601055 2864 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:30:44.601209 kubelet[2864]: I0625 18:30:44.601191 2864 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:30:44.603047 kubelet[2864]: I0625 18:30:44.602495 2864 kubelet.go:400] "Attempting to sync node with API server" Jun 25 18:30:44.603047 kubelet[2864]: I0625 18:30:44.602520 2864 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:30:44.603047 kubelet[2864]: I0625 18:30:44.602553 2864 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:30:44.603047 kubelet[2864]: I0625 18:30:44.602579 2864 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:30:44.603782 kubelet[2864]: I0625 18:30:44.603762 2864 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:30:44.604026 kubelet[2864]: I0625 18:30:44.604013 2864 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:30:44.604121 kubelet[2864]: W0625 18:30:44.604112 2864 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 18:30:44.604715 kubelet[2864]: I0625 18:30:44.604699 2864 server.go:1264] "Started kubelet" Jun 25 18:30:44.607963 kubelet[2864]: I0625 18:30:44.607921 2864 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:30:44.608861 kubelet[2864]: I0625 18:30:44.608844 2864 server.go:455] "Adding debug handlers to kubelet server" Jun 25 18:30:44.609814 kubelet[2864]: I0625 18:30:44.609767 2864 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:30:44.610111 kubelet[2864]: I0625 18:30:44.610096 2864 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:30:44.610819 kubelet[2864]: I0625 18:30:44.610802 2864 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:30:44.612523 kubelet[2864]: W0625 18:30:44.612020 2864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:44.612523 kubelet[2864]: E0625 18:30:44.612093 2864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:44.612523 kubelet[2864]: W0625 18:30:44.612165 2864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-5284b277fa&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:44.612523 kubelet[2864]: E0625 18:30:44.612190 2864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-5284b277fa&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:44.612652 kubelet[2864]: E0625 18:30:44.612229 2864 event.go:368] "Unable to 
write event (may retry after sleeping)" err="Post \"https://10.200.20.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4012.0.0-a-5284b277fa.17dc52d7cfb1ca16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012.0.0-a-5284b277fa,UID:ci-4012.0.0-a-5284b277fa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012.0.0-a-5284b277fa,},FirstTimestamp:2024-06-25 18:30:44.604668438 +0000 UTC m=+0.837838505,LastTimestamp:2024-06-25 18:30:44.604668438 +0000 UTC m=+0.837838505,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.0.0-a-5284b277fa,}" Jun 25 18:30:44.614858 kubelet[2864]: E0625 18:30:44.614830 2864 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:30:44.616279 kubelet[2864]: E0625 18:30:44.615970 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:44.616279 kubelet[2864]: I0625 18:30:44.616107 2864 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:30:44.616520 kubelet[2864]: I0625 18:30:44.616425 2864 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 18:30:44.616520 kubelet[2864]: I0625 18:30:44.616484 2864 reconciler.go:26] "Reconciler: start to sync state" Jun 25 18:30:44.616890 kubelet[2864]: W0625 18:30:44.616810 2864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:44.616890 kubelet[2864]: E0625 18:30:44.616862 2864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:44.617703 kubelet[2864]: E0625 18:30:44.617532 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-5284b277fa?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="200ms" Jun 25 18:30:44.618114 kubelet[2864]: I0625 18:30:44.618071 2864 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:30:44.619529 kubelet[2864]: I0625 18:30:44.619307 2864 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:30:44.619529 kubelet[2864]: I0625 18:30:44.619344 2864 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:30:44.667946 kubelet[2864]: I0625 18:30:44.667904 2864 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:30:44.669281 kubelet[2864]: I0625 18:30:44.669155 2864 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:30:44.669281 kubelet[2864]: I0625 18:30:44.669190 2864 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:30:44.669281 kubelet[2864]: I0625 18:30:44.669210 2864 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 18:30:44.669281 kubelet[2864]: E0625 18:30:44.669247 2864 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:30:44.670659 kubelet[2864]: W0625 18:30:44.670405 2864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:44.670659 kubelet[2864]: E0625 18:30:44.670458 2864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:44.675853 kubelet[2864]: I0625 18:30:44.675801 2864 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:30:44.675853 kubelet[2864]: I0625 18:30:44.675849 2864 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:30:44.675973 kubelet[2864]: I0625 18:30:44.675867 2864 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:30:44.680536 kubelet[2864]: I0625 18:30:44.680509 2864 policy_none.go:49] "None policy: Start" Jun 25 18:30:44.681103 kubelet[2864]: I0625 18:30:44.681083 2864 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:30:44.681103 kubelet[2864]: I0625 18:30:44.681110 2864 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:30:44.688757 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:30:44.699560 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:30:44.702467 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 25 18:30:44.710452 kubelet[2864]: I0625 18:30:44.710140 2864 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:30:44.711068 kubelet[2864]: I0625 18:30:44.710951 2864 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 18:30:44.712605 kubelet[2864]: I0625 18:30:44.711095 2864 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:30:44.716758 kubelet[2864]: E0625 18:30:44.716733 2864 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:44.718347 kubelet[2864]: I0625 18:30:44.718306 2864 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.718803 kubelet[2864]: E0625 18:30:44.718782 2864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.769884 kubelet[2864]: I0625 18:30:44.769840 2864 topology_manager.go:215] "Topology Admit Handler" podUID="eb4f608165760b4d05def7e2edfddf50" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.771688 kubelet[2864]: I0625 18:30:44.771654 2864 topology_manager.go:215] "Topology Admit Handler" podUID="6ef70bdb750175280276046eb1196ed2" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.773262 kubelet[2864]: I0625 18:30:44.773230 2864 topology_manager.go:215] "Topology Admit Handler" podUID="86c3ff0ae6f54f6bfdb47ce8cce6d114" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.780177 systemd[1]: Created slice kubepods-burstable-podeb4f608165760b4d05def7e2edfddf50.slice - libcontainer container kubepods-burstable-podeb4f608165760b4d05def7e2edfddf50.slice. Jun 25 18:30:44.790929 systemd[1]: Created slice kubepods-burstable-pod6ef70bdb750175280276046eb1196ed2.slice - libcontainer container kubepods-burstable-pod6ef70bdb750175280276046eb1196ed2.slice. Jun 25 18:30:44.801205 systemd[1]: Created slice kubepods-burstable-pod86c3ff0ae6f54f6bfdb47ce8cce6d114.slice - libcontainer container kubepods-burstable-pod86c3ff0ae6f54f6bfdb47ce8cce6d114.slice. 
Jun 25 18:30:44.818218 kubelet[2864]: E0625 18:30:44.818176 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-5284b277fa?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="400ms" Jun 25 18:30:44.917465 kubelet[2864]: I0625 18:30:44.917423 2864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb4f608165760b4d05def7e2edfddf50-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-a-5284b277fa\" (UID: \"eb4f608165760b4d05def7e2edfddf50\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.917465 kubelet[2864]: I0625 18:30:44.917460 2864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ef70bdb750175280276046eb1196ed2-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-5284b277fa\" (UID: \"6ef70bdb750175280276046eb1196ed2\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.917742 kubelet[2864]: I0625 18:30:44.917480 2864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ef70bdb750175280276046eb1196ed2-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-5284b277fa\" (UID: \"6ef70bdb750175280276046eb1196ed2\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.917742 kubelet[2864]: I0625 18:30:44.917496 2864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb4f608165760b4d05def7e2edfddf50-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-a-5284b277fa\" (UID: \"eb4f608165760b4d05def7e2edfddf50\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.917742 kubelet[2864]: I0625 18:30:44.917512 2864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb4f608165760b4d05def7e2edfddf50-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-a-5284b277fa\" (UID: \"eb4f608165760b4d05def7e2edfddf50\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.917742 kubelet[2864]: I0625 18:30:44.917530 2864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ef70bdb750175280276046eb1196ed2-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-a-5284b277fa\" (UID: \"6ef70bdb750175280276046eb1196ed2\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.917742 kubelet[2864]: I0625 18:30:44.917547 2864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ef70bdb750175280276046eb1196ed2-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-a-5284b277fa\" (UID: \"6ef70bdb750175280276046eb1196ed2\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.917853 kubelet[2864]: I0625 18:30:44.917564 2864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6ef70bdb750175280276046eb1196ed2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-a-5284b277fa\" (UID: \"6ef70bdb750175280276046eb1196ed2\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.917853 kubelet[2864]: I0625 18:30:44.917593 2864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86c3ff0ae6f54f6bfdb47ce8cce6d114-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-a-5284b277fa\" (UID: \"86c3ff0ae6f54f6bfdb47ce8cce6d114\") " pod="kube-system/kube-scheduler-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.920877 kubelet[2864]: I0625 18:30:44.920843 2864 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:44.921236 kubelet[2864]: E0625 18:30:44.921209 2864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:45.089859 containerd[1730]: time="2024-06-25T18:30:45.089783698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-a-5284b277fa,Uid:eb4f608165760b4d05def7e2edfddf50,Namespace:kube-system,Attempt:0,}" Jun 25 18:30:45.099933 containerd[1730]: time="2024-06-25T18:30:45.099826604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-a-5284b277fa,Uid:6ef70bdb750175280276046eb1196ed2,Namespace:kube-system,Attempt:0,}" Jun 25 18:30:45.104650 containerd[1730]: time="2024-06-25T18:30:45.104404656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-a-5284b277fa,Uid:86c3ff0ae6f54f6bfdb47ce8cce6d114,Namespace:kube-system,Attempt:0,}" Jun 25 18:30:45.218745 kubelet[2864]: E0625 18:30:45.218625 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-5284b277fa?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="800ms" Jun 25 18:30:45.322991 kubelet[2864]: I0625 18:30:45.322695 2864 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:45.322991 kubelet[2864]: E0625 18:30:45.322971 2864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:45.642608 kubelet[2864]: W0625 18:30:45.642541 2864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:45.642699 kubelet[2864]: E0625 18:30:45.642616 2864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:45.681416 kubelet[2864]: W0625 18:30:45.681333 2864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: 
connect: connection refused Jun 25 18:30:45.681416 kubelet[2864]: E0625 18:30:45.681395 2864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:46.019226 kubelet[2864]: E0625 18:30:46.019100 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-5284b277fa?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="1.6s" Jun 25 18:30:46.062007 kubelet[2864]: W0625 18:30:46.061930 2864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-5284b277fa&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:46.062007 kubelet[2864]: E0625 18:30:46.062008 2864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-5284b277fa&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:46.125493 kubelet[2864]: I0625 18:30:46.125352 2864 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:46.125761 kubelet[2864]: E0625 18:30:46.125679 2864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:46.156392 kubelet[2864]: W0625 18:30:46.156287 2864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:46.156392 kubelet[2864]: E0625 18:30:46.156370 2864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:46.714675 kubelet[2864]: E0625 18:30:46.714643 2864 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:47.159075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421023306.mount: Deactivated successfully. 
Jun 25 18:30:47.187179 containerd[1730]: time="2024-06-25T18:30:47.187108545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:30:47.190803 containerd[1730]: time="2024-06-25T18:30:47.190762435Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jun 25 18:30:47.193534 containerd[1730]: time="2024-06-25T18:30:47.193498002Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:30:47.196340 containerd[1730]: time="2024-06-25T18:30:47.196205329Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:30:47.199683 containerd[1730]: time="2024-06-25T18:30:47.199644938Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:30:47.205286 containerd[1730]: time="2024-06-25T18:30:47.205249352Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:30:47.209131 containerd[1730]: time="2024-06-25T18:30:47.209075642Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:30:47.213081 containerd[1730]: time="2024-06-25T18:30:47.213040372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:30:47.214235 containerd[1730]: time="2024-06-25T18:30:47.214039375Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.114125971s" Jun 25 18:30:47.216286 containerd[1730]: time="2024-06-25T18:30:47.216247821Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.111761485s" Jun 25 18:30:47.216989 containerd[1730]: time="2024-06-25T18:30:47.216954983Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.127076765s" Jun 25 18:30:47.619778 kubelet[2864]: E0625 18:30:47.619663 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-5284b277fa?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="3.2s" 
Jun 25 18:30:47.728170 kubelet[2864]: I0625 18:30:47.728138 2864 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:47.728778 kubelet[2864]: E0625 18:30:47.728747 2864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:47.748387 kubelet[2864]: W0625 18:30:47.748332 2864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:47.748387 kubelet[2864]: E0625 18:30:47.748371 2864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:47.913795 containerd[1730]: time="2024-06-25T18:30:47.912917310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:30:47.913795 containerd[1730]: time="2024-06-25T18:30:47.913344911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:30:47.913795 containerd[1730]: time="2024-06-25T18:30:47.913363231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:30:47.913795 containerd[1730]: time="2024-06-25T18:30:47.913373191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:30:47.920340 containerd[1730]: time="2024-06-25T18:30:47.919225047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:30:47.920340 containerd[1730]: time="2024-06-25T18:30:47.919484927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:30:47.920340 containerd[1730]: time="2024-06-25T18:30:47.919513607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:30:47.920340 containerd[1730]: time="2024-06-25T18:30:47.919525527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:30:47.921868 containerd[1730]: time="2024-06-25T18:30:47.921220332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:30:47.921868 containerd[1730]: time="2024-06-25T18:30:47.921276972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:30:47.921868 containerd[1730]: time="2024-06-25T18:30:47.921296572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:30:47.921868 containerd[1730]: time="2024-06-25T18:30:47.921407572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:30:47.956556 systemd[1]: Started cri-containerd-752610c1ce9e5b08fae10e03da275b7f92918bd2c962605538ea18c63da29f6f.scope - libcontainer container 752610c1ce9e5b08fae10e03da275b7f92918bd2c962605538ea18c63da29f6f. Jun 25 18:30:47.958492 systemd[1]: Started cri-containerd-9d247b654e0c0589269e96829d97873ec41450bbc38cde938d2d5e8a520121e1.scope - libcontainer container 9d247b654e0c0589269e96829d97873ec41450bbc38cde938d2d5e8a520121e1. Jun 25 18:30:47.960343 systemd[1]: Started cri-containerd-bca5474793c576748acb067566c82a98fa779b6ba3727d89ab1c41f227348325.scope - libcontainer container bca5474793c576748acb067566c82a98fa779b6ba3727d89ab1c41f227348325. Jun 25 18:30:47.988214 kubelet[2864]: W0625 18:30:47.988155 2864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:47.988214 kubelet[2864]: E0625 18:30:47.988204 2864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:48.005862 containerd[1730]: time="2024-06-25T18:30:48.005702391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-a-5284b277fa,Uid:6ef70bdb750175280276046eb1196ed2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d247b654e0c0589269e96829d97873ec41450bbc38cde938d2d5e8a520121e1\"" Jun 25 18:30:48.010525 containerd[1730]: time="2024-06-25T18:30:48.010396363Z" level=info msg="CreateContainer within sandbox \"9d247b654e0c0589269e96829d97873ec41450bbc38cde938d2d5e8a520121e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:30:48.018558 containerd[1730]: time="2024-06-25T18:30:48.018423424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-a-5284b277fa,Uid:86c3ff0ae6f54f6bfdb47ce8cce6d114,Namespace:kube-system,Attempt:0,} returns sandbox id \"752610c1ce9e5b08fae10e03da275b7f92918bd2c962605538ea18c63da29f6f\"" Jun 25 18:30:48.022204 containerd[1730]: time="2024-06-25T18:30:48.021530352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-a-5284b277fa,Uid:eb4f608165760b4d05def7e2edfddf50,Namespace:kube-system,Attempt:0,} returns sandbox id \"bca5474793c576748acb067566c82a98fa779b6ba3727d89ab1c41f227348325\"" Jun 25 18:30:48.022667 containerd[1730]: time="2024-06-25T18:30:48.022644835Z" level=info msg="CreateContainer within sandbox \"752610c1ce9e5b08fae10e03da275b7f92918bd2c962605538ea18c63da29f6f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:30:48.027379 containerd[1730]: time="2024-06-25T18:30:48.027340567Z" level=info msg="CreateContainer within sandbox \"bca5474793c576748acb067566c82a98fa779b6ba3727d89ab1c41f227348325\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:30:48.068128 containerd[1730]: time="2024-06-25T18:30:48.068074473Z" level=info msg="CreateContainer within sandbox \"9d247b654e0c0589269e96829d97873ec41450bbc38cde938d2d5e8a520121e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a6cbcb5cb6e45c31cf95a5e59184d8798d99642aa750aa340f44c45840306ac5\"" Jun 25 18:30:48.068856 containerd[1730]: 
time="2024-06-25T18:30:48.068827835Z" level=info msg="StartContainer for \"a6cbcb5cb6e45c31cf95a5e59184d8798d99642aa750aa340f44c45840306ac5\"" Jun 25 18:30:48.077531 containerd[1730]: time="2024-06-25T18:30:48.077481418Z" level=info msg="CreateContainer within sandbox \"bca5474793c576748acb067566c82a98fa779b6ba3727d89ab1c41f227348325\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"90a6e0043976c282b95240bd0dd60396d3eb2f209f5b1789ee9a7064b883acc3\"" Jun 25 18:30:48.078935 containerd[1730]: time="2024-06-25T18:30:48.077981699Z" level=info msg="StartContainer for \"90a6e0043976c282b95240bd0dd60396d3eb2f209f5b1789ee9a7064b883acc3\"" Jun 25 18:30:48.084974 containerd[1730]: time="2024-06-25T18:30:48.084509036Z" level=info msg="CreateContainer within sandbox \"752610c1ce9e5b08fae10e03da275b7f92918bd2c962605538ea18c63da29f6f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"00c71c73e32a6f33d8e63c5579462ef9a7870a239e405fe66404e7c69b6163a1\"" Jun 25 18:30:48.086251 containerd[1730]: time="2024-06-25T18:30:48.085889120Z" level=info msg="StartContainer for \"00c71c73e32a6f33d8e63c5579462ef9a7870a239e405fe66404e7c69b6163a1\"" Jun 25 18:30:48.094488 systemd[1]: Started cri-containerd-a6cbcb5cb6e45c31cf95a5e59184d8798d99642aa750aa340f44c45840306ac5.scope - libcontainer container a6cbcb5cb6e45c31cf95a5e59184d8798d99642aa750aa340f44c45840306ac5. Jun 25 18:30:48.119517 systemd[1]: Started cri-containerd-90a6e0043976c282b95240bd0dd60396d3eb2f209f5b1789ee9a7064b883acc3.scope - libcontainer container 90a6e0043976c282b95240bd0dd60396d3eb2f209f5b1789ee9a7064b883acc3. Jun 25 18:30:48.123623 systemd[1]: Started cri-containerd-00c71c73e32a6f33d8e63c5579462ef9a7870a239e405fe66404e7c69b6163a1.scope - libcontainer container 00c71c73e32a6f33d8e63c5579462ef9a7870a239e405fe66404e7c69b6163a1. 
Jun 25 18:30:48.153406 containerd[1730]: time="2024-06-25T18:30:48.153267815Z" level=info msg="StartContainer for \"a6cbcb5cb6e45c31cf95a5e59184d8798d99642aa750aa340f44c45840306ac5\" returns successfully" Jun 25 18:30:48.186209 containerd[1730]: time="2024-06-25T18:30:48.185348738Z" level=info msg="StartContainer for \"00c71c73e32a6f33d8e63c5579462ef9a7870a239e405fe66404e7c69b6163a1\" returns successfully" Jun 25 18:30:48.192894 containerd[1730]: time="2024-06-25T18:30:48.192791157Z" level=info msg="StartContainer for \"90a6e0043976c282b95240bd0dd60396d3eb2f209f5b1789ee9a7064b883acc3\" returns successfully" Jun 25 18:30:48.214761 kubelet[2864]: W0625 18:30:48.214714 2864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-5284b277fa&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:48.214761 kubelet[2864]: E0625 18:30:48.214757 2864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-5284b277fa&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 18:30:50.905823 kubelet[2864]: E0625 18:30:50.905772 2864 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012.0.0-a-5284b277fa\" not found" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:50.932849 kubelet[2864]: I0625 18:30:50.932812 2864 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:51.003629 kubelet[2864]: I0625 18:30:51.003587 2864 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:51.063585 kubelet[2864]: E0625 18:30:51.063538 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:51.165051 kubelet[2864]: E0625 18:30:51.164653 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:51.265141 kubelet[2864]: E0625 18:30:51.265100 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:51.365556 kubelet[2864]: E0625 18:30:51.365516 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:51.466751 kubelet[2864]: E0625 18:30:51.466363 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:51.566753 kubelet[2864]: E0625 18:30:51.566711 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:51.667704 kubelet[2864]: E0625 18:30:51.667660 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:51.768396 kubelet[2864]: E0625 18:30:51.768029 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:51.869096 kubelet[2864]: E0625 18:30:51.869052 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 
18:30:51.969562 kubelet[2864]: E0625 18:30:51.969513 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:52.070787 kubelet[2864]: E0625 18:30:52.070372 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:52.170912 kubelet[2864]: E0625 18:30:52.170875 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:52.271380 kubelet[2864]: E0625 18:30:52.271340 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:52.372732 kubelet[2864]: E0625 18:30:52.372449 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:52.473214 kubelet[2864]: E0625 18:30:52.473167 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:52.573750 kubelet[2864]: E0625 18:30:52.573689 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:52.674035 kubelet[2864]: E0625 18:30:52.673997 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:52.774646 kubelet[2864]: E0625 18:30:52.774608 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:52.875114 kubelet[2864]: E0625 18:30:52.875081 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:52.975655 kubelet[2864]: E0625 18:30:52.975561 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:53.075967 kubelet[2864]: E0625 18:30:53.075924 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:53.176715 kubelet[2864]: E0625 18:30:53.176676 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:53.277305 kubelet[2864]: E0625 18:30:53.277192 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-5284b277fa\" not found" Jun 25 18:30:53.428890 systemd[1]: Reloading requested from client PID 3139 ('systemctl') (unit session-9.scope)... Jun 25 18:30:53.428904 systemd[1]: Reloading... Jun 25 18:30:53.503391 zram_generator::config[3176]: No configuration found. Jun 25 18:30:53.608928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:30:53.609691 kubelet[2864]: I0625 18:30:53.609270 2864 apiserver.go:52] "Watching apiserver" Jun 25 18:30:53.616965 kubelet[2864]: I0625 18:30:53.616940 2864 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 18:30:53.698103 systemd[1]: Reloading finished in 268 ms. Jun 25 18:30:53.731989 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 25 18:30:53.732216 kubelet[2864]: I0625 18:30:53.732148 2864 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:30:53.743122 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:30:53.743358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:53.750522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:53.836965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:53.842047 (kubelet)[3240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:30:53.890276 kubelet[3240]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:30:53.892341 kubelet[3240]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:30:53.892341 kubelet[3240]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:30:53.892341 kubelet[3240]: I0625 18:30:53.890821 3240 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:30:53.898340 kubelet[3240]: I0625 18:30:53.896788 3240 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 18:30:53.898494 kubelet[3240]: I0625 18:30:53.898481 3240 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:30:53.898772 kubelet[3240]: I0625 18:30:53.898755 3240 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 18:30:53.900109 kubelet[3240]: I0625 18:30:53.900081 3240 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:30:53.903591 kubelet[3240]: I0625 18:30:53.903411 3240 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:30:53.922589 kubelet[3240]: I0625 18:30:53.922558 3240 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:30:53.922940 kubelet[3240]: I0625 18:30:53.922908 3240 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:30:53.925347 kubelet[3240]: I0625 18:30:53.924364 3240 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4012.0.0-a-5284b277fa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:30:53.925347 kubelet[3240]: I0625 18:30:53.924558 3240 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:30:53.925347 kubelet[3240]: I0625 18:30:53.924570 3240 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:30:53.925347 kubelet[3240]: I0625 18:30:53.924604 3240 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:30:53.925347 kubelet[3240]: I0625 18:30:53.924720 3240 kubelet.go:400] "Attempting to sync node with API server" Jun 25 18:30:53.925551 kubelet[3240]: I0625 18:30:53.924731 3240 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:30:53.925551 kubelet[3240]: I0625 18:30:53.924756 3240 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:30:53.925551 kubelet[3240]: I0625 18:30:53.924772 3240 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:30:53.929773 kubelet[3240]: I0625 18:30:53.929743 3240 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:30:53.929949 kubelet[3240]: I0625 18:30:53.929929 3240 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:30:53.930368 kubelet[3240]: I0625 18:30:53.930352 3240 server.go:1264] "Started kubelet" Jun 25 18:30:53.932820 kubelet[3240]: I0625 18:30:53.932741 3240 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:30:53.935645 kubelet[3240]: I0625 18:30:53.935623 3240 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" 
Jun 25 18:30:53.935772 kubelet[3240]: I0625 18:30:53.935752 3240 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:30:53.936689 kubelet[3240]: I0625 18:30:53.936667 3240 server.go:455] "Adding debug handlers to kubelet server" Jun 25 18:30:53.937723 kubelet[3240]: I0625 18:30:53.936718 3240 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:30:53.946949 kubelet[3240]: I0625 18:30:53.946928 3240 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:30:53.960694 kubelet[3240]: I0625 18:30:53.949846 3240 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 18:30:53.960962 kubelet[3240]: I0625 18:30:53.960947 3240 reconciler.go:26] "Reconciler: start to sync state" Jun 25 18:30:53.967253 kubelet[3240]: I0625 18:30:53.965680 3240 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:30:53.975430 kubelet[3240]: I0625 18:30:53.975404 3240 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:30:53.976265 kubelet[3240]: I0625 18:30:53.976246 3240 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:30:53.976392 kubelet[3240]: I0625 18:30:53.976382 3240 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:30:53.976465 kubelet[3240]: I0625 18:30:53.976456 3240 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 18:30:53.976556 kubelet[3240]: E0625 18:30:53.976541 3240 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:30:53.981688 kubelet[3240]: I0625 18:30:53.981669 3240 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:30:53.981778 kubelet[3240]: I0625 18:30:53.981768 3240 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:30:53.982432 kubelet[3240]: E0625 18:30:53.982399 3240 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:30:54.013441 kubelet[3240]: I0625 18:30:54.013401 3240 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:30:54.013441 kubelet[3240]: I0625 18:30:54.013430 3240 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:30:54.013441 kubelet[3240]: I0625 18:30:54.013450 3240 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:30:54.013614 kubelet[3240]: I0625 18:30:54.013581 3240 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:30:54.013614 kubelet[3240]: I0625 18:30:54.013591 3240 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:30:54.013614 kubelet[3240]: I0625 18:30:54.013607 3240 policy_none.go:49] "None policy: Start" Jun 25 18:30:54.014163 kubelet[3240]: I0625 18:30:54.014143 3240 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:30:54.014202 kubelet[3240]: I0625 18:30:54.014169 3240 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:30:54.014306 kubelet[3240]: I0625 18:30:54.014289 3240 state_mem.go:75] "Updated machine memory state" Jun 25 18:30:54.018493 kubelet[3240]: I0625 18:30:54.018400 3240 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:30:54.018589 kubelet[3240]: I0625 18:30:54.018556 3240 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 18:30:54.018669 kubelet[3240]: I0625 18:30:54.018652 3240 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:30:54.052894 kubelet[3240]: I0625 18:30:54.052857 3240 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.063820 kubelet[3240]: I0625 18:30:54.063651 3240 kubelet_node_status.go:112] "Node was previously registered" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.063820 kubelet[3240]: I0625 18:30:54.063732 3240 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.076838 kubelet[3240]: I0625 18:30:54.076798 3240 topology_manager.go:215] "Topology Admit Handler" podUID="eb4f608165760b4d05def7e2edfddf50" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.077408 kubelet[3240]: I0625 18:30:54.077078 3240 topology_manager.go:215] "Topology Admit Handler" podUID="6ef70bdb750175280276046eb1196ed2" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.077408 kubelet[3240]: I0625 18:30:54.077131 3240 topology_manager.go:215] "Topology Admit Handler" podUID="86c3ff0ae6f54f6bfdb47ce8cce6d114" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.401388 kubelet[3240]: W0625 18:30:54.401247 3240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:30:54.404967 kubelet[3240]: W0625 18:30:54.404821 3240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:30:54.404967 kubelet[3240]: W0625 18:30:54.404874 3240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:30:54.496750 kubelet[3240]: I0625 
18:30:54.496702 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb4f608165760b4d05def7e2edfddf50-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-a-5284b277fa\" (UID: \"eb4f608165760b4d05def7e2edfddf50\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.496750 kubelet[3240]: I0625 18:30:54.496742 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb4f608165760b4d05def7e2edfddf50-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-a-5284b277fa\" (UID: \"eb4f608165760b4d05def7e2edfddf50\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.496750 kubelet[3240]: I0625 18:30:54.496762 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ef70bdb750175280276046eb1196ed2-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-5284b277fa\" (UID: \"6ef70bdb750175280276046eb1196ed2\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.496946 kubelet[3240]: I0625 18:30:54.496778 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ef70bdb750175280276046eb1196ed2-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-a-5284b277fa\" (UID: \"6ef70bdb750175280276046eb1196ed2\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.496946 kubelet[3240]: I0625 18:30:54.496797 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ef70bdb750175280276046eb1196ed2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-a-5284b277fa\" (UID: \"6ef70bdb750175280276046eb1196ed2\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.496946 kubelet[3240]: I0625 18:30:54.496814 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb4f608165760b4d05def7e2edfddf50-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-a-5284b277fa\" (UID: \"eb4f608165760b4d05def7e2edfddf50\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.496946 kubelet[3240]: I0625 18:30:54.496828 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ef70bdb750175280276046eb1196ed2-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-a-5284b277fa\" (UID: \"6ef70bdb750175280276046eb1196ed2\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.496946 kubelet[3240]: I0625 18:30:54.496844 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ef70bdb750175280276046eb1196ed2-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-5284b277fa\" (UID: \"6ef70bdb750175280276046eb1196ed2\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.497058 kubelet[3240]: I0625 18:30:54.496858 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86c3ff0ae6f54f6bfdb47ce8cce6d114-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-a-5284b277fa\" (UID: \"86c3ff0ae6f54f6bfdb47ce8cce6d114\") " pod="kube-system/kube-scheduler-ci-4012.0.0-a-5284b277fa" Jun 25 18:30:54.926273 kubelet[3240]: I0625 18:30:54.926172 3240 apiserver.go:52] "Watching apiserver" Jun 25 18:30:54.961602 kubelet[3240]: I0625 18:30:54.961550 3240 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 18:30:55.026966 kubelet[3240]: I0625 18:30:55.026806 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012.0.0-a-5284b277fa" podStartSLOduration=1.026789734 podStartE2EDuration="1.026789734s" podCreationTimestamp="2024-06-25 18:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:30:55.020618957 +0000 UTC m=+1.175434442" watchObservedRunningTime="2024-06-25 18:30:55.026789734 +0000 UTC m=+1.181605179" Jun 25 18:30:55.036497 kubelet[3240]: I0625 18:30:55.036056 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012.0.0-a-5284b277fa" podStartSLOduration=1.036043198 podStartE2EDuration="1.036043198s" podCreationTimestamp="2024-06-25 18:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:30:55.027472575 +0000 UTC m=+1.182288020" watchObservedRunningTime="2024-06-25 18:30:55.036043198 +0000 UTC m=+1.190858643" Jun 25 18:30:55.036497 kubelet[3240]: I0625 18:30:55.036148 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.0.0-a-5284b277fa" podStartSLOduration=1.036144439 podStartE2EDuration="1.036144439s" podCreationTimestamp="2024-06-25 18:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:30:55.035595517 +0000 UTC m=+1.190410962" watchObservedRunningTime="2024-06-25 18:30:55.036144439 +0000 UTC m=+1.190959884" Jun 25 18:30:59.074177 sudo[2296]: pam_unix(sudo:session): session closed for user root Jun 25 18:30:59.161520 sshd[2293]: pam_unix(sshd:session): session closed for user core Jun 25 18:30:59.164359 systemd[1]: sshd@6-10.200.20.36:22-10.200.16.10:48960.service: Deactivated successfully. Jun 25 18:30:59.166078 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:30:59.166385 systemd[1]: session-9.scope: Consumed 5.977s CPU time, 136.4M memory peak, 0B memory swap peak. Jun 25 18:30:59.168123 systemd-logind[1683]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:30:59.169393 systemd-logind[1683]: Removed session 9. Jun 25 18:31:10.137906 kubelet[3240]: I0625 18:31:10.137727 3240 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:31:10.138293 containerd[1730]: time="2024-06-25T18:31:10.138030788Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 25 18:31:10.139681 kubelet[3240]: I0625 18:31:10.139320 3240 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:31:10.244008 kubelet[3240]: I0625 18:31:10.242220 3240 topology_manager.go:215] "Topology Admit Handler" podUID="487ecc8d-b112-4666-a134-609800215caa" podNamespace="kube-system" podName="kube-proxy-926jb" Jun 25 18:31:10.255879 systemd[1]: Created slice kubepods-besteffort-pod487ecc8d_b112_4666_a134_609800215caa.slice - libcontainer container kubepods-besteffort-pod487ecc8d_b112_4666_a134_609800215caa.slice. Jun 25 18:31:10.282368 kubelet[3240]: I0625 18:31:10.282334 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/487ecc8d-b112-4666-a134-609800215caa-lib-modules\") pod \"kube-proxy-926jb\" (UID: \"487ecc8d-b112-4666-a134-609800215caa\") " pod="kube-system/kube-proxy-926jb" Jun 25 18:31:10.282649 kubelet[3240]: I0625 18:31:10.282527 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/487ecc8d-b112-4666-a134-609800215caa-kube-proxy\") pod \"kube-proxy-926jb\" (UID: \"487ecc8d-b112-4666-a134-609800215caa\") " pod="kube-system/kube-proxy-926jb" Jun 25 18:31:10.282649 kubelet[3240]: I0625 18:31:10.282554 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/487ecc8d-b112-4666-a134-609800215caa-xtables-lock\") pod \"kube-proxy-926jb\" (UID: \"487ecc8d-b112-4666-a134-609800215caa\") " pod="kube-system/kube-proxy-926jb" Jun 25 18:31:10.282649 kubelet[3240]: I0625 18:31:10.282571 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkgjf\" (UniqueName: \"kubernetes.io/projected/487ecc8d-b112-4666-a134-609800215caa-kube-api-access-kkgjf\") pod \"kube-proxy-926jb\" (UID: \"487ecc8d-b112-4666-a134-609800215caa\") " pod="kube-system/kube-proxy-926jb" Jun 25 18:31:10.311643 kubelet[3240]: I0625 18:31:10.310667 3240 topology_manager.go:215] "Topology Admit Handler" podUID="23458f76-9bf7-460c-b90c-4dd8c3b36cab" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-fkxh2" Jun 25 18:31:10.316821 kubelet[3240]: W0625 18:31:10.316790 3240 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4012.0.0-a-5284b277fa" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4012.0.0-a-5284b277fa' and this object Jun 25 18:31:10.317030 kubelet[3240]: W0625 18:31:10.316988 3240 reflector.go:547] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4012.0.0-a-5284b277fa" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4012.0.0-a-5284b277fa' and this object Jun 25 18:31:10.317171 kubelet[3240]: E0625 18:31:10.317158 3240 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4012.0.0-a-5284b277fa" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no 
relationship found between node 'ci-4012.0.0-a-5284b277fa' and this object Jun 25 18:31:10.317256 kubelet[3240]: E0625 18:31:10.317245 3240 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4012.0.0-a-5284b277fa" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4012.0.0-a-5284b277fa' and this object Jun 25 18:31:10.320130 systemd[1]: Created slice kubepods-besteffort-pod23458f76_9bf7_460c_b90c_4dd8c3b36cab.slice - libcontainer container kubepods-besteffort-pod23458f76_9bf7_460c_b90c_4dd8c3b36cab.slice. Jun 25 18:31:10.384384 kubelet[3240]: I0625 18:31:10.383018 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pgcb\" (UniqueName: \"kubernetes.io/projected/23458f76-9bf7-460c-b90c-4dd8c3b36cab-kube-api-access-5pgcb\") pod \"tigera-operator-76ff79f7fd-fkxh2\" (UID: \"23458f76-9bf7-460c-b90c-4dd8c3b36cab\") " pod="tigera-operator/tigera-operator-76ff79f7fd-fkxh2" Jun 25 18:31:10.384384 kubelet[3240]: I0625 18:31:10.383062 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23458f76-9bf7-460c-b90c-4dd8c3b36cab-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-fkxh2\" (UID: \"23458f76-9bf7-460c-b90c-4dd8c3b36cab\") " pod="tigera-operator/tigera-operator-76ff79f7fd-fkxh2" Jun 25 18:31:10.565951 containerd[1730]: time="2024-06-25T18:31:10.565876286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-926jb,Uid:487ecc8d-b112-4666-a134-609800215caa,Namespace:kube-system,Attempt:0,}" Jun 25 18:31:10.609548 containerd[1730]: time="2024-06-25T18:31:10.609412093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:31:10.609548 containerd[1730]: time="2024-06-25T18:31:10.609470053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:10.609824 containerd[1730]: time="2024-06-25T18:31:10.609581693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:31:10.610225 containerd[1730]: time="2024-06-25T18:31:10.610150254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:10.631630 systemd[1]: Started cri-containerd-89dd47176770ebfee4ae712138e133eb0f0e365364ae5778b2f57fbf8539658c.scope - libcontainer container 89dd47176770ebfee4ae712138e133eb0f0e365364ae5778b2f57fbf8539658c. 
Jun 25 18:31:10.652723 containerd[1730]: time="2024-06-25T18:31:10.652549379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-926jb,Uid:487ecc8d-b112-4666-a134-609800215caa,Namespace:kube-system,Attempt:0,} returns sandbox id \"89dd47176770ebfee4ae712138e133eb0f0e365364ae5778b2f57fbf8539658c\"" Jun 25 18:31:10.655764 containerd[1730]: time="2024-06-25T18:31:10.655662506Z" level=info msg="CreateContainer within sandbox \"89dd47176770ebfee4ae712138e133eb0f0e365364ae5778b2f57fbf8539658c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:31:10.698662 containerd[1730]: time="2024-06-25T18:31:10.698620272Z" level=info msg="CreateContainer within sandbox \"89dd47176770ebfee4ae712138e133eb0f0e365364ae5778b2f57fbf8539658c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"43e74bfa1b17892cb19da4bf61f51db77b6675c4da582a241214bc5fc90ebaf5\"" Jun 25 18:31:10.700474 containerd[1730]: time="2024-06-25T18:31:10.699418193Z" level=info msg="StartContainer for \"43e74bfa1b17892cb19da4bf61f51db77b6675c4da582a241214bc5fc90ebaf5\"" Jun 25 18:31:10.727516 systemd[1]: Started cri-containerd-43e74bfa1b17892cb19da4bf61f51db77b6675c4da582a241214bc5fc90ebaf5.scope - libcontainer container 43e74bfa1b17892cb19da4bf61f51db77b6675c4da582a241214bc5fc90ebaf5. Jun 25 18:31:10.756965 containerd[1730]: time="2024-06-25T18:31:10.756383747Z" level=info msg="StartContainer for \"43e74bfa1b17892cb19da4bf61f51db77b6675c4da582a241214bc5fc90ebaf5\" returns successfully" Jun 25 18:31:11.524607 containerd[1730]: time="2024-06-25T18:31:11.524559767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-fkxh2,Uid:23458f76-9bf7-460c-b90c-4dd8c3b36cab,Namespace:tigera-operator,Attempt:0,}" Jun 25 18:31:11.566490 containerd[1730]: time="2024-06-25T18:31:11.566353171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:31:11.566490 containerd[1730]: time="2024-06-25T18:31:11.566419651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:11.566490 containerd[1730]: time="2024-06-25T18:31:11.566442771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:31:11.566490 containerd[1730]: time="2024-06-25T18:31:11.566457331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:11.583510 systemd[1]: run-containerd-runc-k8s.io-c80e8d1a5ce37c3c42dc044a2fccc571d60d8f9de86dce518d995b02b0871490-runc.9Icd2L.mount: Deactivated successfully. Jun 25 18:31:11.594504 systemd[1]: Started cri-containerd-c80e8d1a5ce37c3c42dc044a2fccc571d60d8f9de86dce518d995b02b0871490.scope - libcontainer container c80e8d1a5ce37c3c42dc044a2fccc571d60d8f9de86dce518d995b02b0871490. 
Jun 25 18:31:11.620966 containerd[1730]: time="2024-06-25T18:31:11.620921161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-fkxh2,Uid:23458f76-9bf7-460c-b90c-4dd8c3b36cab,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c80e8d1a5ce37c3c42dc044a2fccc571d60d8f9de86dce518d995b02b0871490\"" Jun 25 18:31:11.622608 containerd[1730]: time="2024-06-25T18:31:11.622573124Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 18:31:13.129913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3356187045.mount: Deactivated successfully. Jun 25 18:31:13.468354 containerd[1730]: time="2024-06-25T18:31:13.468006224Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:13.470526 containerd[1730]: time="2024-06-25T18:31:13.470494389Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473618" Jun 25 18:31:13.474288 containerd[1730]: time="2024-06-25T18:31:13.474252276Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:13.479548 containerd[1730]: time="2024-06-25T18:31:13.479463607Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:13.480489 containerd[1730]: time="2024-06-25T18:31:13.480376168Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.857763644s" Jun 25 18:31:13.480489 containerd[1730]: time="2024-06-25T18:31:13.480405568Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jun 25 18:31:13.482909 containerd[1730]: time="2024-06-25T18:31:13.482717773Z" level=info msg="CreateContainer within sandbox \"c80e8d1a5ce37c3c42dc044a2fccc571d60d8f9de86dce518d995b02b0871490\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 18:31:13.518977 containerd[1730]: time="2024-06-25T18:31:13.518895006Z" level=info msg="CreateContainer within sandbox \"c80e8d1a5ce37c3c42dc044a2fccc571d60d8f9de86dce518d995b02b0871490\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85\"" Jun 25 18:31:13.519573 containerd[1730]: time="2024-06-25T18:31:13.519448007Z" level=info msg="StartContainer for \"2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85\"" Jun 25 18:31:13.548518 systemd[1]: Started cri-containerd-2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85.scope - libcontainer container 2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85. 
Jun 25 18:31:13.574462 containerd[1730]: time="2024-06-25T18:31:13.574395477Z" level=info msg="StartContainer for \"2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85\" returns successfully" Jun 25 18:31:13.988384 kubelet[3240]: I0625 18:31:13.988038 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-926jb" podStartSLOduration=3.988021639 podStartE2EDuration="3.988021639s" podCreationTimestamp="2024-06-25 18:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:31:11.030587737 +0000 UTC m=+17.185403182" watchObservedRunningTime="2024-06-25 18:31:13.988021639 +0000 UTC m=+20.142837084" Jun 25 18:31:14.039340 kubelet[3240]: I0625 18:31:14.038948 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-fkxh2" podStartSLOduration=2.179732885 podStartE2EDuration="4.038928892s" podCreationTimestamp="2024-06-25 18:31:10 +0000 UTC" firstStartedPulling="2024-06-25 18:31:11.622185843 +0000 UTC m=+17.777001248" lastFinishedPulling="2024-06-25 18:31:13.48138181 +0000 UTC m=+19.636197255" observedRunningTime="2024-06-25 18:31:14.03822357 +0000 UTC m=+20.193038975" watchObservedRunningTime="2024-06-25 18:31:14.038928892 +0000 UTC m=+20.193744337" Jun 25 18:31:17.012378 kubelet[3240]: I0625 18:31:17.012323 3240 topology_manager.go:215] "Topology Admit Handler" podUID="73b5f527-b192-4243-84fd-859ba2824e87" podNamespace="calico-system" podName="calico-typha-66b7657474-sxsmx" Jun 25 18:31:17.018855 systemd[1]: Created slice kubepods-besteffort-pod73b5f527_b192_4243_84fd_859ba2824e87.slice - libcontainer container kubepods-besteffort-pod73b5f527_b192_4243_84fd_859ba2824e87.slice. Jun 25 18:31:17.078234 kubelet[3240]: I0625 18:31:17.077087 3240 topology_manager.go:215] "Topology Admit Handler" podUID="5337cb80-2a81-49f7-8368-b41f510b0dab" podNamespace="calico-system" podName="calico-node-zbfkn" Jun 25 18:31:17.085636 systemd[1]: Created slice kubepods-besteffort-pod5337cb80_2a81_49f7_8368_b41f510b0dab.slice - libcontainer container kubepods-besteffort-pod5337cb80_2a81_49f7_8368_b41f510b0dab.slice. 
Jun 25 18:31:17.124486 kubelet[3240]: I0625 18:31:17.124446 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtgwt\" (UniqueName: \"kubernetes.io/projected/73b5f527-b192-4243-84fd-859ba2824e87-kube-api-access-vtgwt\") pod \"calico-typha-66b7657474-sxsmx\" (UID: \"73b5f527-b192-4243-84fd-859ba2824e87\") " pod="calico-system/calico-typha-66b7657474-sxsmx" Jun 25 18:31:17.124486 kubelet[3240]: I0625 18:31:17.124490 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5337cb80-2a81-49f7-8368-b41f510b0dab-tigera-ca-bundle\") pod \"calico-node-zbfkn\" (UID: \"5337cb80-2a81-49f7-8368-b41f510b0dab\") " pod="calico-system/calico-node-zbfkn" Jun 25 18:31:17.124655 kubelet[3240]: I0625 18:31:17.124514 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73b5f527-b192-4243-84fd-859ba2824e87-tigera-ca-bundle\") pod \"calico-typha-66b7657474-sxsmx\" (UID: \"73b5f527-b192-4243-84fd-859ba2824e87\") " pod="calico-system/calico-typha-66b7657474-sxsmx" Jun 25 18:31:17.124655 kubelet[3240]: I0625 18:31:17.124534 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5337cb80-2a81-49f7-8368-b41f510b0dab-cni-log-dir\") pod \"calico-node-zbfkn\" (UID: \"5337cb80-2a81-49f7-8368-b41f510b0dab\") " pod="calico-system/calico-node-zbfkn" Jun 25 18:31:17.124655 kubelet[3240]: I0625 18:31:17.124550 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5337cb80-2a81-49f7-8368-b41f510b0dab-flexvol-driver-host\") pod \"calico-node-zbfkn\" (UID: \"5337cb80-2a81-49f7-8368-b41f510b0dab\") " pod="calico-system/calico-node-zbfkn" Jun 25 18:31:17.124655 kubelet[3240]: I0625 18:31:17.124567 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5337cb80-2a81-49f7-8368-b41f510b0dab-var-run-calico\") pod \"calico-node-zbfkn\" (UID: \"5337cb80-2a81-49f7-8368-b41f510b0dab\") " pod="calico-system/calico-node-zbfkn" Jun 25 18:31:17.124655 kubelet[3240]: I0625 18:31:17.124583 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5337cb80-2a81-49f7-8368-b41f510b0dab-cni-bin-dir\") pod \"calico-node-zbfkn\" (UID: \"5337cb80-2a81-49f7-8368-b41f510b0dab\") " pod="calico-system/calico-node-zbfkn" Jun 25 18:31:17.124765 kubelet[3240]: I0625 18:31:17.124598 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5337cb80-2a81-49f7-8368-b41f510b0dab-lib-modules\") pod \"calico-node-zbfkn\" (UID: \"5337cb80-2a81-49f7-8368-b41f510b0dab\") " pod="calico-system/calico-node-zbfkn" Jun 25 18:31:17.124765 kubelet[3240]: I0625 18:31:17.124612 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5337cb80-2a81-49f7-8368-b41f510b0dab-cni-net-dir\") pod \"calico-node-zbfkn\" (UID: \"5337cb80-2a81-49f7-8368-b41f510b0dab\") " pod="calico-system/calico-node-zbfkn" Jun 25 
18:31:17.124765 kubelet[3240]: I0625 18:31:17.124629 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5337cb80-2a81-49f7-8368-b41f510b0dab-policysync\") pod \"calico-node-zbfkn\" (UID: \"5337cb80-2a81-49f7-8368-b41f510b0dab\") " pod="calico-system/calico-node-zbfkn" Jun 25 18:31:17.124765 kubelet[3240]: I0625 18:31:17.124645 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vlm8\" (UniqueName: \"kubernetes.io/projected/5337cb80-2a81-49f7-8368-b41f510b0dab-kube-api-access-7vlm8\") pod \"calico-node-zbfkn\" (UID: \"5337cb80-2a81-49f7-8368-b41f510b0dab\") " pod="calico-system/calico-node-zbfkn" Jun 25 18:31:17.124765 kubelet[3240]: I0625 18:31:17.124660 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/73b5f527-b192-4243-84fd-859ba2824e87-typha-certs\") pod \"calico-typha-66b7657474-sxsmx\" (UID: \"73b5f527-b192-4243-84fd-859ba2824e87\") " pod="calico-system/calico-typha-66b7657474-sxsmx" Jun 25 18:31:17.124916 kubelet[3240]: I0625 18:31:17.124676 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5337cb80-2a81-49f7-8368-b41f510b0dab-xtables-lock\") pod \"calico-node-zbfkn\" (UID: \"5337cb80-2a81-49f7-8368-b41f510b0dab\") " pod="calico-system/calico-node-zbfkn" Jun 25 18:31:17.124916 kubelet[3240]: I0625 18:31:17.124690 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5337cb80-2a81-49f7-8368-b41f510b0dab-node-certs\") pod \"calico-node-zbfkn\" (UID: \"5337cb80-2a81-49f7-8368-b41f510b0dab\") " pod="calico-system/calico-node-zbfkn" Jun 25 18:31:17.124916 kubelet[3240]: I0625 18:31:17.124703 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5337cb80-2a81-49f7-8368-b41f510b0dab-var-lib-calico\") pod \"calico-node-zbfkn\" (UID: \"5337cb80-2a81-49f7-8368-b41f510b0dab\") " pod="calico-system/calico-node-zbfkn" Jun 25 18:31:17.192423 kubelet[3240]: I0625 18:31:17.191940 3240 topology_manager.go:215] "Topology Admit Handler" podUID="d7c63612-a464-4ae6-b2ed-1cf040476205" podNamespace="calico-system" podName="csi-node-driver-9fsbs" Jun 25 18:31:17.193415 kubelet[3240]: E0625 18:31:17.192936 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fsbs" podUID="d7c63612-a464-4ae6-b2ed-1cf040476205" Jun 25 18:31:17.242355 kubelet[3240]: E0625 18:31:17.241843 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.242355 kubelet[3240]: W0625 18:31:17.241869 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.242355 kubelet[3240]: E0625 18:31:17.241894 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.242688 kubelet[3240]: E0625 18:31:17.242660 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.242688 kubelet[3240]: W0625 18:31:17.242684 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.242762 kubelet[3240]: E0625 18:31:17.242698 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.262340 kubelet[3240]: E0625 18:31:17.260371 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.262340 kubelet[3240]: W0625 18:31:17.260402 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.262340 kubelet[3240]: E0625 18:31:17.260420 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.270765 kubelet[3240]: E0625 18:31:17.270676 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.270765 kubelet[3240]: W0625 18:31:17.270697 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.270765 kubelet[3240]: E0625 18:31:17.270726 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.291458 kubelet[3240]: E0625 18:31:17.291274 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.291458 kubelet[3240]: W0625 18:31:17.291335 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.291458 kubelet[3240]: E0625 18:31:17.291359 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.291634 kubelet[3240]: E0625 18:31:17.291554 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.291634 kubelet[3240]: W0625 18:31:17.291569 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.291634 kubelet[3240]: E0625 18:31:17.291580 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:17.291868 kubelet[3240]: E0625 18:31:17.291762 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.291868 kubelet[3240]: W0625 18:31:17.291776 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.291868 kubelet[3240]: E0625 18:31:17.291786 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.291944 kubelet[3240]: E0625 18:31:17.291936 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.291964 kubelet[3240]: W0625 18:31:17.291943 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.291964 kubelet[3240]: E0625 18:31:17.291951 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.292258 kubelet[3240]: E0625 18:31:17.292137 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.292258 kubelet[3240]: W0625 18:31:17.292150 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.292258 kubelet[3240]: E0625 18:31:17.292160 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.293158 kubelet[3240]: E0625 18:31:17.292409 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.293158 kubelet[3240]: W0625 18:31:17.292424 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.293158 kubelet[3240]: E0625 18:31:17.292448 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.293158 kubelet[3240]: E0625 18:31:17.292592 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.293158 kubelet[3240]: W0625 18:31:17.292623 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.293158 kubelet[3240]: E0625 18:31:17.292632 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:17.293158 kubelet[3240]: E0625 18:31:17.292786 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.293158 kubelet[3240]: W0625 18:31:17.292794 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.293158 kubelet[3240]: E0625 18:31:17.292803 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.293158 kubelet[3240]: E0625 18:31:17.292971 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.293389 kubelet[3240]: W0625 18:31:17.293002 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.293389 kubelet[3240]: E0625 18:31:17.293012 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.293389 kubelet[3240]: E0625 18:31:17.293174 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.293389 kubelet[3240]: W0625 18:31:17.293182 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.293389 kubelet[3240]: E0625 18:31:17.293191 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.293495 kubelet[3240]: E0625 18:31:17.293445 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.293495 kubelet[3240]: W0625 18:31:17.293454 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.293495 kubelet[3240]: E0625 18:31:17.293482 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.293668 kubelet[3240]: E0625 18:31:17.293647 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.293668 kubelet[3240]: W0625 18:31:17.293661 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.293719 kubelet[3240]: E0625 18:31:17.293670 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:17.293895 kubelet[3240]: E0625 18:31:17.293877 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.293938 kubelet[3240]: W0625 18:31:17.293890 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.293938 kubelet[3240]: E0625 18:31:17.293910 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.294108 kubelet[3240]: E0625 18:31:17.294087 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.294108 kubelet[3240]: W0625 18:31:17.294101 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.294171 kubelet[3240]: E0625 18:31:17.294111 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.294636 kubelet[3240]: E0625 18:31:17.294298 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.294636 kubelet[3240]: W0625 18:31:17.294326 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.294636 kubelet[3240]: E0625 18:31:17.294342 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.294636 kubelet[3240]: E0625 18:31:17.294516 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.294636 kubelet[3240]: W0625 18:31:17.294524 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.294636 kubelet[3240]: E0625 18:31:17.294533 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.295203 kubelet[3240]: E0625 18:31:17.295168 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.295203 kubelet[3240]: W0625 18:31:17.295189 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.295203 kubelet[3240]: E0625 18:31:17.295201 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:17.295438 kubelet[3240]: E0625 18:31:17.295415 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.295438 kubelet[3240]: W0625 18:31:17.295430 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.295438 kubelet[3240]: E0625 18:31:17.295439 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.295950 kubelet[3240]: E0625 18:31:17.295738 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.295950 kubelet[3240]: W0625 18:31:17.295754 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.295950 kubelet[3240]: E0625 18:31:17.295764 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.296216 kubelet[3240]: E0625 18:31:17.296192 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.296216 kubelet[3240]: W0625 18:31:17.296209 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.296293 kubelet[3240]: E0625 18:31:17.296221 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.324068 containerd[1730]: time="2024-06-25T18:31:17.324018891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66b7657474-sxsmx,Uid:73b5f527-b192-4243-84fd-859ba2824e87,Namespace:calico-system,Attempt:0,}" Jun 25 18:31:17.336162 kubelet[3240]: E0625 18:31:17.336126 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.336162 kubelet[3240]: W0625 18:31:17.336150 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.336293 kubelet[3240]: E0625 18:31:17.336170 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:17.336293 kubelet[3240]: I0625 18:31:17.336200 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d7c63612-a464-4ae6-b2ed-1cf040476205-varrun\") pod \"csi-node-driver-9fsbs\" (UID: \"d7c63612-a464-4ae6-b2ed-1cf040476205\") " pod="calico-system/csi-node-driver-9fsbs" Jun 25 18:31:17.336441 kubelet[3240]: E0625 18:31:17.336419 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.336441 kubelet[3240]: W0625 18:31:17.336437 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.336654 kubelet[3240]: E0625 18:31:17.336449 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.336654 kubelet[3240]: I0625 18:31:17.336466 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7c63612-a464-4ae6-b2ed-1cf040476205-kubelet-dir\") pod \"csi-node-driver-9fsbs\" (UID: \"d7c63612-a464-4ae6-b2ed-1cf040476205\") " pod="calico-system/csi-node-driver-9fsbs" Jun 25 18:31:17.336862 kubelet[3240]: E0625 18:31:17.336828 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.336862 kubelet[3240]: W0625 18:31:17.336853 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.336938 kubelet[3240]: E0625 18:31:17.336896 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.336938 kubelet[3240]: I0625 18:31:17.336919 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d7c63612-a464-4ae6-b2ed-1cf040476205-socket-dir\") pod \"csi-node-driver-9fsbs\" (UID: \"d7c63612-a464-4ae6-b2ed-1cf040476205\") " pod="calico-system/csi-node-driver-9fsbs" Jun 25 18:31:17.337290 kubelet[3240]: E0625 18:31:17.337267 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.337290 kubelet[3240]: W0625 18:31:17.337284 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.337290 kubelet[3240]: E0625 18:31:17.337301 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:17.337411 kubelet[3240]: I0625 18:31:17.337328 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrxtj\" (UniqueName: \"kubernetes.io/projected/d7c63612-a464-4ae6-b2ed-1cf040476205-kube-api-access-xrxtj\") pod \"csi-node-driver-9fsbs\" (UID: \"d7c63612-a464-4ae6-b2ed-1cf040476205\") " pod="calico-system/csi-node-driver-9fsbs" Jun 25 18:31:17.337638 kubelet[3240]: E0625 18:31:17.337502 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.337638 kubelet[3240]: W0625 18:31:17.337514 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.337638 kubelet[3240]: E0625 18:31:17.337524 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.337638 kubelet[3240]: I0625 18:31:17.337540 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d7c63612-a464-4ae6-b2ed-1cf040476205-registration-dir\") pod \"csi-node-driver-9fsbs\" (UID: \"d7c63612-a464-4ae6-b2ed-1cf040476205\") " pod="calico-system/csi-node-driver-9fsbs" Jun 25 18:31:17.337902 kubelet[3240]: E0625 18:31:17.337808 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.337902 kubelet[3240]: W0625 18:31:17.337819 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.338042 kubelet[3240]: E0625 18:31:17.337987 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.338461 kubelet[3240]: E0625 18:31:17.338407 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.338461 kubelet[3240]: W0625 18:31:17.338427 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.338461 kubelet[3240]: E0625 18:31:17.338446 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.338832 kubelet[3240]: E0625 18:31:17.338624 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.338832 kubelet[3240]: W0625 18:31:17.338634 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.338832 kubelet[3240]: E0625 18:31:17.338649 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:17.338832 kubelet[3240]: E0625 18:31:17.338798 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.338832 kubelet[3240]: W0625 18:31:17.338806 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.338832 kubelet[3240]: E0625 18:31:17.338822 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.339185 kubelet[3240]: E0625 18:31:17.338974 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.339185 kubelet[3240]: W0625 18:31:17.338983 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.339185 kubelet[3240]: E0625 18:31:17.338996 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.339185 kubelet[3240]: E0625 18:31:17.339129 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.339185 kubelet[3240]: W0625 18:31:17.339136 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.339185 kubelet[3240]: E0625 18:31:17.339144 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.339592 kubelet[3240]: E0625 18:31:17.339568 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.339592 kubelet[3240]: W0625 18:31:17.339584 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.339666 kubelet[3240]: E0625 18:31:17.339596 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.339796 kubelet[3240]: E0625 18:31:17.339779 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.339796 kubelet[3240]: W0625 18:31:17.339792 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.339876 kubelet[3240]: E0625 18:31:17.339801 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:17.339968 kubelet[3240]: E0625 18:31:17.339952 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.339968 kubelet[3240]: W0625 18:31:17.339964 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.340043 kubelet[3240]: E0625 18:31:17.339973 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.340146 kubelet[3240]: E0625 18:31:17.340130 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.340146 kubelet[3240]: W0625 18:31:17.340144 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.340207 kubelet[3240]: E0625 18:31:17.340153 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.371987 containerd[1730]: time="2024-06-25T18:31:17.371798684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:31:17.371987 containerd[1730]: time="2024-06-25T18:31:17.371874885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:17.372486 containerd[1730]: time="2024-06-25T18:31:17.371893765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:31:17.372486 containerd[1730]: time="2024-06-25T18:31:17.371940125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:17.388485 systemd[1]: Started cri-containerd-5546d2168bf23c6494987caa04f46e446e3dcca07d45c0b3e67b1fb59187fc36.scope - libcontainer container 5546d2168bf23c6494987caa04f46e446e3dcca07d45c0b3e67b1fb59187fc36. 
Jun 25 18:31:17.390739 containerd[1730]: time="2024-06-25T18:31:17.390620129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zbfkn,Uid:5337cb80-2a81-49f7-8368-b41f510b0dab,Namespace:calico-system,Attempt:0,}" Jun 25 18:31:17.433342 containerd[1730]: time="2024-06-25T18:31:17.433204951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66b7657474-sxsmx,Uid:73b5f527-b192-4243-84fd-859ba2824e87,Namespace:calico-system,Attempt:0,} returns sandbox id \"5546d2168bf23c6494987caa04f46e446e3dcca07d45c0b3e67b1fb59187fc36\"" Jun 25 18:31:17.436591 containerd[1730]: time="2024-06-25T18:31:17.436374638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 18:31:17.439073 kubelet[3240]: E0625 18:31:17.438937 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.439402 kubelet[3240]: W0625 18:31:17.439377 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.439749 kubelet[3240]: E0625 18:31:17.439406 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.440237 kubelet[3240]: E0625 18:31:17.440141 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.440237 kubelet[3240]: W0625 18:31:17.440157 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.440237 kubelet[3240]: E0625 18:31:17.440180 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.440649 kubelet[3240]: E0625 18:31:17.440387 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.440649 kubelet[3240]: W0625 18:31:17.440408 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.440649 kubelet[3240]: E0625 18:31:17.440426 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.441488 kubelet[3240]: E0625 18:31:17.441139 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.441488 kubelet[3240]: W0625 18:31:17.441156 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.441488 kubelet[3240]: E0625 18:31:17.441227 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:17.441945 kubelet[3240]: E0625 18:31:17.441718 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.441945 kubelet[3240]: W0625 18:31:17.441730 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.441945 kubelet[3240]: E0625 18:31:17.441773 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.441945 kubelet[3240]: E0625 18:31:17.441911 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.441945 kubelet[3240]: W0625 18:31:17.441919 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.442211 kubelet[3240]: E0625 18:31:17.442083 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.442750 kubelet[3240]: E0625 18:31:17.442086 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.442750 kubelet[3240]: W0625 18:31:17.442395 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.442750 kubelet[3240]: E0625 18:31:17.442640 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.443381 kubelet[3240]: E0625 18:31:17.443164 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.443381 kubelet[3240]: W0625 18:31:17.443184 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.443381 kubelet[3240]: E0625 18:31:17.443350 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.443688 kubelet[3240]: E0625 18:31:17.443478 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.443688 kubelet[3240]: W0625 18:31:17.443519 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.443688 kubelet[3240]: E0625 18:31:17.443624 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:17.444015 kubelet[3240]: E0625 18:31:17.443743 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.444015 kubelet[3240]: W0625 18:31:17.443752 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.444015 kubelet[3240]: E0625 18:31:17.443835 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.444940 kubelet[3240]: E0625 18:31:17.444751 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.444940 kubelet[3240]: W0625 18:31:17.444773 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.444940 kubelet[3240]: E0625 18:31:17.444906 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.446397 kubelet[3240]: E0625 18:31:17.446364 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.446397 kubelet[3240]: W0625 18:31:17.446392 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.446612 kubelet[3240]: E0625 18:31:17.446557 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.446696 kubelet[3240]: E0625 18:31:17.446650 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.446696 kubelet[3240]: W0625 18:31:17.446660 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.446696 kubelet[3240]: E0625 18:31:17.446741 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.447648 kubelet[3240]: E0625 18:31:17.447594 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.447648 kubelet[3240]: W0625 18:31:17.447621 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.447932 kubelet[3240]: E0625 18:31:17.447899 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:17.448391 kubelet[3240]: E0625 18:31:17.448367 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.448391 kubelet[3240]: W0625 18:31:17.448385 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.448391 kubelet[3240]: E0625 18:31:17.448417 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.449555 containerd[1730]: time="2024-06-25T18:31:17.446904223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:31:17.449555 containerd[1730]: time="2024-06-25T18:31:17.446954384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:17.449555 containerd[1730]: time="2024-06-25T18:31:17.446972104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:31:17.449555 containerd[1730]: time="2024-06-25T18:31:17.446985624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:17.449690 kubelet[3240]: E0625 18:31:17.449593 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.449690 kubelet[3240]: W0625 18:31:17.449610 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.450874 kubelet[3240]: E0625 18:31:17.450846 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.451465 kubelet[3240]: E0625 18:31:17.451437 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.451465 kubelet[3240]: W0625 18:31:17.451456 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.451465 kubelet[3240]: E0625 18:31:17.451492 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.453289 kubelet[3240]: E0625 18:31:17.453109 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.453289 kubelet[3240]: W0625 18:31:17.453129 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.453582 kubelet[3240]: E0625 18:31:17.453347 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:17.453582 kubelet[3240]: E0625 18:31:17.453540 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.453582 kubelet[3240]: W0625 18:31:17.453551 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.454036 kubelet[3240]: E0625 18:31:17.453815 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.454419 kubelet[3240]: E0625 18:31:17.454296 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.454419 kubelet[3240]: W0625 18:31:17.454332 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.454726 kubelet[3240]: E0625 18:31:17.454520 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.455980 kubelet[3240]: E0625 18:31:17.455626 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.455980 kubelet[3240]: W0625 18:31:17.455648 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.456549 kubelet[3240]: E0625 18:31:17.456523 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.456549 kubelet[3240]: W0625 18:31:17.456544 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.460144 kubelet[3240]: E0625 18:31:17.459670 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.460144 kubelet[3240]: W0625 18:31:17.459688 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.460253 kubelet[3240]: E0625 18:31:17.460202 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.460253 kubelet[3240]: E0625 18:31:17.460227 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.460253 kubelet[3240]: E0625 18:31:17.460238 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:17.460470 kubelet[3240]: E0625 18:31:17.460458 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.460470 kubelet[3240]: W0625 18:31:17.460469 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.460586 kubelet[3240]: E0625 18:31:17.460505 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.460863 kubelet[3240]: E0625 18:31:17.460718 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.460863 kubelet[3240]: W0625 18:31:17.460737 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.460863 kubelet[3240]: E0625 18:31:17.460749 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.476946 kubelet[3240]: E0625 18:31:17.476783 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:17.476946 kubelet[3240]: W0625 18:31:17.476800 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:17.476946 kubelet[3240]: E0625 18:31:17.476819 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:17.478514 systemd[1]: Started cri-containerd-dc1d5c1b91adb8ebaa5b7054cb97a2e710f6b58e09d3dbd335b2624cb063a463.scope - libcontainer container dc1d5c1b91adb8ebaa5b7054cb97a2e710f6b58e09d3dbd335b2624cb063a463. 
Jun 25 18:31:17.512774 containerd[1730]: time="2024-06-25T18:31:17.512728420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zbfkn,Uid:5337cb80-2a81-49f7-8368-b41f510b0dab,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc1d5c1b91adb8ebaa5b7054cb97a2e710f6b58e09d3dbd335b2624cb063a463\"" Jun 25 18:31:18.977764 kubelet[3240]: E0625 18:31:18.977013 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fsbs" podUID="d7c63612-a464-4ae6-b2ed-1cf040476205" Jun 25 18:31:19.292875 containerd[1730]: time="2024-06-25T18:31:19.292046699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:19.294701 containerd[1730]: time="2024-06-25T18:31:19.294663785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jun 25 18:31:19.299637 containerd[1730]: time="2024-06-25T18:31:19.299538877Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:19.304669 containerd[1730]: time="2024-06-25T18:31:19.304603569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:19.305188 containerd[1730]: time="2024-06-25T18:31:19.305077730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 1.868664332s" Jun 25 18:31:19.305188 containerd[1730]: time="2024-06-25T18:31:19.305112290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jun 25 18:31:19.306577 containerd[1730]: time="2024-06-25T18:31:19.306545693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:31:19.321220 containerd[1730]: time="2024-06-25T18:31:19.321157448Z" level=info msg="CreateContainer within sandbox \"5546d2168bf23c6494987caa04f46e446e3dcca07d45c0b3e67b1fb59187fc36\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:31:19.371690 containerd[1730]: time="2024-06-25T18:31:19.371561048Z" level=info msg="CreateContainer within sandbox \"5546d2168bf23c6494987caa04f46e446e3dcca07d45c0b3e67b1fb59187fc36\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7813ad08d58b7f2284b14c20cdff5542a5d663aa53942aa11e2ea8cd00fc2509\"" Jun 25 18:31:19.373764 containerd[1730]: time="2024-06-25T18:31:19.373721613Z" level=info msg="StartContainer for \"7813ad08d58b7f2284b14c20cdff5542a5d663aa53942aa11e2ea8cd00fc2509\"" Jun 25 18:31:19.402502 systemd[1]: Started cri-containerd-7813ad08d58b7f2284b14c20cdff5542a5d663aa53942aa11e2ea8cd00fc2509.scope - libcontainer container 7813ad08d58b7f2284b14c20cdff5542a5d663aa53942aa11e2ea8cd00fc2509. 
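
Behind the FlexVolume noise, the lines above record a normal CRI start-up sequence for the calico-typha pod: RunPodSandbox returns the sandbox id beginning 5546d216, ghcr.io/flatcar/calico/typha:v3.28.0 is pulled (about 27 MB read in roughly 1.87 s), a calico-typha container is created in that sandbox and handed to StartContainer, whose successful return is logged just after this point. The single pod_workers error for csi-node-driver-9fsbs is expected at this stage, since calico-node has not yet installed the CNI plugin and the runtime still reports the node network as not ready. The Go sketch below replays the same CRI calls directly against the containerd socket using the k8s.io/cri-api client; it is an illustration rather than the kubelet's code, the socket path is the usual containerd default, and the metadata values are copied from the log.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Talk to containerd's CRI endpoint; the path is the common default and may differ.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox, mirroring the "RunPodSandbox for ... calico-typha-66b7657474-sxsmx" entry.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "calico-typha-66b7657474-sxsmx",
			Uid:       "73b5f527-b192-4243-84fd-859ba2824e87",
			Namespace: "calico-system",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. PullImage for the typha image referenced in the log.
	image := &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.28.0"}
	if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: image}); err != nil {
		log.Fatal(err)
	}

	// 3. CreateContainer inside the sandbox, then 4. StartContainer.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha"},
			Image:    image,
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s, container %s started", sb.PodSandboxId, ctr.ContainerId)
}

In the real flow the kubelet drives these calls, and containerd's runc shim backs each sandbox and container with the transient systemd scope units visible in the log (cri-containerd-<id>.scope).
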
Jun 25 18:31:19.444884 containerd[1730]: time="2024-06-25T18:31:19.444812303Z" level=info msg="StartContainer for \"7813ad08d58b7f2284b14c20cdff5542a5d663aa53942aa11e2ea8cd00fc2509\" returns successfully" Jun 25 18:31:20.111328 kubelet[3240]: E0625 18:31:20.111274 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.111328 kubelet[3240]: W0625 18:31:20.111297 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.111328 kubelet[3240]: E0625 18:31:20.111328 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.111716 kubelet[3240]: E0625 18:31:20.111535 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.111716 kubelet[3240]: W0625 18:31:20.111545 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.111716 kubelet[3240]: E0625 18:31:20.111556 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.111716 kubelet[3240]: E0625 18:31:20.111703 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.111716 kubelet[3240]: W0625 18:31:20.111711 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.111830 kubelet[3240]: E0625 18:31:20.111721 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.111882 kubelet[3240]: E0625 18:31:20.111856 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.111882 kubelet[3240]: W0625 18:31:20.111873 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.111882 kubelet[3240]: E0625 18:31:20.111882 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:20.112047 kubelet[3240]: E0625 18:31:20.112028 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.112047 kubelet[3240]: W0625 18:31:20.112042 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.112094 kubelet[3240]: E0625 18:31:20.112051 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.112196 kubelet[3240]: E0625 18:31:20.112181 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.112196 kubelet[3240]: W0625 18:31:20.112194 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.112247 kubelet[3240]: E0625 18:31:20.112202 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.112405 kubelet[3240]: E0625 18:31:20.112360 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.112405 kubelet[3240]: W0625 18:31:20.112372 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.112405 kubelet[3240]: E0625 18:31:20.112381 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.112537 kubelet[3240]: E0625 18:31:20.112518 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.112537 kubelet[3240]: W0625 18:31:20.112532 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.112594 kubelet[3240]: E0625 18:31:20.112576 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.112799 kubelet[3240]: E0625 18:31:20.112781 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.112799 kubelet[3240]: W0625 18:31:20.112795 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.112863 kubelet[3240]: E0625 18:31:20.112804 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:20.112956 kubelet[3240]: E0625 18:31:20.112940 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.112956 kubelet[3240]: W0625 18:31:20.112953 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.113009 kubelet[3240]: E0625 18:31:20.112961 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.113106 kubelet[3240]: E0625 18:31:20.113090 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.113106 kubelet[3240]: W0625 18:31:20.113104 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.113153 kubelet[3240]: E0625 18:31:20.113112 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.113242 kubelet[3240]: E0625 18:31:20.113227 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.113242 kubelet[3240]: W0625 18:31:20.113239 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.113291 kubelet[3240]: E0625 18:31:20.113248 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.113429 kubelet[3240]: E0625 18:31:20.113414 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.113429 kubelet[3240]: W0625 18:31:20.113427 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.113486 kubelet[3240]: E0625 18:31:20.113438 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.113583 kubelet[3240]: E0625 18:31:20.113568 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.113583 kubelet[3240]: W0625 18:31:20.113580 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.113635 kubelet[3240]: E0625 18:31:20.113590 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:20.113810 kubelet[3240]: E0625 18:31:20.113793 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.113810 kubelet[3240]: W0625 18:31:20.113806 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.113920 kubelet[3240]: E0625 18:31:20.113815 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.159699 kubelet[3240]: E0625 18:31:20.159521 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.159699 kubelet[3240]: W0625 18:31:20.159543 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.159699 kubelet[3240]: E0625 18:31:20.159563 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.159955 kubelet[3240]: E0625 18:31:20.159932 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.159994 kubelet[3240]: W0625 18:31:20.159967 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.159994 kubelet[3240]: E0625 18:31:20.159982 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.160174 kubelet[3240]: E0625 18:31:20.160148 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.160174 kubelet[3240]: W0625 18:31:20.160164 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.160174 kubelet[3240]: E0625 18:31:20.160174 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.160408 kubelet[3240]: E0625 18:31:20.160381 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.160459 kubelet[3240]: W0625 18:31:20.160399 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.160485 kubelet[3240]: E0625 18:31:20.160457 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:20.160663 kubelet[3240]: E0625 18:31:20.160646 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.160663 kubelet[3240]: W0625 18:31:20.160661 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.160765 kubelet[3240]: E0625 18:31:20.160678 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.160889 kubelet[3240]: E0625 18:31:20.160873 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.160889 kubelet[3240]: W0625 18:31:20.160887 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.160991 kubelet[3240]: E0625 18:31:20.160902 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.161041 kubelet[3240]: E0625 18:31:20.161032 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.161041 kubelet[3240]: W0625 18:31:20.161040 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.161513 kubelet[3240]: E0625 18:31:20.161066 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.161513 kubelet[3240]: E0625 18:31:20.161172 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.161513 kubelet[3240]: W0625 18:31:20.161179 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.161513 kubelet[3240]: E0625 18:31:20.161291 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.161513 kubelet[3240]: W0625 18:31:20.161298 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.161513 kubelet[3240]: E0625 18:31:20.161307 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.161513 kubelet[3240]: E0625 18:31:20.161373 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:20.161513 kubelet[3240]: E0625 18:31:20.161449 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.161513 kubelet[3240]: W0625 18:31:20.161456 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.161513 kubelet[3240]: E0625 18:31:20.161472 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.161714 kubelet[3240]: E0625 18:31:20.161588 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.161714 kubelet[3240]: W0625 18:31:20.161598 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.161714 kubelet[3240]: E0625 18:31:20.161605 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.161775 kubelet[3240]: E0625 18:31:20.161740 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.161775 kubelet[3240]: W0625 18:31:20.161747 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.161775 kubelet[3240]: E0625 18:31:20.161754 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.162204 kubelet[3240]: E0625 18:31:20.162106 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.162204 kubelet[3240]: W0625 18:31:20.162122 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.162204 kubelet[3240]: E0625 18:31:20.162145 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.162537 kubelet[3240]: E0625 18:31:20.162469 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.162537 kubelet[3240]: W0625 18:31:20.162482 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.162537 kubelet[3240]: E0625 18:31:20.162501 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:20.162886 kubelet[3240]: E0625 18:31:20.162799 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.162886 kubelet[3240]: W0625 18:31:20.162811 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.162886 kubelet[3240]: E0625 18:31:20.162829 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.163362 kubelet[3240]: E0625 18:31:20.163216 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.163362 kubelet[3240]: W0625 18:31:20.163229 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.163362 kubelet[3240]: E0625 18:31:20.163250 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.163479 kubelet[3240]: E0625 18:31:20.163455 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.163528 kubelet[3240]: W0625 18:31:20.163505 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.163557 kubelet[3240]: E0625 18:31:20.163526 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:31:20.163890 kubelet[3240]: E0625 18:31:20.163845 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:31:20.163890 kubelet[3240]: W0625 18:31:20.163858 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:31:20.163890 kubelet[3240]: E0625 18:31:20.163869 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:31:20.440334 containerd[1730]: time="2024-06-25T18:31:20.440034433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:20.441995 containerd[1730]: time="2024-06-25T18:31:20.441963158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jun 25 18:31:20.445632 containerd[1730]: time="2024-06-25T18:31:20.445547567Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:20.449306 containerd[1730]: time="2024-06-25T18:31:20.449257295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:20.450269 containerd[1730]: time="2024-06-25T18:31:20.449794857Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.143205444s" Jun 25 18:31:20.450269 containerd[1730]: time="2024-06-25T18:31:20.449830777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jun 25 18:31:20.451840 containerd[1730]: time="2024-06-25T18:31:20.451713501Z" level=info msg="CreateContainer within sandbox \"dc1d5c1b91adb8ebaa5b7054cb97a2e710f6b58e09d3dbd335b2624cb063a463\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:31:20.489805 containerd[1730]: time="2024-06-25T18:31:20.489756952Z" level=info msg="CreateContainer within sandbox \"dc1d5c1b91adb8ebaa5b7054cb97a2e710f6b58e09d3dbd335b2624cb063a463\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"22d841e3791c1742fcb5d558df82fb04a26152632525154e7b42244b71141f45\"" Jun 25 18:31:20.490506 containerd[1730]: time="2024-06-25T18:31:20.490477274Z" level=info msg="StartContainer for \"22d841e3791c1742fcb5d558df82fb04a26152632525154e7b42244b71141f45\"" Jun 25 18:31:20.519696 systemd[1]: Started cri-containerd-22d841e3791c1742fcb5d558df82fb04a26152632525154e7b42244b71141f45.scope - libcontainer container 22d841e3791c1742fcb5d558df82fb04a26152632525154e7b42244b71141f45. Jun 25 18:31:20.550165 containerd[1730]: time="2024-06-25T18:31:20.550000295Z" level=info msg="StartContainer for \"22d841e3791c1742fcb5d558df82fb04a26152632525154e7b42244b71141f45\" returns successfully" Jun 25 18:31:20.556796 systemd[1]: cri-containerd-22d841e3791c1742fcb5d558df82fb04a26152632525154e7b42244b71141f45.scope: Deactivated successfully. Jun 25 18:31:20.583401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22d841e3791c1742fcb5d558df82fb04a26152632525154e7b42244b71141f45-rootfs.mount: Deactivated successfully. 
Jun 25 18:31:20.978046 kubelet[3240]: E0625 18:31:20.977714 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fsbs" podUID="d7c63612-a464-4ae6-b2ed-1cf040476205" Jun 25 18:31:21.050917 kubelet[3240]: I0625 18:31:21.050882 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:31:21.065713 kubelet[3240]: I0625 18:31:21.065645 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66b7657474-sxsmx" podStartSLOduration=3.195227988 podStartE2EDuration="5.065626484s" podCreationTimestamp="2024-06-25 18:31:16 +0000 UTC" firstStartedPulling="2024-06-25 18:31:17.435593676 +0000 UTC m=+23.590409121" lastFinishedPulling="2024-06-25 18:31:19.305992172 +0000 UTC m=+25.460807617" observedRunningTime="2024-06-25 18:31:20.057631963 +0000 UTC m=+26.212447408" watchObservedRunningTime="2024-06-25 18:31:21.065626484 +0000 UTC m=+27.220441929" Jun 25 18:31:21.475358 containerd[1730]: time="2024-06-25T18:31:21.475266060Z" level=info msg="shim disconnected" id=22d841e3791c1742fcb5d558df82fb04a26152632525154e7b42244b71141f45 namespace=k8s.io Jun 25 18:31:21.475358 containerd[1730]: time="2024-06-25T18:31:21.475352580Z" level=warning msg="cleaning up after shim disconnected" id=22d841e3791c1742fcb5d558df82fb04a26152632525154e7b42244b71141f45 namespace=k8s.io Jun 25 18:31:21.475358 containerd[1730]: time="2024-06-25T18:31:21.475363420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:31:21.484957 containerd[1730]: time="2024-06-25T18:31:21.484892682Z" level=warning msg="cleanup warnings time=\"2024-06-25T18:31:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 18:31:22.055630 containerd[1730]: time="2024-06-25T18:31:22.055394281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:31:22.977585 kubelet[3240]: E0625 18:31:22.977460 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fsbs" podUID="d7c63612-a464-4ae6-b2ed-1cf040476205" Jun 25 18:31:24.740388 containerd[1730]: time="2024-06-25T18:31:24.740301781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:24.743167 containerd[1730]: time="2024-06-25T18:31:24.742975707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jun 25 18:31:24.746550 containerd[1730]: time="2024-06-25T18:31:24.746489516Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:24.751265 containerd[1730]: time="2024-06-25T18:31:24.751191168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:24.752359 containerd[1730]: 
time="2024-06-25T18:31:24.751864850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 2.696429128s" Jun 25 18:31:24.752359 containerd[1730]: time="2024-06-25T18:31:24.751903730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jun 25 18:31:24.754735 containerd[1730]: time="2024-06-25T18:31:24.754681857Z" level=info msg="CreateContainer within sandbox \"dc1d5c1b91adb8ebaa5b7054cb97a2e710f6b58e09d3dbd335b2624cb063a463\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:31:24.790029 containerd[1730]: time="2024-06-25T18:31:24.789975305Z" level=info msg="CreateContainer within sandbox \"dc1d5c1b91adb8ebaa5b7054cb97a2e710f6b58e09d3dbd335b2624cb063a463\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c2a911180fc2c4dd46e4925fe5aa4753b9604f279f94d1e58eaac82448546835\"" Jun 25 18:31:24.790745 containerd[1730]: time="2024-06-25T18:31:24.790707107Z" level=info msg="StartContainer for \"c2a911180fc2c4dd46e4925fe5aa4753b9604f279f94d1e58eaac82448546835\"" Jun 25 18:31:24.818991 systemd[1]: run-containerd-runc-k8s.io-c2a911180fc2c4dd46e4925fe5aa4753b9604f279f94d1e58eaac82448546835-runc.6FS00r.mount: Deactivated successfully. Jun 25 18:31:24.826815 systemd[1]: Started cri-containerd-c2a911180fc2c4dd46e4925fe5aa4753b9604f279f94d1e58eaac82448546835.scope - libcontainer container c2a911180fc2c4dd46e4925fe5aa4753b9604f279f94d1e58eaac82448546835. Jun 25 18:31:24.861057 containerd[1730]: time="2024-06-25T18:31:24.860876323Z" level=info msg="StartContainer for \"c2a911180fc2c4dd46e4925fe5aa4753b9604f279f94d1e58eaac82448546835\" returns successfully" Jun 25 18:31:24.977797 kubelet[3240]: E0625 18:31:24.977662 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fsbs" podUID="d7c63612-a464-4ae6-b2ed-1cf040476205" Jun 25 18:31:25.925037 containerd[1730]: time="2024-06-25T18:31:25.924903393Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:31:25.928017 systemd[1]: cri-containerd-c2a911180fc2c4dd46e4925fe5aa4753b9604f279f94d1e58eaac82448546835.scope: Deactivated successfully. Jun 25 18:31:25.952018 kubelet[3240]: I0625 18:31:25.951982 3240 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 18:31:25.962269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2a911180fc2c4dd46e4925fe5aa4753b9604f279f94d1e58eaac82448546835-rootfs.mount: Deactivated successfully. 
Jun 25 18:31:25.988052 kubelet[3240]: I0625 18:31:25.987999 3240 topology_manager.go:215] "Topology Admit Handler" podUID="8cd02293-f883-498a-bd4f-78e9c942abd5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9n2sl" Jun 25 18:31:26.305636 kubelet[3240]: I0625 18:31:25.991892 3240 topology_manager.go:215] "Topology Admit Handler" podUID="6cfbff32-c1d6-4bd6-b422-e4dce0d07843" podNamespace="calico-system" podName="calico-kube-controllers-7d4dd6c659-jr6s7" Jun 25 18:31:26.305636 kubelet[3240]: I0625 18:31:25.994054 3240 topology_manager.go:215] "Topology Admit Handler" podUID="82ebf39f-e0ab-44c8-9f59-93bb792499e0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kqz2s" Jun 25 18:31:26.305636 kubelet[3240]: I0625 18:31:26.100838 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgtbk\" (UniqueName: \"kubernetes.io/projected/6cfbff32-c1d6-4bd6-b422-e4dce0d07843-kube-api-access-wgtbk\") pod \"calico-kube-controllers-7d4dd6c659-jr6s7\" (UID: \"6cfbff32-c1d6-4bd6-b422-e4dce0d07843\") " pod="calico-system/calico-kube-controllers-7d4dd6c659-jr6s7" Jun 25 18:31:26.305636 kubelet[3240]: I0625 18:31:26.100878 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cd02293-f883-498a-bd4f-78e9c942abd5-config-volume\") pod \"coredns-7db6d8ff4d-9n2sl\" (UID: \"8cd02293-f883-498a-bd4f-78e9c942abd5\") " pod="kube-system/coredns-7db6d8ff4d-9n2sl" Jun 25 18:31:26.305636 kubelet[3240]: I0625 18:31:26.100898 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rbtg\" (UniqueName: \"kubernetes.io/projected/82ebf39f-e0ab-44c8-9f59-93bb792499e0-kube-api-access-4rbtg\") pod \"coredns-7db6d8ff4d-kqz2s\" (UID: \"82ebf39f-e0ab-44c8-9f59-93bb792499e0\") " pod="kube-system/coredns-7db6d8ff4d-kqz2s" Jun 25 18:31:26.305636 kubelet[3240]: I0625 18:31:26.100925 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfbff32-c1d6-4bd6-b422-e4dce0d07843-tigera-ca-bundle\") pod \"calico-kube-controllers-7d4dd6c659-jr6s7\" (UID: \"6cfbff32-c1d6-4bd6-b422-e4dce0d07843\") " pod="calico-system/calico-kube-controllers-7d4dd6c659-jr6s7" Jun 25 18:31:25.995603 systemd[1]: Created slice kubepods-burstable-pod8cd02293_f883_498a_bd4f_78e9c942abd5.slice - libcontainer container kubepods-burstable-pod8cd02293_f883_498a_bd4f_78e9c942abd5.slice. 
Jun 25 18:31:26.305950 kubelet[3240]: I0625 18:31:26.101038 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82ebf39f-e0ab-44c8-9f59-93bb792499e0-config-volume\") pod \"coredns-7db6d8ff4d-kqz2s\" (UID: \"82ebf39f-e0ab-44c8-9f59-93bb792499e0\") " pod="kube-system/coredns-7db6d8ff4d-kqz2s" Jun 25 18:31:26.305950 kubelet[3240]: I0625 18:31:26.101082 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm2vv\" (UniqueName: \"kubernetes.io/projected/8cd02293-f883-498a-bd4f-78e9c942abd5-kube-api-access-tm2vv\") pod \"coredns-7db6d8ff4d-9n2sl\" (UID: \"8cd02293-f883-498a-bd4f-78e9c942abd5\") " pod="kube-system/coredns-7db6d8ff4d-9n2sl" Jun 25 18:31:26.005997 systemd[1]: Created slice kubepods-besteffort-pod6cfbff32_c1d6_4bd6_b422_e4dce0d07843.slice - libcontainer container kubepods-besteffort-pod6cfbff32_c1d6_4bd6_b422_e4dce0d07843.slice. Jun 25 18:31:26.013648 systemd[1]: Created slice kubepods-burstable-pod82ebf39f_e0ab_44c8_9f59_93bb792499e0.slice - libcontainer container kubepods-burstable-pod82ebf39f_e0ab_44c8_9f59_93bb792499e0.slice. Jun 25 18:31:26.606985 containerd[1730]: time="2024-06-25T18:31:26.606855504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n2sl,Uid:8cd02293-f883-498a-bd4f-78e9c942abd5,Namespace:kube-system,Attempt:0,}" Jun 25 18:31:26.609078 containerd[1730]: time="2024-06-25T18:31:26.608764469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4dd6c659-jr6s7,Uid:6cfbff32-c1d6-4bd6-b422-e4dce0d07843,Namespace:calico-system,Attempt:0,}" Jun 25 18:31:26.614660 containerd[1730]: time="2024-06-25T18:31:26.614466363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kqz2s,Uid:82ebf39f-e0ab-44c8-9f59-93bb792499e0,Namespace:kube-system,Attempt:0,}" Jun 25 18:31:26.982453 systemd[1]: Created slice kubepods-besteffort-podd7c63612_a464_4ae6_b2ed_1cf040476205.slice - libcontainer container kubepods-besteffort-podd7c63612_a464_4ae6_b2ed_1cf040476205.slice. 
Jun 25 18:31:26.985469 containerd[1730]: time="2024-06-25T18:31:26.985384774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9fsbs,Uid:d7c63612-a464-4ae6-b2ed-1cf040476205,Namespace:calico-system,Attempt:0,}" Jun 25 18:31:27.115929 containerd[1730]: time="2024-06-25T18:31:27.115871261Z" level=info msg="shim disconnected" id=c2a911180fc2c4dd46e4925fe5aa4753b9604f279f94d1e58eaac82448546835 namespace=k8s.io Jun 25 18:31:27.115929 containerd[1730]: time="2024-06-25T18:31:27.115922941Z" level=warning msg="cleaning up after shim disconnected" id=c2a911180fc2c4dd46e4925fe5aa4753b9604f279f94d1e58eaac82448546835 namespace=k8s.io Jun 25 18:31:27.115929 containerd[1730]: time="2024-06-25T18:31:27.115931141Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:31:27.298332 containerd[1730]: time="2024-06-25T18:31:27.298205399Z" level=error msg="Failed to destroy network for sandbox \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.300531 containerd[1730]: time="2024-06-25T18:31:27.299978483Z" level=error msg="encountered an error cleaning up failed sandbox \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.300531 containerd[1730]: time="2024-06-25T18:31:27.300037323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n2sl,Uid:8cd02293-f883-498a-bd4f-78e9c942abd5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.300642 kubelet[3240]: E0625 18:31:27.300600 3240 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.301075 kubelet[3240]: E0625 18:31:27.300678 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9n2sl" Jun 25 18:31:27.301075 kubelet[3240]: E0625 18:31:27.300697 3240 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9n2sl" Jun 25 18:31:27.301075 kubelet[3240]: E0625 18:31:27.300739 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9n2sl_kube-system(8cd02293-f883-498a-bd4f-78e9c942abd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9n2sl_kube-system(8cd02293-f883-498a-bd4f-78e9c942abd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9n2sl" podUID="8cd02293-f883-498a-bd4f-78e9c942abd5" Jun 25 18:31:27.304387 containerd[1730]: time="2024-06-25T18:31:27.304235814Z" level=error msg="Failed to destroy network for sandbox \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.304839 containerd[1730]: time="2024-06-25T18:31:27.304810695Z" level=error msg="encountered an error cleaning up failed sandbox \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.305414 containerd[1730]: time="2024-06-25T18:31:27.305382297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kqz2s,Uid:82ebf39f-e0ab-44c8-9f59-93bb792499e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.305822 kubelet[3240]: E0625 18:31:27.305670 3240 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.305822 kubelet[3240]: E0625 18:31:27.305721 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kqz2s" Jun 25 18:31:27.305822 kubelet[3240]: E0625 18:31:27.305738 3240 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kqz2s" Jun 25 18:31:27.305957 kubelet[3240]: E0625 18:31:27.305783 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kqz2s_kube-system(82ebf39f-e0ab-44c8-9f59-93bb792499e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-kqz2s_kube-system(82ebf39f-e0ab-44c8-9f59-93bb792499e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kqz2s" podUID="82ebf39f-e0ab-44c8-9f59-93bb792499e0" Jun 25 18:31:27.321100 containerd[1730]: time="2024-06-25T18:31:27.321013256Z" level=error msg="Failed to destroy network for sandbox \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.321942 containerd[1730]: time="2024-06-25T18:31:27.321882578Z" level=error msg="encountered an error cleaning up failed sandbox \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.322932 containerd[1730]: time="2024-06-25T18:31:27.322858861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9fsbs,Uid:d7c63612-a464-4ae6-b2ed-1cf040476205,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.322932 containerd[1730]: time="2024-06-25T18:31:27.322901541Z" level=error msg="Failed to destroy network for sandbox \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.324297 containerd[1730]: time="2024-06-25T18:31:27.323177781Z" level=error msg="encountered an error cleaning up failed sandbox \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.324297 containerd[1730]: time="2024-06-25T18:31:27.323226262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4dd6c659-jr6s7,Uid:6cfbff32-c1d6-4bd6-b422-e4dce0d07843,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.324439 kubelet[3240]: E0625 18:31:27.323230 3240 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.324439 kubelet[3240]: E0625 18:31:27.323276 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9fsbs" Jun 25 18:31:27.324439 kubelet[3240]: E0625 18:31:27.323388 3240 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:27.324439 kubelet[3240]: E0625 18:31:27.323411 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d4dd6c659-jr6s7" Jun 25 18:31:27.324568 kubelet[3240]: E0625 18:31:27.323428 3240 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d4dd6c659-jr6s7" Jun 25 18:31:27.324568 kubelet[3240]: E0625 18:31:27.323467 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d4dd6c659-jr6s7_calico-system(6cfbff32-c1d6-4bd6-b422-e4dce0d07843)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d4dd6c659-jr6s7_calico-system(6cfbff32-c1d6-4bd6-b422-e4dce0d07843)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d4dd6c659-jr6s7" podUID="6cfbff32-c1d6-4bd6-b422-e4dce0d07843" Jun 25 18:31:27.324880 kubelet[3240]: E0625 18:31:27.324726 3240 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9fsbs" Jun 25 18:31:27.324880 kubelet[3240]: E0625 18:31:27.324789 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9fsbs_calico-system(d7c63612-a464-4ae6-b2ed-1cf040476205)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9fsbs_calico-system(d7c63612-a464-4ae6-b2ed-1cf040476205)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9fsbs" podUID="d7c63612-a464-4ae6-b2ed-1cf040476205" Jun 25 18:31:28.071776 kubelet[3240]: I0625 18:31:28.071742 3240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:28.073000 containerd[1730]: time="2024-06-25T18:31:28.072720262Z" level=info msg="StopPodSandbox for \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\"" Jun 25 18:31:28.073894 containerd[1730]: time="2024-06-25T18:31:28.073580904Z" level=info msg="Ensure that sandbox 0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372 in task-service has been cleanup successfully" Jun 25 18:31:28.074297 kubelet[3240]: I0625 18:31:28.073994 3240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:28.074919 containerd[1730]: time="2024-06-25T18:31:28.074529467Z" level=info msg="StopPodSandbox for \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\"" Jun 25 18:31:28.074919 containerd[1730]: time="2024-06-25T18:31:28.074687227Z" level=info msg="Ensure that sandbox e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92 in task-service has been cleanup successfully" Jun 25 18:31:28.077718 kubelet[3240]: I0625 18:31:28.077682 3240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:28.078347 containerd[1730]: time="2024-06-25T18:31:28.078052396Z" level=info msg="StopPodSandbox for \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\"" Jun 25 18:31:28.078347 containerd[1730]: time="2024-06-25T18:31:28.078202876Z" level=info msg="Ensure that sandbox 9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b in task-service has been cleanup successfully" Jun 25 18:31:28.082781 kubelet[3240]: I0625 18:31:28.081829 3240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:28.082889 containerd[1730]: time="2024-06-25T18:31:28.082752727Z" level=info msg="StopPodSandbox for \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\"" Jun 25 18:31:28.084412 containerd[1730]: time="2024-06-25T18:31:28.084293611Z" level=info msg="Ensure that sandbox 
bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63 in task-service has been cleanup successfully" Jun 25 18:31:28.092693 containerd[1730]: time="2024-06-25T18:31:28.091671150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 18:31:28.138967 containerd[1730]: time="2024-06-25T18:31:28.138913788Z" level=error msg="StopPodSandbox for \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\" failed" error="failed to destroy network for sandbox \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:28.139339 kubelet[3240]: E0625 18:31:28.139286 3240 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:28.139419 kubelet[3240]: E0625 18:31:28.139358 3240 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372"} Jun 25 18:31:28.139419 kubelet[3240]: E0625 18:31:28.139412 3240 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82ebf39f-e0ab-44c8-9f59-93bb792499e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:31:28.139497 kubelet[3240]: E0625 18:31:28.139433 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82ebf39f-e0ab-44c8-9f59-93bb792499e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kqz2s" podUID="82ebf39f-e0ab-44c8-9f59-93bb792499e0" Jun 25 18:31:28.143246 containerd[1730]: time="2024-06-25T18:31:28.143210039Z" level=error msg="StopPodSandbox for \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\" failed" error="failed to destroy network for sandbox \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:28.143537 kubelet[3240]: E0625 18:31:28.143481 3240 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:28.143607 kubelet[3240]: E0625 18:31:28.143541 3240 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63"} Jun 25 18:31:28.143607 kubelet[3240]: E0625 18:31:28.143587 3240 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7c63612-a464-4ae6-b2ed-1cf040476205\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:31:28.143689 kubelet[3240]: E0625 18:31:28.143606 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7c63612-a464-4ae6-b2ed-1cf040476205\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9fsbs" podUID="d7c63612-a464-4ae6-b2ed-1cf040476205" Jun 25 18:31:28.144846 containerd[1730]: time="2024-06-25T18:31:28.144800003Z" level=error msg="StopPodSandbox for \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\" failed" error="failed to destroy network for sandbox \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:28.145073 kubelet[3240]: E0625 18:31:28.145041 3240 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:28.145137 kubelet[3240]: E0625 18:31:28.145082 3240 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b"} Jun 25 18:31:28.145137 kubelet[3240]: E0625 18:31:28.145109 3240 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8cd02293-f883-498a-bd4f-78e9c942abd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:31:28.145137 kubelet[3240]: E0625 18:31:28.145127 3240 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"8cd02293-f883-498a-bd4f-78e9c942abd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9n2sl" podUID="8cd02293-f883-498a-bd4f-78e9c942abd5" Jun 25 18:31:28.145671 containerd[1730]: time="2024-06-25T18:31:28.145628365Z" level=error msg="StopPodSandbox for \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\" failed" error="failed to destroy network for sandbox \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:31:28.145795 kubelet[3240]: E0625 18:31:28.145761 3240 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:28.145795 kubelet[3240]: E0625 18:31:28.145790 3240 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92"} Jun 25 18:31:28.145855 kubelet[3240]: E0625 18:31:28.145814 3240 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6cfbff32-c1d6-4bd6-b422-e4dce0d07843\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:31:28.145855 kubelet[3240]: E0625 18:31:28.145831 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6cfbff32-c1d6-4bd6-b422-e4dce0d07843\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d4dd6c659-jr6s7" podUID="6cfbff32-c1d6-4bd6-b422-e4dce0d07843" Jun 25 18:31:28.173269 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63-shm.mount: Deactivated successfully. Jun 25 18:31:28.173375 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372-shm.mount: Deactivated successfully. 
Jun 25 18:31:28.173426 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92-shm.mount: Deactivated successfully. Jun 25 18:31:28.173473 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b-shm.mount: Deactivated successfully. Jun 25 18:31:31.562931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount636733131.mount: Deactivated successfully. Jun 25 18:31:31.614359 containerd[1730]: time="2024-06-25T18:31:31.614279509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:31.616396 containerd[1730]: time="2024-06-25T18:31:31.616247473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jun 25 18:31:31.619400 containerd[1730]: time="2024-06-25T18:31:31.619295641Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:31.623630 containerd[1730]: time="2024-06-25T18:31:31.623581172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:31.624561 containerd[1730]: time="2024-06-25T18:31:31.624070853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 3.532365623s" Jun 25 18:31:31.624561 containerd[1730]: time="2024-06-25T18:31:31.624106373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jun 25 18:31:31.637483 containerd[1730]: time="2024-06-25T18:31:31.637446767Z" level=info msg="CreateContainer within sandbox \"dc1d5c1b91adb8ebaa5b7054cb97a2e710f6b58e09d3dbd335b2624cb063a463\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 18:31:31.676292 containerd[1730]: time="2024-06-25T18:31:31.676235544Z" level=info msg="CreateContainer within sandbox \"dc1d5c1b91adb8ebaa5b7054cb97a2e710f6b58e09d3dbd335b2624cb063a463\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d207c3c2803ea47ffc6ddac8d40f9e5a846a00bc27a805017353659d7d6e39e0\"" Jun 25 18:31:31.676942 containerd[1730]: time="2024-06-25T18:31:31.676908586Z" level=info msg="StartContainer for \"d207c3c2803ea47ffc6ddac8d40f9e5a846a00bc27a805017353659d7d6e39e0\"" Jun 25 18:31:31.704522 systemd[1]: Started cri-containerd-d207c3c2803ea47ffc6ddac8d40f9e5a846a00bc27a805017353659d7d6e39e0.scope - libcontainer container d207c3c2803ea47ffc6ddac8d40f9e5a846a00bc27a805017353659d7d6e39e0. Jun 25 18:31:31.734809 containerd[1730]: time="2024-06-25T18:31:31.734770691Z" level=info msg="StartContainer for \"d207c3c2803ea47ffc6ddac8d40f9e5a846a00bc27a805017353659d7d6e39e0\" returns successfully" Jun 25 18:31:32.142895 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 18:31:32.143084 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 25 18:31:38.977934 containerd[1730]: time="2024-06-25T18:31:38.977603785Z" level=info msg="StopPodSandbox for \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\"" Jun 25 18:31:39.017624 kubelet[3240]: I0625 18:31:39.017548 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zbfkn" podStartSLOduration=7.90679769 podStartE2EDuration="22.017382121s" podCreationTimestamp="2024-06-25 18:31:17 +0000 UTC" firstStartedPulling="2024-06-25 18:31:17.514149624 +0000 UTC m=+23.668965029" lastFinishedPulling="2024-06-25 18:31:31.624734015 +0000 UTC m=+37.779549460" observedRunningTime="2024-06-25 18:31:32.112603679 +0000 UTC m=+38.267419124" watchObservedRunningTime="2024-06-25 18:31:39.017382121 +0000 UTC m=+45.172197566" Jun 25 18:31:39.045761 containerd[1730]: 2024-06-25 18:31:39.015 [INFO][4444] k8s.go 608: Cleaning up netns ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:39.045761 containerd[1730]: 2024-06-25 18:31:39.016 [INFO][4444] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" iface="eth0" netns="/var/run/netns/cni-4fe62a2d-3f6d-2b29-614e-670367f7bebe" Jun 25 18:31:39.045761 containerd[1730]: 2024-06-25 18:31:39.016 [INFO][4444] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" iface="eth0" netns="/var/run/netns/cni-4fe62a2d-3f6d-2b29-614e-670367f7bebe" Jun 25 18:31:39.045761 containerd[1730]: 2024-06-25 18:31:39.017 [INFO][4444] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" iface="eth0" netns="/var/run/netns/cni-4fe62a2d-3f6d-2b29-614e-670367f7bebe" Jun 25 18:31:39.045761 containerd[1730]: 2024-06-25 18:31:39.017 [INFO][4444] k8s.go 615: Releasing IP address(es) ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:39.045761 containerd[1730]: 2024-06-25 18:31:39.017 [INFO][4444] utils.go 188: Calico CNI releasing IP address ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:39.045761 containerd[1730]: 2024-06-25 18:31:39.034 [INFO][4450] ipam_plugin.go 411: Releasing address using handleID ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" HandleID="k8s-pod-network.e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:39.045761 containerd[1730]: 2024-06-25 18:31:39.034 [INFO][4450] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:39.045761 containerd[1730]: 2024-06-25 18:31:39.034 [INFO][4450] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:31:39.045761 containerd[1730]: 2024-06-25 18:31:39.042 [WARNING][4450] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" HandleID="k8s-pod-network.e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:39.045761 containerd[1730]: 2024-06-25 18:31:39.042 [INFO][4450] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" HandleID="k8s-pod-network.e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:39.045761 containerd[1730]: 2024-06-25 18:31:39.043 [INFO][4450] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:31:39.045761 containerd[1730]: 2024-06-25 18:31:39.044 [INFO][4444] k8s.go 621: Teardown processing complete. ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:39.046975 containerd[1730]: time="2024-06-25T18:31:39.045879869Z" level=info msg="TearDown network for sandbox \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\" successfully" Jun 25 18:31:39.046975 containerd[1730]: time="2024-06-25T18:31:39.045904509Z" level=info msg="StopPodSandbox for \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\" returns successfully" Jun 25 18:31:39.047968 systemd[1]: run-netns-cni\x2d4fe62a2d\x2d3f6d\x2d2b29\x2d614e\x2d670367f7bebe.mount: Deactivated successfully. Jun 25 18:31:39.048896 containerd[1730]: time="2024-06-25T18:31:39.048858836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4dd6c659-jr6s7,Uid:6cfbff32-c1d6-4bd6-b422-e4dce0d07843,Namespace:calico-system,Attempt:1,}" Jun 25 18:31:39.197619 systemd-networkd[1347]: calif7688ec77b5: Link UP Jun 25 18:31:39.197796 systemd-networkd[1347]: calif7688ec77b5: Gained carrier Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.111 [INFO][4456] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.124 [INFO][4456] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0 calico-kube-controllers-7d4dd6c659- calico-system 6cfbff32-c1d6-4bd6-b422-e4dce0d07843 676 0 2024-06-25 18:31:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d4dd6c659 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4012.0.0-a-5284b277fa calico-kube-controllers-7d4dd6c659-jr6s7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif7688ec77b5 [] []}} ContainerID="2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" Namespace="calico-system" Pod="calico-kube-controllers-7d4dd6c659-jr6s7" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-" Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.125 [INFO][4456] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" Namespace="calico-system" Pod="calico-kube-controllers-7d4dd6c659-jr6s7" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 
18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.149 [INFO][4470] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" HandleID="k8s-pod-network.2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.160 [INFO][4470] ipam_plugin.go 264: Auto assigning IP ContainerID="2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" HandleID="k8s-pod-network.2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000261ed0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-a-5284b277fa", "pod":"calico-kube-controllers-7d4dd6c659-jr6s7", "timestamp":"2024-06-25 18:31:39.149087116 +0000 UTC"}, Hostname:"ci-4012.0.0-a-5284b277fa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.160 [INFO][4470] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.160 [INFO][4470] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.160 [INFO][4470] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-5284b277fa' Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.161 [INFO][4470] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.165 [INFO][4470] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.170 [INFO][4470] ipam.go 489: Trying affinity for 192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.172 [INFO][4470] ipam.go 155: Attempting to load block cidr=192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.174 [INFO][4470] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.174 [INFO][4470] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.177 [INFO][4470] ipam.go 1685: Creating new handle: k8s-pod-network.2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.180 [INFO][4470] ipam.go 1203: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.183 [INFO][4470] ipam.go 1216: Successfully claimed IPs: [192.168.80.129/26] 
block=192.168.80.128/26 handle="k8s-pod-network.2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.183 [INFO][4470] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.129/26] handle="k8s-pod-network.2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.183 [INFO][4470] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:31:39.211005 containerd[1730]: 2024-06-25 18:31:39.184 [INFO][4470] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.80.129/26] IPv6=[] ContainerID="2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" HandleID="k8s-pod-network.2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:39.211602 containerd[1730]: 2024-06-25 18:31:39.186 [INFO][4456] k8s.go 386: Populated endpoint ContainerID="2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" Namespace="calico-system" Pod="calico-kube-controllers-7d4dd6c659-jr6s7" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0", GenerateName:"calico-kube-controllers-7d4dd6c659-", Namespace:"calico-system", SelfLink:"", UID:"6cfbff32-c1d6-4bd6-b422-e4dce0d07843", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4dd6c659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"", Pod:"calico-kube-controllers-7d4dd6c659-jr6s7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7688ec77b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:39.211602 containerd[1730]: 2024-06-25 18:31:39.186 [INFO][4456] k8s.go 387: Calico CNI using IPs: [192.168.80.129/32] ContainerID="2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" Namespace="calico-system" Pod="calico-kube-controllers-7d4dd6c659-jr6s7" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:39.211602 containerd[1730]: 2024-06-25 18:31:39.186 [INFO][4456] dataplane_linux.go 68: Setting the host side veth name to calif7688ec77b5 ContainerID="2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" Namespace="calico-system" Pod="calico-kube-controllers-7d4dd6c659-jr6s7" 
WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:39.211602 containerd[1730]: 2024-06-25 18:31:39.194 [INFO][4456] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" Namespace="calico-system" Pod="calico-kube-controllers-7d4dd6c659-jr6s7" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:39.211602 containerd[1730]: 2024-06-25 18:31:39.194 [INFO][4456] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" Namespace="calico-system" Pod="calico-kube-controllers-7d4dd6c659-jr6s7" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0", GenerateName:"calico-kube-controllers-7d4dd6c659-", Namespace:"calico-system", SelfLink:"", UID:"6cfbff32-c1d6-4bd6-b422-e4dce0d07843", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4dd6c659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d", Pod:"calico-kube-controllers-7d4dd6c659-jr6s7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7688ec77b5", MAC:"7e:51:39:f4:82:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:39.211602 containerd[1730]: 2024-06-25 18:31:39.208 [INFO][4456] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d" Namespace="calico-system" Pod="calico-kube-controllers-7d4dd6c659-jr6s7" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:39.230025 containerd[1730]: time="2024-06-25T18:31:39.229865310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:31:39.230025 containerd[1730]: time="2024-06-25T18:31:39.229913750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:39.230025 containerd[1730]: time="2024-06-25T18:31:39.229927150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:31:39.230564 containerd[1730]: time="2024-06-25T18:31:39.229938950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:39.252482 systemd[1]: Started cri-containerd-2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d.scope - libcontainer container 2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d. Jun 25 18:31:39.283780 containerd[1730]: time="2024-06-25T18:31:39.283729639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4dd6c659-jr6s7,Uid:6cfbff32-c1d6-4bd6-b422-e4dce0d07843,Namespace:calico-system,Attempt:1,} returns sandbox id \"2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d\"" Jun 25 18:31:39.286175 containerd[1730]: time="2024-06-25T18:31:39.286142085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 18:31:40.410625 systemd-networkd[1347]: calif7688ec77b5: Gained IPv6LL Jun 25 18:31:40.978200 containerd[1730]: time="2024-06-25T18:31:40.977964226Z" level=info msg="StopPodSandbox for \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\"" Jun 25 18:31:40.979176 containerd[1730]: time="2024-06-25T18:31:40.978105547Z" level=info msg="StopPodSandbox for \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\"" Jun 25 18:31:41.087070 containerd[1730]: 2024-06-25 18:31:41.043 [INFO][4600] k8s.go 608: Cleaning up netns ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:41.087070 containerd[1730]: 2024-06-25 18:31:41.043 [INFO][4600] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" iface="eth0" netns="/var/run/netns/cni-066ab288-5530-9bb5-90ee-4ee16be76b4f" Jun 25 18:31:41.087070 containerd[1730]: 2024-06-25 18:31:41.047 [INFO][4600] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" iface="eth0" netns="/var/run/netns/cni-066ab288-5530-9bb5-90ee-4ee16be76b4f" Jun 25 18:31:41.087070 containerd[1730]: 2024-06-25 18:31:41.048 [INFO][4600] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" iface="eth0" netns="/var/run/netns/cni-066ab288-5530-9bb5-90ee-4ee16be76b4f" Jun 25 18:31:41.087070 containerd[1730]: 2024-06-25 18:31:41.048 [INFO][4600] k8s.go 615: Releasing IP address(es) ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:41.087070 containerd[1730]: 2024-06-25 18:31:41.048 [INFO][4600] utils.go 188: Calico CNI releasing IP address ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:41.087070 containerd[1730]: 2024-06-25 18:31:41.070 [INFO][4615] ipam_plugin.go 411: Releasing address using handleID ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" HandleID="k8s-pod-network.bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Workload="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:41.087070 containerd[1730]: 2024-06-25 18:31:41.070 [INFO][4615] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:41.087070 containerd[1730]: 2024-06-25 18:31:41.070 [INFO][4615] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:31:41.087070 containerd[1730]: 2024-06-25 18:31:41.080 [WARNING][4615] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" HandleID="k8s-pod-network.bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Workload="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:41.087070 containerd[1730]: 2024-06-25 18:31:41.080 [INFO][4615] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" HandleID="k8s-pod-network.bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Workload="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:41.087070 containerd[1730]: 2024-06-25 18:31:41.082 [INFO][4615] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:31:41.087070 containerd[1730]: 2024-06-25 18:31:41.084 [INFO][4600] k8s.go 621: Teardown processing complete. ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:41.091086 containerd[1730]: time="2024-06-25T18:31:41.087150450Z" level=info msg="TearDown network for sandbox \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\" successfully" Jun 25 18:31:41.091086 containerd[1730]: time="2024-06-25T18:31:41.087350650Z" level=info msg="StopPodSandbox for \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\" returns successfully" Jun 25 18:31:41.091086 containerd[1730]: time="2024-06-25T18:31:41.090150057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9fsbs,Uid:d7c63612-a464-4ae6-b2ed-1cf040476205,Namespace:calico-system,Attempt:1,}" Jun 25 18:31:41.091235 systemd[1]: run-netns-cni\x2d066ab288\x2d5530\x2d9bb5\x2d90ee\x2d4ee16be76b4f.mount: Deactivated successfully. Jun 25 18:31:41.112895 containerd[1730]: 2024-06-25 18:31:41.041 [INFO][4601] k8s.go 608: Cleaning up netns ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:41.112895 containerd[1730]: 2024-06-25 18:31:41.041 [INFO][4601] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" iface="eth0" netns="/var/run/netns/cni-a7f5b92b-d9b1-944a-3b19-9d01ce1e8646" Jun 25 18:31:41.112895 containerd[1730]: 2024-06-25 18:31:41.041 [INFO][4601] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" iface="eth0" netns="/var/run/netns/cni-a7f5b92b-d9b1-944a-3b19-9d01ce1e8646" Jun 25 18:31:41.112895 containerd[1730]: 2024-06-25 18:31:41.042 [INFO][4601] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" iface="eth0" netns="/var/run/netns/cni-a7f5b92b-d9b1-944a-3b19-9d01ce1e8646" Jun 25 18:31:41.112895 containerd[1730]: 2024-06-25 18:31:41.042 [INFO][4601] k8s.go 615: Releasing IP address(es) ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:41.112895 containerd[1730]: 2024-06-25 18:31:41.042 [INFO][4601] utils.go 188: Calico CNI releasing IP address ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:41.112895 containerd[1730]: 2024-06-25 18:31:41.095 [INFO][4613] ipam_plugin.go 411: Releasing address using handleID ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" HandleID="k8s-pod-network.9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:41.112895 containerd[1730]: 2024-06-25 18:31:41.095 [INFO][4613] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:41.112895 containerd[1730]: 2024-06-25 18:31:41.095 [INFO][4613] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:31:41.112895 containerd[1730]: 2024-06-25 18:31:41.107 [WARNING][4613] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" HandleID="k8s-pod-network.9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:41.112895 containerd[1730]: 2024-06-25 18:31:41.107 [INFO][4613] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" HandleID="k8s-pod-network.9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:41.112895 containerd[1730]: 2024-06-25 18:31:41.108 [INFO][4613] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:31:41.112895 containerd[1730]: 2024-06-25 18:31:41.111 [INFO][4601] k8s.go 621: Teardown processing complete. ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:41.114501 containerd[1730]: time="2024-06-25T18:31:41.114454476Z" level=info msg="TearDown network for sandbox \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\" successfully" Jun 25 18:31:41.114501 containerd[1730]: time="2024-06-25T18:31:41.114490636Z" level=info msg="StopPodSandbox for \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\" returns successfully" Jun 25 18:31:41.116802 containerd[1730]: time="2024-06-25T18:31:41.116539761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n2sl,Uid:8cd02293-f883-498a-bd4f-78e9c942abd5,Namespace:kube-system,Attempt:1,}" Jun 25 18:31:41.117112 systemd[1]: run-netns-cni\x2da7f5b92b\x2dd9b1\x2d944a\x2d3b19\x2d9d01ce1e8646.mount: Deactivated successfully. 
Jun 25 18:31:41.238455 systemd-networkd[1347]: calide9111e14b9: Link UP Jun 25 18:31:41.238794 systemd-networkd[1347]: calide9111e14b9: Gained carrier Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.160 [INFO][4628] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.174 [INFO][4628] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0 csi-node-driver- calico-system d7c63612-a464-4ae6-b2ed-1cf040476205 687 0 2024-06-25 18:31:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4012.0.0-a-5284b277fa csi-node-driver-9fsbs eth0 default [] [] [kns.calico-system ksa.calico-system.default] calide9111e14b9 [] []}} ContainerID="337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" Namespace="calico-system" Pod="csi-node-driver-9fsbs" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-" Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.174 [INFO][4628] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" Namespace="calico-system" Pod="csi-node-driver-9fsbs" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.198 [INFO][4640] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" HandleID="k8s-pod-network.337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" Workload="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.209 [INFO][4640] ipam_plugin.go 264: Auto assigning IP ContainerID="337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" HandleID="k8s-pod-network.337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" Workload="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c700), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-a-5284b277fa", "pod":"csi-node-driver-9fsbs", "timestamp":"2024-06-25 18:31:41.198551359 +0000 UTC"}, Hostname:"ci-4012.0.0-a-5284b277fa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.209 [INFO][4640] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.209 [INFO][4640] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.209 [INFO][4640] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-5284b277fa' Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.211 [INFO][4640] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.216 [INFO][4640] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.220 [INFO][4640] ipam.go 489: Trying affinity for 192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.222 [INFO][4640] ipam.go 155: Attempting to load block cidr=192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.223 [INFO][4640] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.223 [INFO][4640] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.225 [INFO][4640] ipam.go 1685: Creating new handle: k8s-pod-network.337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.228 [INFO][4640] ipam.go 1203: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.233 [INFO][4640] ipam.go 1216: Successfully claimed IPs: [192.168.80.130/26] block=192.168.80.128/26 handle="k8s-pod-network.337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.233 [INFO][4640] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.130/26] handle="k8s-pod-network.337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.233 [INFO][4640] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:31:41.252889 containerd[1730]: 2024-06-25 18:31:41.233 [INFO][4640] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.80.130/26] IPv6=[] ContainerID="337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" HandleID="k8s-pod-network.337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" Workload="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:41.253808 containerd[1730]: 2024-06-25 18:31:41.235 [INFO][4628] k8s.go 386: Populated endpoint ContainerID="337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" Namespace="calico-system" Pod="csi-node-driver-9fsbs" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7c63612-a464-4ae6-b2ed-1cf040476205", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"", Pod:"csi-node-driver-9fsbs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.80.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calide9111e14b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:41.253808 containerd[1730]: 2024-06-25 18:31:41.235 [INFO][4628] k8s.go 387: Calico CNI using IPs: [192.168.80.130/32] ContainerID="337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" Namespace="calico-system" Pod="csi-node-driver-9fsbs" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:41.253808 containerd[1730]: 2024-06-25 18:31:41.235 [INFO][4628] dataplane_linux.go 68: Setting the host side veth name to calide9111e14b9 ContainerID="337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" Namespace="calico-system" Pod="csi-node-driver-9fsbs" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:41.253808 containerd[1730]: 2024-06-25 18:31:41.239 [INFO][4628] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" Namespace="calico-system" Pod="csi-node-driver-9fsbs" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:41.253808 containerd[1730]: 2024-06-25 18:31:41.240 [INFO][4628] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" Namespace="calico-system" Pod="csi-node-driver-9fsbs" 
WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7c63612-a464-4ae6-b2ed-1cf040476205", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba", Pod:"csi-node-driver-9fsbs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.80.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calide9111e14b9", MAC:"2a:c3:25:36:91:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:41.253808 containerd[1730]: 2024-06-25 18:31:41.251 [INFO][4628] k8s.go 500: Wrote updated endpoint to datastore ContainerID="337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba" Namespace="calico-system" Pod="csi-node-driver-9fsbs" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:41.385884 containerd[1730]: time="2024-06-25T18:31:41.385329249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:31:41.385884 containerd[1730]: time="2024-06-25T18:31:41.385388209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:41.385884 containerd[1730]: time="2024-06-25T18:31:41.385494450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:31:41.385884 containerd[1730]: time="2024-06-25T18:31:41.385513850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:41.404746 systemd[1]: Started cri-containerd-337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba.scope - libcontainer container 337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba. 
Jun 25 18:31:41.477419 containerd[1730]: time="2024-06-25T18:31:41.477368351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9fsbs,Uid:d7c63612-a464-4ae6-b2ed-1cf040476205,Namespace:calico-system,Attempt:1,} returns sandbox id \"337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba\"" Jun 25 18:31:41.595197 systemd-networkd[1347]: cali16abb49b0ce: Link UP Jun 25 18:31:41.595958 systemd-networkd[1347]: cali16abb49b0ce: Gained carrier Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.383 [INFO][4660] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.410 [INFO][4660] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0 coredns-7db6d8ff4d- kube-system 8cd02293-f883-498a-bd4f-78e9c942abd5 686 0 2024-06-25 18:31:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-a-5284b277fa coredns-7db6d8ff4d-9n2sl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali16abb49b0ce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n2sl" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-" Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.410 [INFO][4660] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n2sl" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.509 [INFO][4712] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" HandleID="k8s-pod-network.37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.539 [INFO][4712] ipam_plugin.go 264: Auto assigning IP ContainerID="37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" HandleID="k8s-pod-network.37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002bef90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-a-5284b277fa", "pod":"coredns-7db6d8ff4d-9n2sl", "timestamp":"2024-06-25 18:31:41.509201708 +0000 UTC"}, Hostname:"ci-4012.0.0-a-5284b277fa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.540 [INFO][4712] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.540 [INFO][4712] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.541 [INFO][4712] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-5284b277fa' Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.543 [INFO][4712] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.548 [INFO][4712] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.556 [INFO][4712] ipam.go 489: Trying affinity for 192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.558 [INFO][4712] ipam.go 155: Attempting to load block cidr=192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.565 [INFO][4712] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.566 [INFO][4712] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.567 [INFO][4712] ipam.go 1685: Creating new handle: k8s-pod-network.37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106 Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.574 [INFO][4712] ipam.go 1203: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.582 [INFO][4712] ipam.go 1216: Successfully claimed IPs: [192.168.80.131/26] block=192.168.80.128/26 handle="k8s-pod-network.37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.582 [INFO][4712] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.131/26] handle="k8s-pod-network.37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.583 [INFO][4712] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:31:41.622272 containerd[1730]: 2024-06-25 18:31:41.583 [INFO][4712] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.80.131/26] IPv6=[] ContainerID="37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" HandleID="k8s-pod-network.37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:41.624595 containerd[1730]: 2024-06-25 18:31:41.587 [INFO][4660] k8s.go 386: Populated endpoint ContainerID="37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n2sl" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8cd02293-f883-498a-bd4f-78e9c942abd5", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"", Pod:"coredns-7db6d8ff4d-9n2sl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16abb49b0ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:41.624595 containerd[1730]: 2024-06-25 18:31:41.588 [INFO][4660] k8s.go 387: Calico CNI using IPs: [192.168.80.131/32] ContainerID="37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n2sl" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:41.624595 containerd[1730]: 2024-06-25 18:31:41.588 [INFO][4660] dataplane_linux.go 68: Setting the host side veth name to cali16abb49b0ce ContainerID="37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n2sl" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:41.624595 containerd[1730]: 2024-06-25 18:31:41.595 [INFO][4660] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n2sl" 
WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:41.624595 containerd[1730]: 2024-06-25 18:31:41.596 [INFO][4660] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n2sl" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8cd02293-f883-498a-bd4f-78e9c942abd5", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106", Pod:"coredns-7db6d8ff4d-9n2sl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16abb49b0ce", MAC:"3a:1e:37:4b:a1:89", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:41.624595 containerd[1730]: 2024-06-25 18:31:41.614 [INFO][4660] k8s.go 500: Wrote updated endpoint to datastore ContainerID="37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9n2sl" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:41.665480 containerd[1730]: time="2024-06-25T18:31:41.665138524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:31:41.665480 containerd[1730]: time="2024-06-25T18:31:41.665263645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:41.666782 containerd[1730]: time="2024-06-25T18:31:41.665845846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:31:41.667268 containerd[1730]: time="2024-06-25T18:31:41.666851128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:41.687654 systemd[1]: Started cri-containerd-37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106.scope - libcontainer container 37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106. Jun 25 18:31:41.741976 containerd[1730]: time="2024-06-25T18:31:41.741717989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9n2sl,Uid:8cd02293-f883-498a-bd4f-78e9c942abd5,Namespace:kube-system,Attempt:1,} returns sandbox id \"37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106\"" Jun 25 18:31:41.748292 containerd[1730]: time="2024-06-25T18:31:41.748105244Z" level=info msg="CreateContainer within sandbox \"37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:31:41.791870 containerd[1730]: time="2024-06-25T18:31:41.791823590Z" level=info msg="CreateContainer within sandbox \"37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6056f938815874fe7b099f3116bf9cb4f4144f2c0444ed5b3960943923f48fa8\"" Jun 25 18:31:41.794413 containerd[1730]: time="2024-06-25T18:31:41.793504074Z" level=info msg="StartContainer for \"6056f938815874fe7b099f3116bf9cb4f4144f2c0444ed5b3960943923f48fa8\"" Jun 25 18:31:41.823698 systemd[1]: Started cri-containerd-6056f938815874fe7b099f3116bf9cb4f4144f2c0444ed5b3960943923f48fa8.scope - libcontainer container 6056f938815874fe7b099f3116bf9cb4f4144f2c0444ed5b3960943923f48fa8. Jun 25 18:31:41.856642 containerd[1730]: time="2024-06-25T18:31:41.856525786Z" level=info msg="StartContainer for \"6056f938815874fe7b099f3116bf9cb4f4144f2c0444ed5b3960943923f48fa8\" returns successfully" Jun 25 18:31:41.985061 containerd[1730]: time="2024-06-25T18:31:41.984892576Z" level=info msg="StopPodSandbox for \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\"" Jun 25 18:31:42.081474 containerd[1730]: 2024-06-25 18:31:42.035 [INFO][4844] k8s.go 608: Cleaning up netns ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:42.081474 containerd[1730]: 2024-06-25 18:31:42.035 [INFO][4844] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" iface="eth0" netns="/var/run/netns/cni-df65e6ab-b59f-b1c6-7545-7831cd755622" Jun 25 18:31:42.081474 containerd[1730]: 2024-06-25 18:31:42.036 [INFO][4844] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" iface="eth0" netns="/var/run/netns/cni-df65e6ab-b59f-b1c6-7545-7831cd755622" Jun 25 18:31:42.081474 containerd[1730]: 2024-06-25 18:31:42.037 [INFO][4844] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" iface="eth0" netns="/var/run/netns/cni-df65e6ab-b59f-b1c6-7545-7831cd755622" Jun 25 18:31:42.081474 containerd[1730]: 2024-06-25 18:31:42.037 [INFO][4844] k8s.go 615: Releasing IP address(es) ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:42.081474 containerd[1730]: 2024-06-25 18:31:42.037 [INFO][4844] utils.go 188: Calico CNI releasing IP address ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:42.081474 containerd[1730]: 2024-06-25 18:31:42.063 [INFO][4851] ipam_plugin.go 411: Releasing address using handleID ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" HandleID="k8s-pod-network.0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:42.081474 containerd[1730]: 2024-06-25 18:31:42.063 [INFO][4851] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:42.081474 containerd[1730]: 2024-06-25 18:31:42.064 [INFO][4851] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:31:42.081474 containerd[1730]: 2024-06-25 18:31:42.075 [WARNING][4851] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" HandleID="k8s-pod-network.0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:42.081474 containerd[1730]: 2024-06-25 18:31:42.075 [INFO][4851] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" HandleID="k8s-pod-network.0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:42.081474 containerd[1730]: 2024-06-25 18:31:42.079 [INFO][4851] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:31:42.081474 containerd[1730]: 2024-06-25 18:31:42.080 [INFO][4844] k8s.go 621: Teardown processing complete. ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:42.082749 containerd[1730]: time="2024-06-25T18:31:42.081511609Z" level=info msg="TearDown network for sandbox \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\" successfully" Jun 25 18:31:42.082749 containerd[1730]: time="2024-06-25T18:31:42.081536929Z" level=info msg="StopPodSandbox for \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\" returns successfully" Jun 25 18:31:42.082749 containerd[1730]: time="2024-06-25T18:31:42.082133010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kqz2s,Uid:82ebf39f-e0ab-44c8-9f59-93bb792499e0,Namespace:kube-system,Attempt:1,}" Jun 25 18:31:42.092693 systemd[1]: run-netns-cni\x2ddf65e6ab\x2db59f\x2db1c6\x2d7545\x2d7831cd755622.mount: Deactivated successfully. 
Jun 25 18:31:42.108651 containerd[1730]: time="2024-06-25T18:31:42.108542434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:42.111413 containerd[1730]: time="2024-06-25T18:31:42.111378121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jun 25 18:31:42.116991 containerd[1730]: time="2024-06-25T18:31:42.116115732Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:42.135492 containerd[1730]: time="2024-06-25T18:31:42.135451139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:42.137000 containerd[1730]: time="2024-06-25T18:31:42.136959342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 2.850764057s" Jun 25 18:31:42.137000 containerd[1730]: time="2024-06-25T18:31:42.137000503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jun 25 18:31:42.141603 kubelet[3240]: I0625 18:31:42.141410 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9n2sl" podStartSLOduration=32.141391073 podStartE2EDuration="32.141391073s" podCreationTimestamp="2024-06-25 18:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:31:42.138430226 +0000 UTC m=+48.293245671" watchObservedRunningTime="2024-06-25 18:31:42.141391073 +0000 UTC m=+48.296206518" Jun 25 18:31:42.149057 containerd[1730]: time="2024-06-25T18:31:42.143764999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:31:42.166853 containerd[1730]: time="2024-06-25T18:31:42.166497014Z" level=info msg="CreateContainer within sandbox \"2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 18:31:42.214500 containerd[1730]: time="2024-06-25T18:31:42.214349969Z" level=info msg="CreateContainer within sandbox \"2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4ae646cba7b390c67d5e1f427a95767037f67d575d26bd9b713cf8cf3f6ecd8a\"" Jun 25 18:31:42.215451 containerd[1730]: time="2024-06-25T18:31:42.215413092Z" level=info msg="StartContainer for \"4ae646cba7b390c67d5e1f427a95767037f67d575d26bd9b713cf8cf3f6ecd8a\"" Jun 25 18:31:42.253506 systemd[1]: Started cri-containerd-4ae646cba7b390c67d5e1f427a95767037f67d575d26bd9b713cf8cf3f6ecd8a.scope - libcontainer container 4ae646cba7b390c67d5e1f427a95767037f67d575d26bd9b713cf8cf3f6ecd8a. 
Jun 25 18:31:42.299010 containerd[1730]: time="2024-06-25T18:31:42.298888693Z" level=info msg="StartContainer for \"4ae646cba7b390c67d5e1f427a95767037f67d575d26bd9b713cf8cf3f6ecd8a\" returns successfully" Jun 25 18:31:42.324514 systemd-networkd[1347]: cali73882737eab: Link UP Jun 25 18:31:42.326734 systemd-networkd[1347]: cali73882737eab: Gained carrier Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.209 [INFO][4862] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.228 [INFO][4862] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0 coredns-7db6d8ff4d- kube-system 82ebf39f-e0ab-44c8-9f59-93bb792499e0 703 0 2024-06-25 18:31:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-a-5284b277fa coredns-7db6d8ff4d-kqz2s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali73882737eab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kqz2s" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-" Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.228 [INFO][4862] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kqz2s" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.267 [INFO][4893] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" HandleID="k8s-pod-network.c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.279 [INFO][4893] ipam_plugin.go 264: Auto assigning IP ContainerID="c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" HandleID="k8s-pod-network.c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004fe440), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-a-5284b277fa", "pod":"coredns-7db6d8ff4d-kqz2s", "timestamp":"2024-06-25 18:31:42.267246297 +0000 UTC"}, Hostname:"ci-4012.0.0-a-5284b277fa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.279 [INFO][4893] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.279 [INFO][4893] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.279 [INFO][4893] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-5284b277fa' Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.281 [INFO][4893] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.293 [INFO][4893] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.298 [INFO][4893] ipam.go 489: Trying affinity for 192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.302 [INFO][4893] ipam.go 155: Attempting to load block cidr=192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.305 [INFO][4893] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.305 [INFO][4893] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.307 [INFO][4893] ipam.go 1685: Creating new handle: k8s-pod-network.c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7 Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.311 [INFO][4893] ipam.go 1203: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.318 [INFO][4893] ipam.go 1216: Successfully claimed IPs: [192.168.80.132/26] block=192.168.80.128/26 handle="k8s-pod-network.c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.318 [INFO][4893] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.132/26] handle="k8s-pod-network.c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.318 [INFO][4893] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:31:42.340001 containerd[1730]: 2024-06-25 18:31:42.318 [INFO][4893] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.80.132/26] IPv6=[] ContainerID="c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" HandleID="k8s-pod-network.c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:42.343193 containerd[1730]: 2024-06-25 18:31:42.320 [INFO][4862] k8s.go 386: Populated endpoint ContainerID="c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kqz2s" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"82ebf39f-e0ab-44c8-9f59-93bb792499e0", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"", Pod:"coredns-7db6d8ff4d-kqz2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73882737eab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:42.343193 containerd[1730]: 2024-06-25 18:31:42.321 [INFO][4862] k8s.go 387: Calico CNI using IPs: [192.168.80.132/32] ContainerID="c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kqz2s" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:42.343193 containerd[1730]: 2024-06-25 18:31:42.321 [INFO][4862] dataplane_linux.go 68: Setting the host side veth name to cali73882737eab ContainerID="c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kqz2s" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:42.343193 containerd[1730]: 2024-06-25 18:31:42.323 [INFO][4862] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kqz2s" 
WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:42.343193 containerd[1730]: 2024-06-25 18:31:42.323 [INFO][4862] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kqz2s" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"82ebf39f-e0ab-44c8-9f59-93bb792499e0", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7", Pod:"coredns-7db6d8ff4d-kqz2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73882737eab", MAC:"1a:f3:41:e3:aa:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:42.343193 containerd[1730]: 2024-06-25 18:31:42.336 [INFO][4862] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kqz2s" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:42.379391 containerd[1730]: time="2024-06-25T18:31:42.377293922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:31:42.379391 containerd[1730]: time="2024-06-25T18:31:42.377428283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:42.379391 containerd[1730]: time="2024-06-25T18:31:42.377495363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:31:42.379391 containerd[1730]: time="2024-06-25T18:31:42.377510363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:42.415525 systemd[1]: Started cri-containerd-c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7.scope - libcontainer container c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7. Jun 25 18:31:42.463865 containerd[1730]: time="2024-06-25T18:31:42.463828811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kqz2s,Uid:82ebf39f-e0ab-44c8-9f59-93bb792499e0,Namespace:kube-system,Attempt:1,} returns sandbox id \"c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7\"" Jun 25 18:31:42.469857 containerd[1730]: time="2024-06-25T18:31:42.469760625Z" level=info msg="CreateContainer within sandbox \"c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:31:42.503165 containerd[1730]: time="2024-06-25T18:31:42.503122786Z" level=info msg="CreateContainer within sandbox \"c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e10d49086102c53c5837f3ea88bf923b70a6d41a641cf8d0f0b9dbbcdf6f4c61\"" Jun 25 18:31:42.504213 containerd[1730]: time="2024-06-25T18:31:42.504171308Z" level=info msg="StartContainer for \"e10d49086102c53c5837f3ea88bf923b70a6d41a641cf8d0f0b9dbbcdf6f4c61\"" Jun 25 18:31:42.538677 systemd[1]: Started cri-containerd-e10d49086102c53c5837f3ea88bf923b70a6d41a641cf8d0f0b9dbbcdf6f4c61.scope - libcontainer container e10d49086102c53c5837f3ea88bf923b70a6d41a641cf8d0f0b9dbbcdf6f4c61. Jun 25 18:31:42.582710 containerd[1730]: time="2024-06-25T18:31:42.582638858Z" level=info msg="StartContainer for \"e10d49086102c53c5837f3ea88bf923b70a6d41a641cf8d0f0b9dbbcdf6f4c61\" returns successfully" Jun 25 18:31:42.970482 systemd-networkd[1347]: calide9111e14b9: Gained IPv6LL Jun 25 18:31:42.971417 systemd-networkd[1347]: cali16abb49b0ce: Gained IPv6LL Jun 25 18:31:43.181844 kubelet[3240]: I0625 18:31:43.181188 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kqz2s" podStartSLOduration=33.181166302 podStartE2EDuration="33.181166302s" podCreationTimestamp="2024-06-25 18:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:31:43.149644306 +0000 UTC m=+49.304459751" watchObservedRunningTime="2024-06-25 18:31:43.181166302 +0000 UTC m=+49.335981747" Jun 25 18:31:43.185591 kubelet[3240]: I0625 18:31:43.184241 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d4dd6c659-jr6s7" podStartSLOduration=23.326676556 podStartE2EDuration="26.184221709s" podCreationTimestamp="2024-06-25 18:31:17 +0000 UTC" firstStartedPulling="2024-06-25 18:31:39.285633044 +0000 UTC m=+45.440448489" lastFinishedPulling="2024-06-25 18:31:42.143178197 +0000 UTC m=+48.297993642" observedRunningTime="2024-06-25 18:31:43.183477147 +0000 UTC m=+49.338292592" watchObservedRunningTime="2024-06-25 18:31:43.184221709 +0000 UTC m=+49.339037154" Jun 25 18:31:43.419464 systemd-networkd[1347]: cali73882737eab: Gained IPv6LL Jun 25 18:31:43.985057 containerd[1730]: time="2024-06-25T18:31:43.985011161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:43.987124 containerd[1730]: time="2024-06-25T18:31:43.987092246Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jun 25 18:31:43.990620 containerd[1730]: time="2024-06-25T18:31:43.990574774Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:43.995828 containerd[1730]: time="2024-06-25T18:31:43.995774387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:43.996500 containerd[1730]: time="2024-06-25T18:31:43.996467549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.852669229s" Jun 25 18:31:43.996690 containerd[1730]: time="2024-06-25T18:31:43.996600589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jun 25 18:31:43.999554 containerd[1730]: time="2024-06-25T18:31:43.999379996Z" level=info msg="CreateContainer within sandbox \"337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:31:44.043558 containerd[1730]: time="2024-06-25T18:31:44.043477302Z" level=info msg="CreateContainer within sandbox \"337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"419b2a1489e6f73883c6d3f97590a2d12c5c416759a9fa38be3df15a48c4f2fe\"" Jun 25 18:31:44.044679 containerd[1730]: time="2024-06-25T18:31:44.043856183Z" level=info msg="StartContainer for \"419b2a1489e6f73883c6d3f97590a2d12c5c416759a9fa38be3df15a48c4f2fe\"" Jun 25 18:31:44.071477 systemd[1]: Started cri-containerd-419b2a1489e6f73883c6d3f97590a2d12c5c416759a9fa38be3df15a48c4f2fe.scope - libcontainer container 419b2a1489e6f73883c6d3f97590a2d12c5c416759a9fa38be3df15a48c4f2fe. Jun 25 18:31:44.106664 containerd[1730]: time="2024-06-25T18:31:44.106589814Z" level=info msg="StartContainer for \"419b2a1489e6f73883c6d3f97590a2d12c5c416759a9fa38be3df15a48c4f2fe\" returns successfully" Jun 25 18:31:44.108400 containerd[1730]: time="2024-06-25T18:31:44.108280418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:31:44.808919 kubelet[3240]: I0625 18:31:44.808050 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:31:44.823961 systemd[1]: run-containerd-runc-k8s.io-d207c3c2803ea47ffc6ddac8d40f9e5a846a00bc27a805017353659d7d6e39e0-runc.FxiC9v.mount: Deactivated successfully. 
Jun 25 18:31:45.281370 containerd[1730]: time="2024-06-25T18:31:45.280737127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:45.283396 containerd[1730]: time="2024-06-25T18:31:45.283352013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jun 25 18:31:45.287694 containerd[1730]: time="2024-06-25T18:31:45.287639103Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:45.291839 containerd[1730]: time="2024-06-25T18:31:45.291726713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:45.292558 containerd[1730]: time="2024-06-25T18:31:45.292431955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.183949936s" Jun 25 18:31:45.292558 containerd[1730]: time="2024-06-25T18:31:45.292468715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jun 25 18:31:45.295557 containerd[1730]: time="2024-06-25T18:31:45.295532362Z" level=info msg="CreateContainer within sandbox \"337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:31:45.363510 containerd[1730]: time="2024-06-25T18:31:45.363462646Z" level=info msg="CreateContainer within sandbox \"337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5b9930d166c76c6016ba7c0ecf450cebabc0f8c5a0b49aee22ab6a95fb4e8b93\"" Jun 25 18:31:45.364469 containerd[1730]: time="2024-06-25T18:31:45.364346128Z" level=info msg="StartContainer for \"5b9930d166c76c6016ba7c0ecf450cebabc0f8c5a0b49aee22ab6a95fb4e8b93\"" Jun 25 18:31:45.397465 systemd[1]: Started cri-containerd-5b9930d166c76c6016ba7c0ecf450cebabc0f8c5a0b49aee22ab6a95fb4e8b93.scope - libcontainer container 5b9930d166c76c6016ba7c0ecf450cebabc0f8c5a0b49aee22ab6a95fb4e8b93. 
Jun 25 18:31:45.431291 containerd[1730]: time="2024-06-25T18:31:45.431222770Z" level=info msg="StartContainer for \"5b9930d166c76c6016ba7c0ecf450cebabc0f8c5a0b49aee22ab6a95fb4e8b93\" returns successfully" Jun 25 18:31:46.045344 kubelet[3240]: I0625 18:31:46.045266 3240 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:31:46.045344 kubelet[3240]: I0625 18:31:46.045299 3240 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:31:46.164259 kubelet[3240]: I0625 18:31:46.164188 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9fsbs" podStartSLOduration=25.352053221 podStartE2EDuration="29.164169978s" podCreationTimestamp="2024-06-25 18:31:17 +0000 UTC" firstStartedPulling="2024-06-25 18:31:41.481423801 +0000 UTC m=+47.636239206" lastFinishedPulling="2024-06-25 18:31:45.293540518 +0000 UTC m=+51.448355963" observedRunningTime="2024-06-25 18:31:46.164054138 +0000 UTC m=+52.318869583" watchObservedRunningTime="2024-06-25 18:31:46.164169978 +0000 UTC m=+52.318985423" Jun 25 18:31:46.972372 kubelet[3240]: I0625 18:31:46.972065 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:31:48.170976 systemd-networkd[1347]: vxlan.calico: Link UP Jun 25 18:31:48.170986 systemd-networkd[1347]: vxlan.calico: Gained carrier Jun 25 18:31:49.755068 systemd-networkd[1347]: vxlan.calico: Gained IPv6LL Jun 25 18:31:53.983642 containerd[1730]: time="2024-06-25T18:31:53.983540479Z" level=info msg="StopPodSandbox for \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\"" Jun 25 18:31:54.049940 containerd[1730]: 2024-06-25 18:31:54.016 [WARNING][5405] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8cd02293-f883-498a-bd4f-78e9c942abd5", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106", Pod:"coredns-7db6d8ff4d-9n2sl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16abb49b0ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:54.049940 containerd[1730]: 2024-06-25 18:31:54.017 [INFO][5405] k8s.go 608: Cleaning up netns ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:54.049940 containerd[1730]: 2024-06-25 18:31:54.017 [INFO][5405] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" iface="eth0" netns="" Jun 25 18:31:54.049940 containerd[1730]: 2024-06-25 18:31:54.017 [INFO][5405] k8s.go 615: Releasing IP address(es) ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:54.049940 containerd[1730]: 2024-06-25 18:31:54.017 [INFO][5405] utils.go 188: Calico CNI releasing IP address ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:54.049940 containerd[1730]: 2024-06-25 18:31:54.038 [INFO][5411] ipam_plugin.go 411: Releasing address using handleID ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" HandleID="k8s-pod-network.9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:54.049940 containerd[1730]: 2024-06-25 18:31:54.038 [INFO][5411] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:54.049940 containerd[1730]: 2024-06-25 18:31:54.038 [INFO][5411] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:31:54.049940 containerd[1730]: 2024-06-25 18:31:54.046 [WARNING][5411] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" HandleID="k8s-pod-network.9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:54.049940 containerd[1730]: 2024-06-25 18:31:54.046 [INFO][5411] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" HandleID="k8s-pod-network.9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:54.049940 containerd[1730]: 2024-06-25 18:31:54.047 [INFO][5411] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:31:54.049940 containerd[1730]: 2024-06-25 18:31:54.048 [INFO][5405] k8s.go 621: Teardown processing complete. ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:54.049940 containerd[1730]: time="2024-06-25T18:31:54.049824741Z" level=info msg="TearDown network for sandbox \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\" successfully" Jun 25 18:31:54.049940 containerd[1730]: time="2024-06-25T18:31:54.049847941Z" level=info msg="StopPodSandbox for \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\" returns successfully" Jun 25 18:31:54.050838 containerd[1730]: time="2024-06-25T18:31:54.050245302Z" level=info msg="RemovePodSandbox for \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\"" Jun 25 18:31:54.050838 containerd[1730]: time="2024-06-25T18:31:54.050285982Z" level=info msg="Forcibly stopping sandbox \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\"" Jun 25 18:31:54.113338 containerd[1730]: 2024-06-25 18:31:54.083 [WARNING][5430] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8cd02293-f883-498a-bd4f-78e9c942abd5", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"37c4db416c1f9b3a86ddbaabe2b45282e4bf326b7e445a13328842afd4731106", Pod:"coredns-7db6d8ff4d-9n2sl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16abb49b0ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:54.113338 containerd[1730]: 2024-06-25 18:31:54.083 [INFO][5430] k8s.go 608: Cleaning up netns ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:54.113338 containerd[1730]: 2024-06-25 18:31:54.083 [INFO][5430] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" iface="eth0" netns="" Jun 25 18:31:54.113338 containerd[1730]: 2024-06-25 18:31:54.084 [INFO][5430] k8s.go 615: Releasing IP address(es) ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:54.113338 containerd[1730]: 2024-06-25 18:31:54.084 [INFO][5430] utils.go 188: Calico CNI releasing IP address ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:54.113338 containerd[1730]: 2024-06-25 18:31:54.102 [INFO][5436] ipam_plugin.go 411: Releasing address using handleID ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" HandleID="k8s-pod-network.9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:54.113338 containerd[1730]: 2024-06-25 18:31:54.102 [INFO][5436] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:54.113338 containerd[1730]: 2024-06-25 18:31:54.102 [INFO][5436] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:31:54.113338 containerd[1730]: 2024-06-25 18:31:54.109 [WARNING][5436] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" HandleID="k8s-pod-network.9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:54.113338 containerd[1730]: 2024-06-25 18:31:54.109 [INFO][5436] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" HandleID="k8s-pod-network.9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--9n2sl-eth0" Jun 25 18:31:54.113338 containerd[1730]: 2024-06-25 18:31:54.111 [INFO][5436] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:31:54.113338 containerd[1730]: 2024-06-25 18:31:54.112 [INFO][5430] k8s.go 621: Teardown processing complete. ContainerID="9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b" Jun 25 18:31:54.113820 containerd[1730]: time="2024-06-25T18:31:54.113358037Z" level=info msg="TearDown network for sandbox \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\" successfully" Jun 25 18:31:54.122875 containerd[1730]: time="2024-06-25T18:31:54.122831857Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:31:54.122975 containerd[1730]: time="2024-06-25T18:31:54.122951057Z" level=info msg="RemovePodSandbox \"9844225f1d2cf458749d6d0470efdf7fedfd1b0bc13c70e88184ef71b229927b\" returns successfully" Jun 25 18:31:54.123491 containerd[1730]: time="2024-06-25T18:31:54.123465698Z" level=info msg="StopPodSandbox for \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\"" Jun 25 18:31:54.192230 containerd[1730]: 2024-06-25 18:31:54.163 [WARNING][5454] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7c63612-a464-4ae6-b2ed-1cf040476205", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba", Pod:"csi-node-driver-9fsbs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.80.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calide9111e14b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:54.192230 containerd[1730]: 2024-06-25 18:31:54.163 [INFO][5454] k8s.go 608: Cleaning up netns ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:54.192230 containerd[1730]: 2024-06-25 18:31:54.163 [INFO][5454] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" iface="eth0" netns="" Jun 25 18:31:54.192230 containerd[1730]: 2024-06-25 18:31:54.163 [INFO][5454] k8s.go 615: Releasing IP address(es) ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:54.192230 containerd[1730]: 2024-06-25 18:31:54.163 [INFO][5454] utils.go 188: Calico CNI releasing IP address ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:54.192230 containerd[1730]: 2024-06-25 18:31:54.180 [INFO][5460] ipam_plugin.go 411: Releasing address using handleID ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" HandleID="k8s-pod-network.bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Workload="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:54.192230 containerd[1730]: 2024-06-25 18:31:54.180 [INFO][5460] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:54.192230 containerd[1730]: 2024-06-25 18:31:54.180 [INFO][5460] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:31:54.192230 containerd[1730]: 2024-06-25 18:31:54.188 [WARNING][5460] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" HandleID="k8s-pod-network.bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Workload="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:54.192230 containerd[1730]: 2024-06-25 18:31:54.188 [INFO][5460] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" HandleID="k8s-pod-network.bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Workload="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:54.192230 containerd[1730]: 2024-06-25 18:31:54.189 [INFO][5460] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:31:54.192230 containerd[1730]: 2024-06-25 18:31:54.191 [INFO][5454] k8s.go 621: Teardown processing complete. ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:54.192658 containerd[1730]: time="2024-06-25T18:31:54.192363846Z" level=info msg="TearDown network for sandbox \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\" successfully" Jun 25 18:31:54.192658 containerd[1730]: time="2024-06-25T18:31:54.192389246Z" level=info msg="StopPodSandbox for \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\" returns successfully" Jun 25 18:31:54.193155 containerd[1730]: time="2024-06-25T18:31:54.192870407Z" level=info msg="RemovePodSandbox for \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\"" Jun 25 18:31:54.193155 containerd[1730]: time="2024-06-25T18:31:54.192903047Z" level=info msg="Forcibly stopping sandbox \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\"" Jun 25 18:31:54.260696 containerd[1730]: 2024-06-25 18:31:54.232 [WARNING][5478] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7c63612-a464-4ae6-b2ed-1cf040476205", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"337f059027244667a00cca5930da5b9b3816c4cabf2e81590431cbe6052e9fba", Pod:"csi-node-driver-9fsbs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.80.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calide9111e14b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:54.260696 containerd[1730]: 2024-06-25 18:31:54.232 [INFO][5478] k8s.go 608: Cleaning up netns ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:54.260696 containerd[1730]: 2024-06-25 18:31:54.232 [INFO][5478] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" iface="eth0" netns="" Jun 25 18:31:54.260696 containerd[1730]: 2024-06-25 18:31:54.232 [INFO][5478] k8s.go 615: Releasing IP address(es) ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:54.260696 containerd[1730]: 2024-06-25 18:31:54.232 [INFO][5478] utils.go 188: Calico CNI releasing IP address ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:54.260696 containerd[1730]: 2024-06-25 18:31:54.249 [INFO][5484] ipam_plugin.go 411: Releasing address using handleID ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" HandleID="k8s-pod-network.bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Workload="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:54.260696 containerd[1730]: 2024-06-25 18:31:54.249 [INFO][5484] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:54.260696 containerd[1730]: 2024-06-25 18:31:54.249 [INFO][5484] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:31:54.260696 containerd[1730]: 2024-06-25 18:31:54.257 [WARNING][5484] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" HandleID="k8s-pod-network.bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Workload="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:54.260696 containerd[1730]: 2024-06-25 18:31:54.257 [INFO][5484] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" HandleID="k8s-pod-network.bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Workload="ci--4012.0.0--a--5284b277fa-k8s-csi--node--driver--9fsbs-eth0" Jun 25 18:31:54.260696 containerd[1730]: 2024-06-25 18:31:54.258 [INFO][5484] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:31:54.260696 containerd[1730]: 2024-06-25 18:31:54.259 [INFO][5478] k8s.go 621: Teardown processing complete. ContainerID="bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63" Jun 25 18:31:54.260696 containerd[1730]: time="2024-06-25T18:31:54.260597432Z" level=info msg="TearDown network for sandbox \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\" successfully" Jun 25 18:31:54.267793 containerd[1730]: time="2024-06-25T18:31:54.267750287Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:31:54.268669 containerd[1730]: time="2024-06-25T18:31:54.267807247Z" level=info msg="RemovePodSandbox \"bb6ca2a405bd89a4f660dbfdfe6a0830669e92c4babafb9beaa46bf99ab0bd63\" returns successfully" Jun 25 18:31:54.268669 containerd[1730]: time="2024-06-25T18:31:54.268205688Z" level=info msg="StopPodSandbox for \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\"" Jun 25 18:31:54.326490 containerd[1730]: 2024-06-25 18:31:54.297 [WARNING][5502] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0", GenerateName:"calico-kube-controllers-7d4dd6c659-", Namespace:"calico-system", SelfLink:"", UID:"6cfbff32-c1d6-4bd6-b422-e4dce0d07843", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4dd6c659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d", Pod:"calico-kube-controllers-7d4dd6c659-jr6s7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7688ec77b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:54.326490 containerd[1730]: 2024-06-25 18:31:54.297 [INFO][5502] k8s.go 608: Cleaning up netns ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:54.326490 containerd[1730]: 2024-06-25 18:31:54.298 [INFO][5502] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" iface="eth0" netns="" Jun 25 18:31:54.326490 containerd[1730]: 2024-06-25 18:31:54.298 [INFO][5502] k8s.go 615: Releasing IP address(es) ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:54.326490 containerd[1730]: 2024-06-25 18:31:54.298 [INFO][5502] utils.go 188: Calico CNI releasing IP address ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:54.326490 containerd[1730]: 2024-06-25 18:31:54.314 [INFO][5508] ipam_plugin.go 411: Releasing address using handleID ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" HandleID="k8s-pod-network.e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:54.326490 containerd[1730]: 2024-06-25 18:31:54.315 [INFO][5508] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:54.326490 containerd[1730]: 2024-06-25 18:31:54.315 [INFO][5508] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:31:54.326490 containerd[1730]: 2024-06-25 18:31:54.322 [WARNING][5508] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" HandleID="k8s-pod-network.e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:54.326490 containerd[1730]: 2024-06-25 18:31:54.322 [INFO][5508] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" HandleID="k8s-pod-network.e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:54.326490 containerd[1730]: 2024-06-25 18:31:54.323 [INFO][5508] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:31:54.326490 containerd[1730]: 2024-06-25 18:31:54.325 [INFO][5502] k8s.go 621: Teardown processing complete. ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:54.326490 containerd[1730]: time="2024-06-25T18:31:54.326407972Z" level=info msg="TearDown network for sandbox \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\" successfully" Jun 25 18:31:54.326490 containerd[1730]: time="2024-06-25T18:31:54.326433092Z" level=info msg="StopPodSandbox for \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\" returns successfully" Jun 25 18:31:54.327482 containerd[1730]: time="2024-06-25T18:31:54.327172974Z" level=info msg="RemovePodSandbox for \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\"" Jun 25 18:31:54.327482 containerd[1730]: time="2024-06-25T18:31:54.327202534Z" level=info msg="Forcibly stopping sandbox \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\"" Jun 25 18:31:54.386721 containerd[1730]: 2024-06-25 18:31:54.358 [WARNING][5526] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0", GenerateName:"calico-kube-controllers-7d4dd6c659-", Namespace:"calico-system", SelfLink:"", UID:"6cfbff32-c1d6-4bd6-b422-e4dce0d07843", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4dd6c659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"2136262e29fcc13d06670227e5cb21d805c33358131ef6a10c5617208d0f4a2d", Pod:"calico-kube-controllers-7d4dd6c659-jr6s7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7688ec77b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:54.386721 containerd[1730]: 2024-06-25 18:31:54.358 [INFO][5526] k8s.go 608: Cleaning up netns ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:54.386721 containerd[1730]: 2024-06-25 18:31:54.358 [INFO][5526] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" iface="eth0" netns="" Jun 25 18:31:54.386721 containerd[1730]: 2024-06-25 18:31:54.358 [INFO][5526] k8s.go 615: Releasing IP address(es) ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:54.386721 containerd[1730]: 2024-06-25 18:31:54.358 [INFO][5526] utils.go 188: Calico CNI releasing IP address ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:54.386721 containerd[1730]: 2024-06-25 18:31:54.375 [INFO][5532] ipam_plugin.go 411: Releasing address using handleID ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" HandleID="k8s-pod-network.e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:54.386721 containerd[1730]: 2024-06-25 18:31:54.375 [INFO][5532] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:54.386721 containerd[1730]: 2024-06-25 18:31:54.375 [INFO][5532] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:31:54.386721 containerd[1730]: 2024-06-25 18:31:54.383 [WARNING][5532] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" HandleID="k8s-pod-network.e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:54.386721 containerd[1730]: 2024-06-25 18:31:54.383 [INFO][5532] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" HandleID="k8s-pod-network.e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--kube--controllers--7d4dd6c659--jr6s7-eth0" Jun 25 18:31:54.386721 containerd[1730]: 2024-06-25 18:31:54.384 [INFO][5532] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:31:54.386721 containerd[1730]: 2024-06-25 18:31:54.385 [INFO][5526] k8s.go 621: Teardown processing complete. ContainerID="e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92" Jun 25 18:31:54.387708 containerd[1730]: time="2024-06-25T18:31:54.387183382Z" level=info msg="TearDown network for sandbox \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\" successfully" Jun 25 18:31:54.398421 containerd[1730]: time="2024-06-25T18:31:54.398293606Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:31:54.398421 containerd[1730]: time="2024-06-25T18:31:54.398375566Z" level=info msg="RemovePodSandbox \"e4c81b78f554021ae4bacc0c322fbf095a1c14a6e71901adfb628ab0921aba92\" returns successfully" Jun 25 18:31:54.399225 containerd[1730]: time="2024-06-25T18:31:54.399016687Z" level=info msg="StopPodSandbox for \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\"" Jun 25 18:31:54.462482 containerd[1730]: 2024-06-25 18:31:54.433 [WARNING][5551] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"82ebf39f-e0ab-44c8-9f59-93bb792499e0", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7", Pod:"coredns-7db6d8ff4d-kqz2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73882737eab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:54.462482 containerd[1730]: 2024-06-25 18:31:54.433 [INFO][5551] k8s.go 608: Cleaning up netns ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:54.462482 containerd[1730]: 2024-06-25 18:31:54.433 [INFO][5551] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" iface="eth0" netns="" Jun 25 18:31:54.462482 containerd[1730]: 2024-06-25 18:31:54.433 [INFO][5551] k8s.go 615: Releasing IP address(es) ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:54.462482 containerd[1730]: 2024-06-25 18:31:54.433 [INFO][5551] utils.go 188: Calico CNI releasing IP address ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:54.462482 containerd[1730]: 2024-06-25 18:31:54.451 [INFO][5557] ipam_plugin.go 411: Releasing address using handleID ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" HandleID="k8s-pod-network.0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:54.462482 containerd[1730]: 2024-06-25 18:31:54.451 [INFO][5557] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:54.462482 containerd[1730]: 2024-06-25 18:31:54.451 [INFO][5557] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:31:54.462482 containerd[1730]: 2024-06-25 18:31:54.458 [WARNING][5557] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" HandleID="k8s-pod-network.0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:54.462482 containerd[1730]: 2024-06-25 18:31:54.458 [INFO][5557] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" HandleID="k8s-pod-network.0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:54.462482 containerd[1730]: 2024-06-25 18:31:54.460 [INFO][5557] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:31:54.462482 containerd[1730]: 2024-06-25 18:31:54.461 [INFO][5551] k8s.go 621: Teardown processing complete. ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:54.463542 containerd[1730]: time="2024-06-25T18:31:54.462630823Z" level=info msg="TearDown network for sandbox \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\" successfully" Jun 25 18:31:54.463542 containerd[1730]: time="2024-06-25T18:31:54.462696104Z" level=info msg="StopPodSandbox for \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\" returns successfully" Jun 25 18:31:54.463542 containerd[1730]: time="2024-06-25T18:31:54.463138385Z" level=info msg="RemovePodSandbox for \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\"" Jun 25 18:31:54.463542 containerd[1730]: time="2024-06-25T18:31:54.463167185Z" level=info msg="Forcibly stopping sandbox \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\"" Jun 25 18:31:54.523823 containerd[1730]: 2024-06-25 18:31:54.494 [WARNING][5575] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"82ebf39f-e0ab-44c8-9f59-93bb792499e0", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"c2c15b7beaa44100751fa47721816e0fd03efe5711ecd1c29b6e05de2cab6ca7", Pod:"coredns-7db6d8ff4d-kqz2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73882737eab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:31:54.523823 containerd[1730]: 2024-06-25 18:31:54.494 [INFO][5575] k8s.go 608: Cleaning up netns ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:54.523823 containerd[1730]: 2024-06-25 18:31:54.494 [INFO][5575] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" iface="eth0" netns="" Jun 25 18:31:54.523823 containerd[1730]: 2024-06-25 18:31:54.494 [INFO][5575] k8s.go 615: Releasing IP address(es) ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:54.523823 containerd[1730]: 2024-06-25 18:31:54.494 [INFO][5575] utils.go 188: Calico CNI releasing IP address ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:54.523823 containerd[1730]: 2024-06-25 18:31:54.511 [INFO][5581] ipam_plugin.go 411: Releasing address using handleID ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" HandleID="k8s-pod-network.0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:54.523823 containerd[1730]: 2024-06-25 18:31:54.512 [INFO][5581] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:31:54.523823 containerd[1730]: 2024-06-25 18:31:54.512 [INFO][5581] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:31:54.523823 containerd[1730]: 2024-06-25 18:31:54.519 [WARNING][5581] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" HandleID="k8s-pod-network.0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:54.523823 containerd[1730]: 2024-06-25 18:31:54.519 [INFO][5581] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" HandleID="k8s-pod-network.0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Workload="ci--4012.0.0--a--5284b277fa-k8s-coredns--7db6d8ff4d--kqz2s-eth0" Jun 25 18:31:54.523823 containerd[1730]: 2024-06-25 18:31:54.521 [INFO][5581] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:31:54.523823 containerd[1730]: 2024-06-25 18:31:54.522 [INFO][5575] k8s.go 621: Teardown processing complete. ContainerID="0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372" Jun 25 18:31:54.523823 containerd[1730]: time="2024-06-25T18:31:54.523717794Z" level=info msg="TearDown network for sandbox \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\" successfully" Jun 25 18:31:54.533965 containerd[1730]: time="2024-06-25T18:31:54.533583735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:31:54.533965 containerd[1730]: time="2024-06-25T18:31:54.533650175Z" level=info msg="RemovePodSandbox \"0aa79c44353666382001e0f96187041914c95c0d421e4d485b8b35a319b4b372\" returns successfully" Jun 25 18:32:16.858255 kubelet[3240]: I0625 18:32:16.856577 3240 topology_manager.go:215] "Topology Admit Handler" podUID="ef5f2378-df88-451d-a380-439421dccb79" podNamespace="calico-apiserver" podName="calico-apiserver-86c749c987-pchn9" Jun 25 18:32:16.864912 systemd[1]: Created slice kubepods-besteffort-podef5f2378_df88_451d_a380_439421dccb79.slice - libcontainer container kubepods-besteffort-podef5f2378_df88_451d_a380_439421dccb79.slice. 
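The teardown entries above (StopPodSandbox, then Calico releasing the IP under the host-wide IPAM lock, then "Asked to release address but it doesn't exist. Ignoring") show why repeated RemovePodSandbox calls stay harmless: the release is idempotent. The Go sketch below is a toy illustration of that pattern only; the ipamStore type, its fields, and the in-memory map are invented for illustration and are not Calico's data model.

package main

import (
    "fmt"
    "sync"
)

// ipamStore is a toy stand-in for the idempotent release seen above: take the
// host-wide lock, and if the handle has no address recorded, warn and return
// success so a repeated StopPodSandbox / RemovePodSandbox stays harmless.
type ipamStore struct {
    mu       sync.Mutex
    byHandle map[string]string // handle ID -> assigned IP
}

func (s *ipamStore) releaseByHandle(handleID string) {
    s.mu.Lock()         // "Acquired host-wide IPAM lock."
    defer s.mu.Unlock() // "Released host-wide IPAM lock."
    ip, ok := s.byHandle[handleID]
    if !ok {
        fmt.Printf("WARNING: asked to release address for %q but it doesn't exist; ignoring\n", handleID)
        return
    }
    delete(s.byHandle, handleID)
    fmt.Printf("released %s for %q\n", ip, handleID)
}

func main() {
    s := &ipamStore{byHandle: map[string]string{}}
    // Second (or later) teardown of an already-released sandbox: no error.
    s.releaseByHandle("k8s-pod-network.example")
}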
Jun 25 18:32:16.867595 kubelet[3240]: W0625 18:32:16.867102 3240 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4012.0.0-a-5284b277fa" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4012.0.0-a-5284b277fa' and this object Jun 25 18:32:16.867595 kubelet[3240]: E0625 18:32:16.867377 3240 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4012.0.0-a-5284b277fa" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4012.0.0-a-5284b277fa' and this object Jun 25 18:32:16.867595 kubelet[3240]: W0625 18:32:16.866967 3240 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4012.0.0-a-5284b277fa" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4012.0.0-a-5284b277fa' and this object Jun 25 18:32:16.867595 kubelet[3240]: E0625 18:32:16.867406 3240 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4012.0.0-a-5284b277fa" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4012.0.0-a-5284b277fa' and this object Jun 25 18:32:16.873953 kubelet[3240]: I0625 18:32:16.872752 3240 topology_manager.go:215] "Topology Admit Handler" podUID="36e5ef8d-3d05-49e2-b01b-af01a3c201a1" podNamespace="calico-apiserver" podName="calico-apiserver-86c749c987-xksn5" Jun 25 18:32:16.881029 systemd[1]: Created slice kubepods-besteffort-pod36e5ef8d_3d05_49e2_b01b_af01a3c201a1.slice - libcontainer container kubepods-besteffort-pod36e5ef8d_3d05_49e2_b01b_af01a3c201a1.slice. 
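The reflector warnings above come from the Kubernetes node authorizer: the kubelet may only read a ConfigMap or Secret once a pod that mounts it is bound to the node, so the first list attempts fail with Forbidden and succeed once the binding is recorded. The client-go snippet below is only a hypothetical illustration of how such an error surfaces to a caller (the namespace and ConfigMap name are taken from the log; running it requires in-cluster credentials):

package main

import (
    "context"
    "fmt"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    cfg, err := rest.InClusterConfig() // assumes the snippet runs inside the cluster
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // List ConfigMaps in calico-apiserver, as the kubelet's reflector does above;
    // with node credentials and no pod yet bound to the node, the API server
    // answers 403 Forbidden, which IsForbidden detects.
    _, err = client.CoreV1().ConfigMaps("calico-apiserver").
        List(context.TODO(), metav1.ListOptions{FieldSelector: "metadata.name=kube-root-ca.crt"})
    if apierrors.IsForbidden(err) {
        fmt.Println("forbidden (expected until the pod is bound):", err)
    }
}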
Jun 25 18:32:16.894515 kubelet[3240]: I0625 18:32:16.894484 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ef5f2378-df88-451d-a380-439421dccb79-calico-apiserver-certs\") pod \"calico-apiserver-86c749c987-pchn9\" (UID: \"ef5f2378-df88-451d-a380-439421dccb79\") " pod="calico-apiserver/calico-apiserver-86c749c987-pchn9" Jun 25 18:32:16.894710 kubelet[3240]: I0625 18:32:16.894697 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd7rf\" (UniqueName: \"kubernetes.io/projected/ef5f2378-df88-451d-a380-439421dccb79-kube-api-access-fd7rf\") pod \"calico-apiserver-86c749c987-pchn9\" (UID: \"ef5f2378-df88-451d-a380-439421dccb79\") " pod="calico-apiserver/calico-apiserver-86c749c987-pchn9" Jun 25 18:32:16.995975 kubelet[3240]: I0625 18:32:16.995937 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxppg\" (UniqueName: \"kubernetes.io/projected/36e5ef8d-3d05-49e2-b01b-af01a3c201a1-kube-api-access-zxppg\") pod \"calico-apiserver-86c749c987-xksn5\" (UID: \"36e5ef8d-3d05-49e2-b01b-af01a3c201a1\") " pod="calico-apiserver/calico-apiserver-86c749c987-xksn5" Jun 25 18:32:16.996249 kubelet[3240]: I0625 18:32:16.996192 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/36e5ef8d-3d05-49e2-b01b-af01a3c201a1-calico-apiserver-certs\") pod \"calico-apiserver-86c749c987-xksn5\" (UID: \"36e5ef8d-3d05-49e2-b01b-af01a3c201a1\") " pod="calico-apiserver/calico-apiserver-86c749c987-xksn5" Jun 25 18:32:18.070880 containerd[1730]: time="2024-06-25T18:32:18.070812751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c749c987-pchn9,Uid:ef5f2378-df88-451d-a380-439421dccb79,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:32:18.085249 containerd[1730]: time="2024-06-25T18:32:18.084878022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c749c987-xksn5,Uid:36e5ef8d-3d05-49e2-b01b-af01a3c201a1,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:32:18.245841 systemd-networkd[1347]: cali9f19114dd8d: Link UP Jun 25 18:32:18.247039 systemd-networkd[1347]: cali9f19114dd8d: Gained carrier Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.151 [INFO][5668] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-eth0 calico-apiserver-86c749c987- calico-apiserver ef5f2378-df88-451d-a380-439421dccb79 869 0 2024-06-25 18:32:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86c749c987 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-a-5284b277fa calico-apiserver-86c749c987-pchn9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9f19114dd8d [] []}} ContainerID="cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-pchn9" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-" Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.151 [INFO][5668] k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-pchn9" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-eth0" Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.190 [INFO][5690] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" HandleID="k8s-pod-network.cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-eth0" Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.209 [INFO][5690] ipam_plugin.go 264: Auto assigning IP ContainerID="cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" HandleID="k8s-pod-network.cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000261e10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-a-5284b277fa", "pod":"calico-apiserver-86c749c987-pchn9", "timestamp":"2024-06-25 18:32:18.190030693 +0000 UTC"}, Hostname:"ci-4012.0.0-a-5284b277fa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.213 [INFO][5690] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.213 [INFO][5690] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.213 [INFO][5690] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-5284b277fa' Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.216 [INFO][5690] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.221 [INFO][5690] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.225 [INFO][5690] ipam.go 489: Trying affinity for 192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.228 [INFO][5690] ipam.go 155: Attempting to load block cidr=192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.230 [INFO][5690] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.230 [INFO][5690] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.232 [INFO][5690] ipam.go 1685: Creating new handle: k8s-pod-network.cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592 Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.235 [INFO][5690] ipam.go 1203: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.240 [INFO][5690] ipam.go 1216: Successfully claimed IPs: [192.168.80.133/26] block=192.168.80.128/26 handle="k8s-pod-network.cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.240 [INFO][5690] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.133/26] handle="k8s-pod-network.cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.240 [INFO][5690] ipam_plugin.go 373: Released host-wide IPAM lock. 
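The ipam.go entries above walk through Calico's block-based assignment: take the host-wide lock, confirm the host's affinity to the 192.168.80.128/26 block, load the block, claim the first free address (192.168.80.133 here), write the block back, and release the lock. As a rough sketch of that idea only, and not Calico's implementation, a minimal first-free-address allocator might look like this (type, field, and handle names are invented):

package main

import (
    "fmt"
    "net"
    "sync"
)

// blockAllocator is a toy stand-in for a per-host IPAM block such as
// 192.168.80.128/26: pick the first unclaimed address while holding a
// host-wide lock, mirroring acquire lock -> load block -> assign ->
// write block -> release lock in the log above.
type blockAllocator struct {
    mu      sync.Mutex        // the "host-wide IPAM lock"
    cidr    *net.IPNet        // the block this host has affinity for
    claimed map[string]string // ip -> handle ID
}

func (b *blockAllocator) assign(handle string) (net.IP, error) {
    b.mu.Lock()
    defer b.mu.Unlock()
    // Start one past the network address; real IPAM also honours reservations
    // and persists the block, both omitted here.
    for ip := nextIP(b.cidr.IP.Mask(b.cidr.Mask)); b.cidr.Contains(ip); ip = nextIP(ip) {
        if _, used := b.claimed[ip.String()]; !used {
            b.claimed[ip.String()] = handle
            return ip, nil
        }
    }
    return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

// nextIP returns a copy of ip incremented by one.
func nextIP(ip net.IP) net.IP {
    out := make(net.IP, len(ip))
    copy(out, ip)
    for i := len(out) - 1; i >= 0; i-- {
        out[i]++
        if out[i] != 0 {
            break
        }
    }
    return out
}

func main() {
    _, cidr, _ := net.ParseCIDR("192.168.80.128/26")
    alloc := &blockAllocator{cidr: cidr, claimed: map[string]string{
        "192.168.80.129": "older-handle", // pretend earlier pods already hold these
        "192.168.80.130": "older-handle",
        "192.168.80.131": "older-handle",
        "192.168.80.132": "older-handle",
    }}
    ip, err := alloc.assign("example-handle")
    fmt.Println(ip, err) // 192.168.80.133 <nil>, matching the claim in the log
}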
Jun 25 18:32:18.260635 containerd[1730]: 2024-06-25 18:32:18.240 [INFO][5690] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.80.133/26] IPv6=[] ContainerID="cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" HandleID="k8s-pod-network.cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-eth0" Jun 25 18:32:18.261151 containerd[1730]: 2024-06-25 18:32:18.242 [INFO][5668] k8s.go 386: Populated endpoint ContainerID="cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-pchn9" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-eth0", GenerateName:"calico-apiserver-86c749c987-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef5f2378-df88-451d-a380-439421dccb79", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c749c987", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"", Pod:"calico-apiserver-86c749c987-pchn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f19114dd8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:32:18.261151 containerd[1730]: 2024-06-25 18:32:18.243 [INFO][5668] k8s.go 387: Calico CNI using IPs: [192.168.80.133/32] ContainerID="cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-pchn9" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-eth0" Jun 25 18:32:18.261151 containerd[1730]: 2024-06-25 18:32:18.243 [INFO][5668] dataplane_linux.go 68: Setting the host side veth name to cali9f19114dd8d ContainerID="cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-pchn9" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-eth0" Jun 25 18:32:18.261151 containerd[1730]: 2024-06-25 18:32:18.246 [INFO][5668] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-pchn9" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-eth0" Jun 25 18:32:18.261151 containerd[1730]: 2024-06-25 18:32:18.247 [INFO][5668] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-pchn9" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-eth0", GenerateName:"calico-apiserver-86c749c987-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef5f2378-df88-451d-a380-439421dccb79", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c749c987", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592", Pod:"calico-apiserver-86c749c987-pchn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f19114dd8d", MAC:"ba:97:26:10:87:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:32:18.261151 containerd[1730]: 2024-06-25 18:32:18.255 [INFO][5668] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-pchn9" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--pchn9-eth0" Jun 25 18:32:18.305351 containerd[1730]: time="2024-06-25T18:32:18.302105098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:32:18.305351 containerd[1730]: time="2024-06-25T18:32:18.302170859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:32:18.305351 containerd[1730]: time="2024-06-25T18:32:18.302189619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:32:18.305351 containerd[1730]: time="2024-06-25T18:32:18.302202619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:32:18.321160 systemd-networkd[1347]: cali52c1725ee82: Link UP Jun 25 18:32:18.322965 systemd-networkd[1347]: cali52c1725ee82: Gained carrier Jun 25 18:32:18.330480 systemd[1]: Started cri-containerd-cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592.scope - libcontainer container cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592. 
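The dataplane_linux.go entries above ("Setting the host side veth name to cali9f19114dd8d", adding the MAC to the endpoint) correspond to creating a veth pair whose host end keeps the cali* name while the peer becomes the pod's eth0. A very rough sketch with the vishvananda/netlink package follows; the peer name, the error handling, and the omitted step of moving the peer into the pod's network namespace are simplifying assumptions, not the actual Calico code path, and the program needs root on Linux to run.

package main

import (
    "log"
    "net"

    "github.com/vishvananda/netlink"
)

func main() {
    veth := &netlink.Veth{
        LinkAttrs: netlink.LinkAttrs{Name: "cali9f19114dd8d"}, // host-side name from the log
        PeerName:  "eth0-tmp",                                 // would be renamed to eth0 inside the pod netns
    }
    if err := netlink.LinkAdd(veth); err != nil {
        log.Fatal(err)
    }
    if err := netlink.LinkSetUp(veth); err != nil {
        log.Fatal(err)
    }
    // The MAC recorded on the endpoint above ("ba:97:26:10:87:03") belongs to
    // the container-side interface; setting one explicitly looks like this.
    if peer, err := netlink.LinkByName("eth0-tmp"); err == nil {
        if hw, err := net.ParseMAC("ba:97:26:10:87:03"); err == nil {
            _ = netlink.LinkSetHardwareAddr(peer, hw)
        }
    }
    // Moving the peer into the pod's netns (netlink.LinkSetNsFd) is omitted here.
}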
Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.174 [INFO][5678] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-eth0 calico-apiserver-86c749c987- calico-apiserver 36e5ef8d-3d05-49e2-b01b-af01a3c201a1 873 0 2024-06-25 18:32:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86c749c987 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-a-5284b277fa calico-apiserver-86c749c987-xksn5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali52c1725ee82 [] []}} ContainerID="a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-xksn5" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-" Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.174 [INFO][5678] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-xksn5" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-eth0" Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.212 [INFO][5697] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" HandleID="k8s-pod-network.a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-eth0" Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.229 [INFO][5697] ipam_plugin.go 264: Auto assigning IP ContainerID="a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" HandleID="k8s-pod-network.a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fb2b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-a-5284b277fa", "pod":"calico-apiserver-86c749c987-xksn5", "timestamp":"2024-06-25 18:32:18.212587062 +0000 UTC"}, Hostname:"ci-4012.0.0-a-5284b277fa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.229 [INFO][5697] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.240 [INFO][5697] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.240 [INFO][5697] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-5284b277fa' Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.243 [INFO][5697] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.253 [INFO][5697] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.269 [INFO][5697] ipam.go 489: Trying affinity for 192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.273 [INFO][5697] ipam.go 155: Attempting to load block cidr=192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.276 [INFO][5697] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.276 [INFO][5697] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.278 [INFO][5697] ipam.go 1685: Creating new handle: k8s-pod-network.a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218 Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.290 [INFO][5697] ipam.go 1203: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.308 [INFO][5697] ipam.go 1216: Successfully claimed IPs: [192.168.80.134/26] block=192.168.80.128/26 handle="k8s-pod-network.a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.308 [INFO][5697] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.134/26] handle="k8s-pod-network.a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" host="ci-4012.0.0-a-5284b277fa" Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.308 [INFO][5697] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:32:18.342574 containerd[1730]: 2024-06-25 18:32:18.308 [INFO][5697] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.80.134/26] IPv6=[] ContainerID="a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" HandleID="k8s-pod-network.a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" Workload="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-eth0" Jun 25 18:32:18.343727 containerd[1730]: 2024-06-25 18:32:18.312 [INFO][5678] k8s.go 386: Populated endpoint ContainerID="a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-xksn5" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-eth0", GenerateName:"calico-apiserver-86c749c987-", Namespace:"calico-apiserver", SelfLink:"", UID:"36e5ef8d-3d05-49e2-b01b-af01a3c201a1", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c749c987", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"", Pod:"calico-apiserver-86c749c987-xksn5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52c1725ee82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:32:18.343727 containerd[1730]: 2024-06-25 18:32:18.312 [INFO][5678] k8s.go 387: Calico CNI using IPs: [192.168.80.134/32] ContainerID="a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-xksn5" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-eth0" Jun 25 18:32:18.343727 containerd[1730]: 2024-06-25 18:32:18.312 [INFO][5678] dataplane_linux.go 68: Setting the host side veth name to cali52c1725ee82 ContainerID="a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-xksn5" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-eth0" Jun 25 18:32:18.343727 containerd[1730]: 2024-06-25 18:32:18.323 [INFO][5678] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-xksn5" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-eth0" Jun 25 18:32:18.343727 containerd[1730]: 2024-06-25 18:32:18.325 [INFO][5678] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-xksn5" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-eth0", GenerateName:"calico-apiserver-86c749c987-", Namespace:"calico-apiserver", SelfLink:"", UID:"36e5ef8d-3d05-49e2-b01b-af01a3c201a1", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c749c987", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-5284b277fa", ContainerID:"a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218", Pod:"calico-apiserver-86c749c987-xksn5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52c1725ee82", MAC:"6a:52:65:1a:74:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:32:18.343727 containerd[1730]: 2024-06-25 18:32:18.339 [INFO][5678] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218" Namespace="calico-apiserver" Pod="calico-apiserver-86c749c987-xksn5" WorkloadEndpoint="ci--4012.0.0--a--5284b277fa-k8s-calico--apiserver--86c749c987--xksn5-eth0" Jun 25 18:32:18.394616 containerd[1730]: time="2024-06-25T18:32:18.394352901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:32:18.394616 containerd[1730]: time="2024-06-25T18:32:18.394463581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:32:18.394616 containerd[1730]: time="2024-06-25T18:32:18.394486461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:32:18.396159 containerd[1730]: time="2024-06-25T18:32:18.394499781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:32:18.419635 containerd[1730]: time="2024-06-25T18:32:18.419362916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c749c987-pchn9,Uid:ef5f2378-df88-451d-a380-439421dccb79,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592\"" Jun 25 18:32:18.422211 containerd[1730]: time="2024-06-25T18:32:18.421939361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:32:18.437478 systemd[1]: Started cri-containerd-a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218.scope - libcontainer container a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218. Jun 25 18:32:18.468151 containerd[1730]: time="2024-06-25T18:32:18.468098222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c749c987-xksn5,Uid:36e5ef8d-3d05-49e2-b01b-af01a3c201a1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218\"" Jun 25 18:32:19.962810 systemd-networkd[1347]: cali52c1725ee82: Gained IPv6LL Jun 25 18:32:19.964413 systemd-networkd[1347]: cali9f19114dd8d: Gained IPv6LL Jun 25 18:32:20.338425 containerd[1730]: time="2024-06-25T18:32:20.338287443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:32:20.340709 containerd[1730]: time="2024-06-25T18:32:20.340654528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jun 25 18:32:20.345337 containerd[1730]: time="2024-06-25T18:32:20.344113495Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:32:20.350212 containerd[1730]: time="2024-06-25T18:32:20.350169629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:32:20.352459 containerd[1730]: time="2024-06-25T18:32:20.352122473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 1.930148192s" Jun 25 18:32:20.352459 containerd[1730]: time="2024-06-25T18:32:20.352157153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 18:32:20.354368 containerd[1730]: time="2024-06-25T18:32:20.354181878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:32:20.355574 containerd[1730]: time="2024-06-25T18:32:20.355447240Z" level=info msg="CreateContainer within sandbox \"cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:32:20.389905 containerd[1730]: time="2024-06-25T18:32:20.389866636Z" level=info msg="CreateContainer within sandbox \"cd6110b61c94e63642d4b9f704529bc84dfbe6b2f9faf41776da78734020a592\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b286303c4964ded1e53891f1075d00891a38176a96846c607b87cd615117d679\"" Jun 25 18:32:20.390759 containerd[1730]: time="2024-06-25T18:32:20.390716238Z" level=info msg="StartContainer for \"b286303c4964ded1e53891f1075d00891a38176a96846c607b87cd615117d679\"" Jun 25 18:32:20.428633 systemd[1]: Started cri-containerd-b286303c4964ded1e53891f1075d00891a38176a96846c607b87cd615117d679.scope - libcontainer container b286303c4964ded1e53891f1075d00891a38176a96846c607b87cd615117d679. Jun 25 18:32:20.464465 containerd[1730]: time="2024-06-25T18:32:20.464419199Z" level=info msg="StartContainer for \"b286303c4964ded1e53891f1075d00891a38176a96846c607b87cd615117d679\" returns successfully" Jun 25 18:32:20.663242 containerd[1730]: time="2024-06-25T18:32:20.663190776Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:32:20.665880 containerd[1730]: time="2024-06-25T18:32:20.665845060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 18:32:20.667580 containerd[1730]: time="2024-06-25T18:32:20.667547622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 313.332664ms" Jun 25 18:32:20.667676 containerd[1730]: time="2024-06-25T18:32:20.667594422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 18:32:20.670844 containerd[1730]: time="2024-06-25T18:32:20.670813107Z" level=info msg="CreateContainer within sandbox \"a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:32:20.707542 containerd[1730]: time="2024-06-25T18:32:20.707493245Z" level=info msg="CreateContainer within sandbox \"a2fe1302c50016f2ed64b747f9919c8ac39ddb6fc625903f6c85db27ce1e7218\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"513c5e8c36bef3b85a881636509b9b7a9c1d26952e5de0909acfc6ea4804c071\"" Jun 25 18:32:20.708779 containerd[1730]: time="2024-06-25T18:32:20.708743167Z" level=info msg="StartContainer for \"513c5e8c36bef3b85a881636509b9b7a9c1d26952e5de0909acfc6ea4804c071\"" Jun 25 18:32:20.750977 systemd[1]: Started cri-containerd-513c5e8c36bef3b85a881636509b9b7a9c1d26952e5de0909acfc6ea4804c071.scope - libcontainer container 513c5e8c36bef3b85a881636509b9b7a9c1d26952e5de0909acfc6ea4804c071. 
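The sequence above (PullImage, CreateContainer within the sandbox, StartContainer returns successfully) is driven by the kubelet through containerd's CRI plugin. For orientation only, a minimal direct-client sketch of the same pull/create/start steps against the containerd socket might look like the following; the container and snapshot IDs are invented, the import paths assume the pre-v2 containerd Go client, and this is not the CRI code path the log reflects.

package main

import (
    "context"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/cio"
    "github.com/containerd/containerd/namespaces"
    "github.com/containerd/containerd/oci"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // kubelet-managed images and containers live in the k8s.io namespace.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.28.0", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }

    container, err := client.NewContainer(ctx, "calico-apiserver-example",
        containerd.WithImage(image),
        containerd.WithNewSnapshot("calico-apiserver-example-snap", image),
        containerd.WithNewSpec(oci.WithImageConfig(image)),
    )
    if err != nil {
        log.Fatal(err)
    }

    task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    if err != nil {
        log.Fatal(err)
    }
    if err := task.Start(ctx); err != nil { // analogous to "StartContainer ... returns successfully"
        log.Fatal(err)
    }
}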
Jun 25 18:32:20.799924 containerd[1730]: time="2024-06-25T18:32:20.799879869Z" level=info msg="StartContainer for \"513c5e8c36bef3b85a881636509b9b7a9c1d26952e5de0909acfc6ea4804c071\" returns successfully" Jun 25 18:32:21.250903 kubelet[3240]: I0625 18:32:21.250839 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86c749c987-xksn5" podStartSLOduration=3.052136137 podStartE2EDuration="5.250822055s" podCreationTimestamp="2024-06-25 18:32:16 +0000 UTC" firstStartedPulling="2024-06-25 18:32:18.469706866 +0000 UTC m=+84.624522271" lastFinishedPulling="2024-06-25 18:32:20.668392784 +0000 UTC m=+86.823208189" observedRunningTime="2024-06-25 18:32:21.250333134 +0000 UTC m=+87.405148579" watchObservedRunningTime="2024-06-25 18:32:21.250822055 +0000 UTC m=+87.405637500" Jun 25 18:32:22.265477 kubelet[3240]: I0625 18:32:22.265416 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86c749c987-pchn9" podStartSLOduration=4.333640286 podStartE2EDuration="6.265397841s" podCreationTimestamp="2024-06-25 18:32:16 +0000 UTC" firstStartedPulling="2024-06-25 18:32:18.421729881 +0000 UTC m=+84.576545326" lastFinishedPulling="2024-06-25 18:32:20.353487436 +0000 UTC m=+86.508302881" observedRunningTime="2024-06-25 18:32:21.266711719 +0000 UTC m=+87.421527164" watchObservedRunningTime="2024-06-25 18:32:22.265397841 +0000 UTC m=+88.420213286" Jun 25 18:32:56.628663 systemd[1]: run-containerd-runc-k8s.io-4ae646cba7b390c67d5e1f427a95767037f67d575d26bd9b713cf8cf3f6ecd8a-runc.ih086k.mount: Deactivated successfully. Jun 25 18:33:20.024271 systemd[1]: Started sshd@7-10.200.20.36:22-10.200.16.10:39446.service - OpenSSH per-connection server daemon (10.200.16.10:39446). Jun 25 18:33:20.474716 sshd[6071]: Accepted publickey for core from 10.200.16.10 port 39446 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:33:20.476758 sshd[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:20.481371 systemd-logind[1683]: New session 10 of user core. Jun 25 18:33:20.485442 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:33:20.914760 sshd[6071]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:20.919076 systemd[1]: sshd@7-10.200.20.36:22-10.200.16.10:39446.service: Deactivated successfully. Jun 25 18:33:20.919224 systemd-logind[1683]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:33:20.921190 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:33:20.924038 systemd-logind[1683]: Removed session 10. Jun 25 18:33:26.005267 systemd[1]: Started sshd@8-10.200.20.36:22-10.200.16.10:33034.service - OpenSSH per-connection server daemon (10.200.16.10:33034). Jun 25 18:33:26.481821 sshd[6102]: Accepted publickey for core from 10.200.16.10 port 33034 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:33:26.483053 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:26.486772 systemd-logind[1683]: New session 11 of user core. Jun 25 18:33:26.493463 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:33:26.900355 sshd[6102]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:26.903495 systemd[1]: sshd@8-10.200.20.36:22-10.200.16.10:33034.service: Deactivated successfully. Jun 25 18:33:26.905729 systemd[1]: session-11.scope: Deactivated successfully. 
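The kubelet pod_startup_latency_tracker entries above can be reproduced from the logged timestamps: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). For the xksn5 pod: 5.250822055s - (20.668392784 - 18.469706866)s = 3.052136137s, matching the log. A small re-computation in Go, with the timestamps rewritten in RFC 3339 form:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Timestamps copied from the kubelet entry for calico-apiserver-86c749c987-xksn5.
    created, _ := time.Parse(time.RFC3339, "2024-06-25T18:32:16Z")
    firstPull, _ := time.Parse(time.RFC3339Nano, "2024-06-25T18:32:18.469706866Z")
    lastPull, _ := time.Parse(time.RFC3339Nano, "2024-06-25T18:32:20.668392784Z")
    running, _ := time.Parse(time.RFC3339Nano, "2024-06-25T18:32:21.250822055Z")

    e2e := running.Sub(created)     // podStartE2EDuration: 5.250822055s
    pull := lastPull.Sub(firstPull) // image pull window: 2.198685918s
    slo := e2e - pull               // podStartSLOduration: 3.052136137s
    fmt.Println(e2e, pull, slo)
}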
Jun 25 18:33:26.907203 systemd-logind[1683]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:33:26.908973 systemd-logind[1683]: Removed session 11. Jun 25 18:33:31.986337 systemd[1]: Started sshd@9-10.200.20.36:22-10.200.16.10:33038.service - OpenSSH per-connection server daemon (10.200.16.10:33038). Jun 25 18:33:32.436983 sshd[6156]: Accepted publickey for core from 10.200.16.10 port 33038 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:33:32.438265 sshd[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:32.442837 systemd-logind[1683]: New session 12 of user core. Jun 25 18:33:32.451454 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:33:32.817547 sshd[6156]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:32.821367 systemd-logind[1683]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:33:32.821961 systemd[1]: sshd@9-10.200.20.36:22-10.200.16.10:33038.service: Deactivated successfully. Jun 25 18:33:32.824189 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:33:32.825663 systemd-logind[1683]: Removed session 12. Jun 25 18:33:32.916911 systemd[1]: Started sshd@10-10.200.20.36:22-10.200.16.10:33042.service - OpenSSH per-connection server daemon (10.200.16.10:33042). Jun 25 18:33:33.392076 sshd[6170]: Accepted publickey for core from 10.200.16.10 port 33042 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:33:33.393604 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:33.397454 systemd-logind[1683]: New session 13 of user core. Jun 25 18:33:33.405439 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:33:33.830167 sshd[6170]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:33.833424 systemd[1]: sshd@10-10.200.20.36:22-10.200.16.10:33042.service: Deactivated successfully. Jun 25 18:33:33.837013 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:33:33.838023 systemd-logind[1683]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:33:33.839308 systemd-logind[1683]: Removed session 13. Jun 25 18:33:33.910022 systemd[1]: Started sshd@11-10.200.20.36:22-10.200.16.10:33052.service - OpenSSH per-connection server daemon (10.200.16.10:33052). Jun 25 18:33:34.353981 sshd[6180]: Accepted publickey for core from 10.200.16.10 port 33052 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:33:34.355466 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:34.363509 systemd-logind[1683]: New session 14 of user core. Jun 25 18:33:34.368474 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:33:34.752736 sshd[6180]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:34.756606 systemd[1]: sshd@11-10.200.20.36:22-10.200.16.10:33052.service: Deactivated successfully. Jun 25 18:33:34.756856 systemd-logind[1683]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:33:34.759543 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:33:34.760694 systemd-logind[1683]: Removed session 14. Jun 25 18:33:39.836981 systemd[1]: Started sshd@12-10.200.20.36:22-10.200.16.10:40828.service - OpenSSH per-connection server daemon (10.200.16.10:40828). 
Jun 25 18:33:40.289779 sshd[6205]: Accepted publickey for core from 10.200.16.10 port 40828 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:33:40.291159 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:40.295480 systemd-logind[1683]: New session 15 of user core. Jun 25 18:33:40.302517 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:33:40.687334 sshd[6205]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:40.690490 systemd[1]: sshd@12-10.200.20.36:22-10.200.16.10:40828.service: Deactivated successfully. Jun 25 18:33:40.692048 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:33:40.692899 systemd-logind[1683]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:33:40.694116 systemd-logind[1683]: Removed session 15. Jun 25 18:33:45.781561 systemd[1]: Started sshd@13-10.200.20.36:22-10.200.16.10:43140.service - OpenSSH per-connection server daemon (10.200.16.10:43140). Jun 25 18:33:46.260226 sshd[6248]: Accepted publickey for core from 10.200.16.10 port 43140 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:33:46.261706 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:46.266783 systemd-logind[1683]: New session 16 of user core. Jun 25 18:33:46.278539 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:33:46.669064 sshd[6248]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:46.672440 systemd-logind[1683]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:33:46.672974 systemd[1]: sshd@13-10.200.20.36:22-10.200.16.10:43140.service: Deactivated successfully. Jun 25 18:33:46.674756 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:33:46.675885 systemd-logind[1683]: Removed session 16. Jun 25 18:33:51.757619 systemd[1]: Started sshd@14-10.200.20.36:22-10.200.16.10:43150.service - OpenSSH per-connection server daemon (10.200.16.10:43150). Jun 25 18:33:52.202345 sshd[6262]: Accepted publickey for core from 10.200.16.10 port 43150 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:33:52.203834 sshd[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:52.208815 systemd-logind[1683]: New session 17 of user core. Jun 25 18:33:52.213469 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:33:52.588520 sshd[6262]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:52.592442 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:33:52.593386 systemd[1]: sshd@14-10.200.20.36:22-10.200.16.10:43150.service: Deactivated successfully. Jun 25 18:33:52.597210 systemd-logind[1683]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:33:52.598127 systemd-logind[1683]: Removed session 17. Jun 25 18:33:52.679922 systemd[1]: Started sshd@15-10.200.20.36:22-10.200.16.10:43158.service - OpenSSH per-connection server daemon (10.200.16.10:43158). Jun 25 18:33:53.168742 sshd[6275]: Accepted publickey for core from 10.200.16.10 port 43158 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:33:53.170045 sshd[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:53.173842 systemd-logind[1683]: New session 18 of user core. Jun 25 18:33:53.181728 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 18:33:53.702092 sshd[6275]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:53.705812 systemd[1]: sshd@15-10.200.20.36:22-10.200.16.10:43158.service: Deactivated successfully. Jun 25 18:33:53.707572 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:33:53.708229 systemd-logind[1683]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:33:53.709159 systemd-logind[1683]: Removed session 18. Jun 25 18:33:53.787538 systemd[1]: Started sshd@16-10.200.20.36:22-10.200.16.10:43174.service - OpenSSH per-connection server daemon (10.200.16.10:43174). Jun 25 18:33:54.229939 sshd[6286]: Accepted publickey for core from 10.200.16.10 port 43174 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:33:54.231258 sshd[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:54.235025 systemd-logind[1683]: New session 19 of user core. Jun 25 18:33:54.239463 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:33:56.165791 sshd[6286]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:56.169429 systemd[1]: sshd@16-10.200.20.36:22-10.200.16.10:43174.service: Deactivated successfully. Jun 25 18:33:56.171507 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:33:56.172738 systemd-logind[1683]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:33:56.173725 systemd-logind[1683]: Removed session 19. Jun 25 18:33:56.248417 systemd[1]: Started sshd@17-10.200.20.36:22-10.200.16.10:37326.service - OpenSSH per-connection server daemon (10.200.16.10:37326). Jun 25 18:33:56.699073 sshd[6311]: Accepted publickey for core from 10.200.16.10 port 37326 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:33:56.700465 sshd[6311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:56.704345 systemd-logind[1683]: New session 20 of user core. Jun 25 18:33:56.710468 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:33:57.200844 sshd[6311]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:57.204896 systemd[1]: sshd@17-10.200.20.36:22-10.200.16.10:37326.service: Deactivated successfully. Jun 25 18:33:57.207378 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:33:57.208402 systemd-logind[1683]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:33:57.209673 systemd-logind[1683]: Removed session 20. Jun 25 18:33:57.282952 systemd[1]: Started sshd@18-10.200.20.36:22-10.200.16.10:37340.service - OpenSSH per-connection server daemon (10.200.16.10:37340). Jun 25 18:33:57.731405 sshd[6342]: Accepted publickey for core from 10.200.16.10 port 37340 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:33:57.732868 sshd[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:57.736679 systemd-logind[1683]: New session 21 of user core. Jun 25 18:33:57.742735 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:33:58.136569 sshd[6342]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:58.140994 systemd-logind[1683]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:33:58.141332 systemd[1]: sshd@18-10.200.20.36:22-10.200.16.10:37340.service: Deactivated successfully. Jun 25 18:33:58.143840 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:33:58.144972 systemd-logind[1683]: Removed session 21. 
Jun 25 18:34:03.227592 systemd[1]: Started sshd@19-10.200.20.36:22-10.200.16.10:37356.service - OpenSSH per-connection server daemon (10.200.16.10:37356). Jun 25 18:34:03.714380 sshd[6359]: Accepted publickey for core from 10.200.16.10 port 37356 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:34:03.715783 sshd[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:34:03.722004 systemd-logind[1683]: New session 22 of user core. Jun 25 18:34:03.725657 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:34:04.125430 sshd[6359]: pam_unix(sshd:session): session closed for user core Jun 25 18:34:04.129687 systemd[1]: sshd@19-10.200.20.36:22-10.200.16.10:37356.service: Deactivated successfully. Jun 25 18:34:04.132077 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:34:04.132981 systemd-logind[1683]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:34:04.134176 systemd-logind[1683]: Removed session 22. Jun 25 18:34:09.217587 systemd[1]: Started sshd@20-10.200.20.36:22-10.200.16.10:48030.service - OpenSSH per-connection server daemon (10.200.16.10:48030). Jun 25 18:34:09.700821 sshd[6380]: Accepted publickey for core from 10.200.16.10 port 48030 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:34:09.702179 sshd[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:34:09.707191 systemd-logind[1683]: New session 23 of user core. Jun 25 18:34:09.714484 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:34:10.124544 sshd[6380]: pam_unix(sshd:session): session closed for user core Jun 25 18:34:10.127988 systemd[1]: sshd@20-10.200.20.36:22-10.200.16.10:48030.service: Deactivated successfully. Jun 25 18:34:10.129881 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:34:10.130704 systemd-logind[1683]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:34:10.131737 systemd-logind[1683]: Removed session 23. Jun 25 18:34:15.206047 systemd[1]: Started sshd@21-10.200.20.36:22-10.200.16.10:53674.service - OpenSSH per-connection server daemon (10.200.16.10:53674). Jun 25 18:34:15.656116 sshd[6418]: Accepted publickey for core from 10.200.16.10 port 53674 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:34:15.657519 sshd[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:34:15.661665 systemd-logind[1683]: New session 24 of user core. Jun 25 18:34:15.667494 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:34:16.055286 sshd[6418]: pam_unix(sshd:session): session closed for user core Jun 25 18:34:16.059187 systemd[1]: sshd@21-10.200.20.36:22-10.200.16.10:53674.service: Deactivated successfully. Jun 25 18:34:16.061568 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:34:16.063018 systemd-logind[1683]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:34:16.064532 systemd-logind[1683]: Removed session 24. Jun 25 18:34:21.141400 systemd[1]: Started sshd@22-10.200.20.36:22-10.200.16.10:53680.service - OpenSSH per-connection server daemon (10.200.16.10:53680). Jun 25 18:34:21.596981 sshd[6440]: Accepted publickey for core from 10.200.16.10 port 53680 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:34:21.598592 sshd[6440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:34:21.603067 systemd-logind[1683]: New session 25 of user core. 
Jun 25 18:34:21.611538 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:34:21.995573 sshd[6440]: pam_unix(sshd:session): session closed for user core Jun 25 18:34:21.998333 systemd-logind[1683]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:34:21.999907 systemd[1]: sshd@22-10.200.20.36:22-10.200.16.10:53680.service: Deactivated successfully. Jun 25 18:34:22.002051 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:34:22.003357 systemd-logind[1683]: Removed session 25. Jun 25 18:34:27.084228 systemd[1]: Started sshd@23-10.200.20.36:22-10.200.16.10:44624.service - OpenSSH per-connection server daemon (10.200.16.10:44624). Jun 25 18:34:27.567594 sshd[6478]: Accepted publickey for core from 10.200.16.10 port 44624 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:34:27.569017 sshd[6478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:34:27.573617 systemd-logind[1683]: New session 26 of user core. Jun 25 18:34:27.576549 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 18:34:27.979295 sshd[6478]: pam_unix(sshd:session): session closed for user core Jun 25 18:34:27.983050 systemd-logind[1683]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:34:27.984043 systemd[1]: sshd@23-10.200.20.36:22-10.200.16.10:44624.service: Deactivated successfully. Jun 25 18:34:27.986002 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:34:27.987187 systemd-logind[1683]: Removed session 26. Jun 25 18:34:33.066586 systemd[1]: Started sshd@24-10.200.20.36:22-10.200.16.10:44640.service - OpenSSH per-connection server daemon (10.200.16.10:44640). Jun 25 18:34:33.508153 sshd[6517]: Accepted publickey for core from 10.200.16.10 port 44640 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:34:33.509561 sshd[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:34:33.515344 systemd-logind[1683]: New session 27 of user core. Jun 25 18:34:33.520504 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 18:34:33.892483 sshd[6517]: pam_unix(sshd:session): session closed for user core Jun 25 18:34:33.895429 systemd-logind[1683]: Session 27 logged out. Waiting for processes to exit. Jun 25 18:34:33.895678 systemd[1]: sshd@24-10.200.20.36:22-10.200.16.10:44640.service: Deactivated successfully. Jun 25 18:34:33.898057 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 18:34:33.899880 systemd-logind[1683]: Removed session 27. Jun 25 18:34:48.093463 systemd[1]: cri-containerd-a6cbcb5cb6e45c31cf95a5e59184d8798d99642aa750aa340f44c45840306ac5.scope: Deactivated successfully. Jun 25 18:34:48.093719 systemd[1]: cri-containerd-a6cbcb5cb6e45c31cf95a5e59184d8798d99642aa750aa340f44c45840306ac5.scope: Consumed 4.503s CPU time, 21.6M memory peak, 0B memory swap peak. 
Jun 25 18:34:48.120415 containerd[1730]: time="2024-06-25T18:34:48.119965628Z" level=info msg="shim disconnected" id=a6cbcb5cb6e45c31cf95a5e59184d8798d99642aa750aa340f44c45840306ac5 namespace=k8s.io Jun 25 18:34:48.120415 containerd[1730]: time="2024-06-25T18:34:48.120027868Z" level=warning msg="cleaning up after shim disconnected" id=a6cbcb5cb6e45c31cf95a5e59184d8798d99642aa750aa340f44c45840306ac5 namespace=k8s.io Jun 25 18:34:48.120415 containerd[1730]: time="2024-06-25T18:34:48.120035988Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:34:48.120257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6cbcb5cb6e45c31cf95a5e59184d8798d99642aa750aa340f44c45840306ac5-rootfs.mount: Deactivated successfully. Jun 25 18:34:48.384237 systemd[1]: cri-containerd-2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85.scope: Deactivated successfully. Jun 25 18:34:48.384507 systemd[1]: cri-containerd-2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85.scope: Consumed 5.313s CPU time. Jun 25 18:34:48.405116 containerd[1730]: time="2024-06-25T18:34:48.404995864Z" level=info msg="shim disconnected" id=2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85 namespace=k8s.io Jun 25 18:34:48.405116 containerd[1730]: time="2024-06-25T18:34:48.405051984Z" level=warning msg="cleaning up after shim disconnected" id=2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85 namespace=k8s.io Jun 25 18:34:48.405116 containerd[1730]: time="2024-06-25T18:34:48.405060824Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:34:48.406737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85-rootfs.mount: Deactivated successfully. 
Jun 25 18:34:48.517841 kubelet[3240]: I0625 18:34:48.517623 3240 scope.go:117] "RemoveContainer" containerID="2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85" Jun 25 18:34:48.520702 kubelet[3240]: I0625 18:34:48.520329 3240 scope.go:117] "RemoveContainer" containerID="a6cbcb5cb6e45c31cf95a5e59184d8798d99642aa750aa340f44c45840306ac5" Jun 25 18:34:48.521657 containerd[1730]: time="2024-06-25T18:34:48.520508200Z" level=info msg="CreateContainer within sandbox \"c80e8d1a5ce37c3c42dc044a2fccc571d60d8f9de86dce518d995b02b0871490\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 25 18:34:48.523228 containerd[1730]: time="2024-06-25T18:34:48.523189084Z" level=info msg="CreateContainer within sandbox \"9d247b654e0c0589269e96829d97873ec41450bbc38cde938d2d5e8a520121e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 25 18:34:48.567464 containerd[1730]: time="2024-06-25T18:34:48.567412752Z" level=info msg="CreateContainer within sandbox \"c80e8d1a5ce37c3c42dc044a2fccc571d60d8f9de86dce518d995b02b0871490\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4456fbebd19ec4cf2718126d9b44915c9b912c5b30212b09f25a4b8a0bfb4b11\"" Jun 25 18:34:48.568380 containerd[1730]: time="2024-06-25T18:34:48.567860433Z" level=info msg="StartContainer for \"4456fbebd19ec4cf2718126d9b44915c9b912c5b30212b09f25a4b8a0bfb4b11\"" Jun 25 18:34:48.571581 containerd[1730]: time="2024-06-25T18:34:48.571381358Z" level=info msg="CreateContainer within sandbox \"9d247b654e0c0589269e96829d97873ec41450bbc38cde938d2d5e8a520121e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d4f6aa7f33b2cc2870a7938b938f7779784a5c092df93aa56a080b37a6a0ac20\"" Jun 25 18:34:48.571875 containerd[1730]: time="2024-06-25T18:34:48.571848639Z" level=info msg="StartContainer for \"d4f6aa7f33b2cc2870a7938b938f7779784a5c092df93aa56a080b37a6a0ac20\"" Jun 25 18:34:48.595494 systemd[1]: Started cri-containerd-4456fbebd19ec4cf2718126d9b44915c9b912c5b30212b09f25a4b8a0bfb4b11.scope - libcontainer container 4456fbebd19ec4cf2718126d9b44915c9b912c5b30212b09f25a4b8a0bfb4b11. Jun 25 18:34:48.600496 systemd[1]: Started cri-containerd-d4f6aa7f33b2cc2870a7938b938f7779784a5c092df93aa56a080b37a6a0ac20.scope - libcontainer container d4f6aa7f33b2cc2870a7938b938f7779784a5c092df93aa56a080b37a6a0ac20. 
Jun 25 18:34:48.634472 containerd[1730]: time="2024-06-25T18:34:48.634209854Z" level=info msg="StartContainer for \"4456fbebd19ec4cf2718126d9b44915c9b912c5b30212b09f25a4b8a0bfb4b11\" returns successfully" Jun 25 18:34:48.649041 containerd[1730]: time="2024-06-25T18:34:48.648988837Z" level=info msg="StartContainer for \"d4f6aa7f33b2cc2870a7938b938f7779784a5c092df93aa56a080b37a6a0ac20\" returns successfully" Jun 25 18:34:48.898472 kubelet[3240]: E0625 18:34:48.898420 3240 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-5284b277fa?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 18:34:49.339697 kubelet[3240]: E0625 18:34:49.339584 3240 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.36:38816->10.200.20.10:2379: read: connection timed out" Jun 25 18:34:53.883634 kubelet[3240]: E0625 18:34:53.883484 3240 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.36:38590->10.200.20.10:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4012.0.0-a-5284b277fa.17dc530f6a0e8f98 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4012.0.0-a-5284b277fa,UID:eb4f608165760b4d05def7e2edfddf50,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4012.0.0-a-5284b277fa,},FirstTimestamp:2024-06-25 18:34:43.417640856 +0000 UTC m=+229.572456301,LastTimestamp:2024-06-25 18:34:43.417640856 +0000 UTC m=+229.572456301,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.0.0-a-5284b277fa,}" Jun 25 18:34:54.607690 systemd[1]: cri-containerd-00c71c73e32a6f33d8e63c5579462ef9a7870a239e405fe66404e7c69b6163a1.scope: Deactivated successfully. Jun 25 18:34:54.607954 systemd[1]: cri-containerd-00c71c73e32a6f33d8e63c5579462ef9a7870a239e405fe66404e7c69b6163a1.scope: Consumed 1.884s CPU time, 16.0M memory peak, 0B memory swap peak. Jun 25 18:34:54.628955 containerd[1730]: time="2024-06-25T18:34:54.628899746Z" level=info msg="shim disconnected" id=00c71c73e32a6f33d8e63c5579462ef9a7870a239e405fe66404e7c69b6163a1 namespace=k8s.io Jun 25 18:34:54.628972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00c71c73e32a6f33d8e63c5579462ef9a7870a239e405fe66404e7c69b6163a1-rootfs.mount: Deactivated successfully. 
Jun 25 18:34:54.629826 containerd[1730]: time="2024-06-25T18:34:54.629432667Z" level=warning msg="cleaning up after shim disconnected" id=00c71c73e32a6f33d8e63c5579462ef9a7870a239e405fe66404e7c69b6163a1 namespace=k8s.io Jun 25 18:34:54.629826 containerd[1730]: time="2024-06-25T18:34:54.629451867Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:34:55.566848 kubelet[3240]: I0625 18:34:55.566812 3240 scope.go:117] "RemoveContainer" containerID="00c71c73e32a6f33d8e63c5579462ef9a7870a239e405fe66404e7c69b6163a1" Jun 25 18:34:55.568864 containerd[1730]: time="2024-06-25T18:34:55.568824733Z" level=info msg="CreateContainer within sandbox \"752610c1ce9e5b08fae10e03da275b7f92918bd2c962605538ea18c63da29f6f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 25 18:34:55.604180 containerd[1730]: time="2024-06-25T18:34:55.604131193Z" level=info msg="CreateContainer within sandbox \"752610c1ce9e5b08fae10e03da275b7f92918bd2c962605538ea18c63da29f6f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7ba4845926120f095ef52909794e672bf1170d11a6ff6c8c43504fc4e2e3bb8c\"" Jun 25 18:34:55.604635 containerd[1730]: time="2024-06-25T18:34:55.604606554Z" level=info msg="StartContainer for \"7ba4845926120f095ef52909794e672bf1170d11a6ff6c8c43504fc4e2e3bb8c\"" Jun 25 18:34:55.641477 systemd[1]: Started cri-containerd-7ba4845926120f095ef52909794e672bf1170d11a6ff6c8c43504fc4e2e3bb8c.scope - libcontainer container 7ba4845926120f095ef52909794e672bf1170d11a6ff6c8c43504fc4e2e3bb8c. Jun 25 18:34:55.674510 containerd[1730]: time="2024-06-25T18:34:55.674059391Z" level=info msg="StartContainer for \"7ba4845926120f095ef52909794e672bf1170d11a6ff6c8c43504fc4e2e3bb8c\" returns successfully" Jun 25 18:34:58.939356 kubelet[3240]: I0625 18:34:58.939144 3240 status_manager.go:853] "Failed to get status for pod" podUID="23458f76-9bf7-460c-b90c-4dd8c3b36cab" pod="tigera-operator/tigera-operator-76ff79f7fd-fkxh2" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.36:38708->10.200.20.10:2379: read: connection timed out" Jun 25 18:34:59.340649 kubelet[3240]: E0625 18:34:59.340290 3240 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-5284b277fa?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 18:34:59.902636 systemd[1]: cri-containerd-4456fbebd19ec4cf2718126d9b44915c9b912c5b30212b09f25a4b8a0bfb4b11.scope: Deactivated successfully. Jun 25 18:34:59.924175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4456fbebd19ec4cf2718126d9b44915c9b912c5b30212b09f25a4b8a0bfb4b11-rootfs.mount: Deactivated successfully. 
Jun 25 18:34:59.944515 containerd[1730]: time="2024-06-25T18:34:59.944388763Z" level=info msg="shim disconnected" id=4456fbebd19ec4cf2718126d9b44915c9b912c5b30212b09f25a4b8a0bfb4b11 namespace=k8s.io Jun 25 18:34:59.944515 containerd[1730]: time="2024-06-25T18:34:59.944507204Z" level=warning msg="cleaning up after shim disconnected" id=4456fbebd19ec4cf2718126d9b44915c9b912c5b30212b09f25a4b8a0bfb4b11 namespace=k8s.io Jun 25 18:34:59.944515 containerd[1730]: time="2024-06-25T18:34:59.944518364Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:35:00.579787 kubelet[3240]: I0625 18:35:00.579532 3240 scope.go:117] "RemoveContainer" containerID="2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85" Jun 25 18:35:00.580160 kubelet[3240]: I0625 18:35:00.579831 3240 scope.go:117] "RemoveContainer" containerID="4456fbebd19ec4cf2718126d9b44915c9b912c5b30212b09f25a4b8a0bfb4b11" Jun 25 18:35:00.580160 kubelet[3240]: E0625 18:35:00.580102 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76ff79f7fd-fkxh2_tigera-operator(23458f76-9bf7-460c-b90c-4dd8c3b36cab)\"" pod="tigera-operator/tigera-operator-76ff79f7fd-fkxh2" podUID="23458f76-9bf7-460c-b90c-4dd8c3b36cab" Jun 25 18:35:00.581193 containerd[1730]: time="2024-06-25T18:35:00.581130999Z" level=info msg="RemoveContainer for \"2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85\"" Jun 25 18:35:00.589040 containerd[1730]: time="2024-06-25T18:35:00.588996972Z" level=info msg="RemoveContainer for \"2ea5a390c1a81996947ecfb0346173ab0d01602570a18115c7f23ec7e9643c85\" returns successfully" Jun 25 18:35:09.341023 kubelet[3240]: E0625 18:35:09.340935 3240 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-5284b277fa?timeout=10s\": context deadline exceeded" Jun 25 18:35:13.978238 kubelet[3240]: I0625 18:35:13.977877 3240 scope.go:117] "RemoveContainer" containerID="4456fbebd19ec4cf2718126d9b44915c9b912c5b30212b09f25a4b8a0bfb4b11" Jun 25 18:35:13.982100 containerd[1730]: time="2024-06-25T18:35:13.981749243Z" level=info msg="CreateContainer within sandbox \"c80e8d1a5ce37c3c42dc044a2fccc571d60d8f9de86dce518d995b02b0871490\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Jun 25 18:35:14.013703 containerd[1730]: time="2024-06-25T18:35:14.013654297Z" level=info msg="CreateContainer within sandbox \"c80e8d1a5ce37c3c42dc044a2fccc571d60d8f9de86dce518d995b02b0871490\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"fc6e71231f7744246d562d9ebc90c552d302b35a68f4079ca5ba3d4b54f98e4b\"" Jun 25 18:35:14.014231 containerd[1730]: time="2024-06-25T18:35:14.014197058Z" level=info msg="StartContainer for \"fc6e71231f7744246d562d9ebc90c552d302b35a68f4079ca5ba3d4b54f98e4b\"" Jun 25 18:35:14.041054 systemd[1]: run-containerd-runc-k8s.io-fc6e71231f7744246d562d9ebc90c552d302b35a68f4079ca5ba3d4b54f98e4b-runc.IgonBg.mount: Deactivated successfully. Jun 25 18:35:14.052511 systemd[1]: Started cri-containerd-fc6e71231f7744246d562d9ebc90c552d302b35a68f4079ca5ba3d4b54f98e4b.scope - libcontainer container fc6e71231f7744246d562d9ebc90c552d302b35a68f4079ca5ba3d4b54f98e4b. 
Jun 25 18:35:14.086654 containerd[1730]: time="2024-06-25T18:35:14.086307899Z" level=info msg="StartContainer for \"fc6e71231f7744246d562d9ebc90c552d302b35a68f4079ca5ba3d4b54f98e4b\" returns successfully" Jun 25 18:35:19.341348 kubelet[3240]: E0625 18:35:19.341194 3240 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-5284b277fa?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jun 25 18:35:19.341348 kubelet[3240]: I0625 18:35:19.341240 3240 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jun 25 18:35:20.664258 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.664623 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.682481 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.682754 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.698470 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.698692 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.713364 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.713585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.736277 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.736490 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.752568 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.752800 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.769215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.769439 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.777653 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.793918 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.805713 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#304 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.805977 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#305 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.822179 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#306 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.822548 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.838652 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#307 cmd 0x2a status: scsi 
0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.838925 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.854856 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#308 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.855279 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#309 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.862948 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.879304 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#310 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.879699 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.895921 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#311 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.896290 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.912676 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.913053 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.929534 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.941894 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#304 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.942162 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#305 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.958336 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#306 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.958592 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#307 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.975209 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#308 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.975492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#309 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.992084 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#310 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:20.992371 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#311 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.008599 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#312 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.017263 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#313 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.017634 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.033632 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#314 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.033901 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#315 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.048806 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.049119 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Jun 25 18:35:21.065960 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#256 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.066344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.082479 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#257 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.083128 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.098429 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#258 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.098747 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.106334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#259 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.122506 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.122747 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#260 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.139834 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.140182 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.156900 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.165602 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.165934 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.182947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.183329 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.199615 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.199885 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.214961 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.215250 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.230700 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.239050 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.239429 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.254135 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.254412 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#304 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.271230 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#305 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 
18:35:21.271502 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#306 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.287884 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#307 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.288184 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#308 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.305197 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#309 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.305650 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#310 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.322469 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#311 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.322874 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#312 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.340104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#313 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.340470 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#314 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.356259 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#315 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.356605 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#256 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.371589 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#257 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.371902 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#55 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.379343 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#258 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.394592 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#259 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.402325 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#260 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.402544 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.418013 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.418340 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.436479 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.436754 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.454104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.454444 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.472066 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#268 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.472403 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#269 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.489654 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#270 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.489996 
kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#272 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.505795 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#271 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.506084 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#273 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.522576 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#275 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.531078 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#274 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.531444 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#276 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.548030 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#278 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.548298 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#277 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.566220 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.566565 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.583074 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.583410 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.601764 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.602123 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.611242 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.628073 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.628536 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.643713 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#288 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.643982 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#289 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.661406 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#290 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.670385 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#291 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.670675 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#292 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.689210 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#293 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.689625 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#294 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.707392 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#295 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.707697 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#296 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.723101 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#297 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.723521 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#298 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.739740 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#299 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 18:35:21.740031 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#300 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001